[SOLVED] Failed to open stream: No space left on device

I was in the process of requesting assistance when I solved the problem, so I’m leaving this here to help other unfortunates - including me in 6 months when I’ve forgotten everything :sweat_smile:

Nextcloud version (eg, 20.0.5): 25.0.13
Operating system and version (eg, Ubuntu 20.04): Debian 10
Apache or nginx version (eg, Apache 2.4.25): nginx version: nginx/1.23.1
PHP version (eg, 7.4): 8.1

The issue you are facing:

Users are unable to save or copy documents via WebDAV, but they can open documents via WebDAV and open/save documents via Collabora Office.

The logs indicate: Sabre\DAV\Exception and Sabre\DAV\Exception\BadRequest

The server is set to apply automatic security updates only. The last round of unattended upgrades included updates for PHP 8.2 & 8.3, but nothing for PHP 8.1, which is the version Nextcloud is using.

The errors indicate that the disk is out of space, but that is not the case: 45% of the disk space is free.

Is this the first time you’ve seen this error? (Y/N): Yes

Steps to replicate it:

  1. Open file, edit file, save file
  2. Panic
  3. Consider your life choices

The output of your Nextcloud log in Admin > Logging:

Error while copying file to target location (copied: -1 byte, expected filesize: 99 bytes)
"message":"Expected filesize of 0 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) -1 byte. Could either be a network problem on the sending side or a problem writing to the storage on the server side.","userAgent":"RaiDrive/2023.9.90.0","version":"25.0.13.2","exception":{"Exception":"Sabre\\DAV\\Exception\\BadRequest","Message":"Expected filesize of 0 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) -1 byte. Could either be a network problem on the sending side or a problem writing to the storage on the server side."

The “-1” bytes written is, I think, the important clue, as I’ve never seen that figure even when there genuinely has been a problem writing a file or a network issue. To be clear, this IS NOT a Nextcloud issue… The data storage disk for NC in this case also held a BackupPC pool with many, many backups, and the RAID array had simply run out of inodes. The fix was to reduce the number of backup snapshots and let BackupPC remove a bunch of files.

The bytes-per-inode ratio is set when the filesystem is created (at mkfs time) and cannot be changed after the fact… So dd-ing the drive to a bigger disk and growing the filesystem only adds inodes in proportion to the added space and won’t fix the underlying ratio problem. You will need to remove some files (delete or move them elsewhere) to free up inodes and get the disk functional again.
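If you’re curious what your filesystem’s inode budget actually is, `tune2fs` will show it on ext2/3/4. A sketch, with assumptions: `/dev/md1` is the device from my setup (substitute your own), the filesystem is ext-family, and the 4 KiB ratio in the comment is just an illustration, not a recommendation:

```shell
# Show the inode totals baked in at mkfs time (ext2/3/4 only; tune2fs is
# part of e2fsprogs and usually needs root on a real block device).
# /dev/md1 is the device from my setup; substitute your own.
tune2fs -l /dev/md1 | grep -i 'inode count'

# If you ever rebuild the filesystem, you can reserve more inodes up front,
# e.g. one inode per 4 KiB instead of the ext4 default of one per 16 KiB:
#   mkfs.ext4 -i 4096 /dev/md1
```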

df -i is your saviour here, as it shows inode usage in the same way df shows disk space used.
e.g.

Filesystem        Inodes     IUsed    IFree IUse% Mounted on
...
/dev/md1     244187136 244142463 44713   100% /media/storage
...

BackupPC keeps its pool deduplicated via masses of hard links, and every backup adds an entire directory tree of them, which is why I’d run out of inodes. If you don’t know where your inodes have gone, try this:

du --inodes -d 3 /media/storage | sort -n | tail

Change /media/storage to / or whatever volume you’re investigating. This will probably take some time, so be prepared to wait a while… but it should show where your inodes have gone.
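If your `du` predates `--inodes` (it appeared in GNU coreutils 8.22), a rough fallback is to count paths with `find`. Beware that this counts hard-linked files once per link, so it will overstate a BackupPC pool, but it still shows which directories are the busy ones (`/media/storage` is from my setup; substitute your own):

```shell
# Rough per-directory entry count when du --inodes is unavailable.
# Counts every path under each top-level directory; hard-linked files are
# counted once per link (unlike du --inodes, which counts each inode once).
for d in /media/storage/*/; do
    printf '%s %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -n | tail
```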

Good luck :slight_smile: