No disk space left on server, please contact the server administrator to continue

Hi, so I ran into this problem, but I can’t find any useful logs. The only thing I find is something about SSL handshakes, which does not seem to be related to the problem.

Here is the log:

The entry I found in your log is:

wsd-00029-00041 10:13:11.988222 [ websrv_poll ] WRN File system of [/opt/lool/child-roots/.] is dangerously low on disk space

I don’t have that folder, which is why I ignored it :wink:

EDIT: Hmm, awkward. I ran docker system prune (this deletes all stopped containers, dangling images, unused networks and the build cache; it can remove things you still need, so use with caution)

and it started working again. But df -h did not report any disk as full (most disks are at 80%; the only one near 100% is the boot partition).
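
One more thing worth checking in a situation like this (my suggestion, not something from the thread above): “no space left” errors can also come from inode exhaustion, which df -h does not show. A minimal sketch:

```shell
# df -h shows free blocks, but a filesystem can also run out of inodes
# (e.g. from huge numbers of small files); check both views:
df -h /
df -i /
```

If IUse% in the second output is at 100%, the partition is out of inodes even though df -h still reports free space.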

Docker is always a good candidate for eating up disk space.

You can try docker system prune --volumes --force.

or find the folder that is eating the space with ncdu.
I’m sure it’s /var/lib/docker
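
If ncdu is not installed, plain du gives a similar answer; a quick sketch (the /var/lib/docker path is the usual suspect mentioned above, adjust it for your system and add sudo if you hit permission errors):

```shell
# Rank the largest entries under /var/lib/docker by size,
# biggest first; permission errors are silenced.
du -h --max-depth=1 /var/lib/docker 2>/dev/null | sort -rh | head -n 10
```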

Hmm, the same problem appeared again, but now it says that the reclaimed space is 0B.

Install ncdu and find out which folder is eating your disk space.

The thing is, the root disk (/) has 5GB free according to df. (I know that is not much, but it should be enough, no?)

Of course!

Hmm, so I deleted some stuff and now I have 18GB free, and it works. Maybe there is some threshold, like 10GB, below which it refuses to work?

No. I have had it running with 0 bytes free, for example. It runs in memory.

RAM is not a problem either (unless Docker limits RAM usage); I have 10GB of RAM free. Pretty strange. I just recreated the Docker image and now it is working again, hopefully for a long time :smiley:

You just need to prune Docker from time to time.

That’s the thing, it said that it reclaimed 0B (so nothing was deleted) :frowning:

Did you execute that command?
docker system prune --volumes --force


$ sudo docker system prune --volumes --force
Total reclaimed space: 0B

That command brought me more than 20 GB disk space back.

From time to time I stop the Docker container like this:

docker ps -a
docker stop xxxxxxxxxx
docker rm xxxxxxxxxx

and start it again.

Update to this topic, which helped me rubber-duck debug the same issue.

NC server
separate Collabora server

Plenty of space left on both globally.
df -h and ncdu showed that another app on the Collabora server was filling /var/log up to 99% of the partition.
Quick and dirty: find /var/log/* -mtime +30 -exec rm {} \;
–> where 30 is the number of days of logs you want to keep
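
A slightly safer variant of that cleanup (my suggestion, not from the post above): restrict the match to *.log files, list them first as a dry run, and only then delete:

```shell
# Dry run: list log files older than 30 days without deleting anything.
find /var/log -name '*.log' -mtime +30 -print 2>/dev/null
# When the list looks right, delete with find's built-in -delete,
# which avoids the quoting pitfalls of -exec rm:
# find /var/log -name '*.log' -mtime +30 -delete
```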

70% storage free and collabora running again