Disk has 197G, but when it reaches 188G it is considered 100% full

Nextcloud version: 17.0.2
Operating system and version: Ubuntu 18.04.4 LTS
Apache or nginx version: Apache/2.4.29
PHP version: PHP 7.2.24

Hello everyone. The disk I mounted to store Nextcloud users' data is 197G, but once usage reaches 188G it is reported as 100% full.

Has anyone been through this or know how I can solve it?
Thank you!

Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg2-vm--113--disk--2   20G  5.1G   14G  28% /
/dev/loop0                        197G  188G     0 100% /dados-nextcloud
none                              492K     0  492K   0% /dev
tmpfs                              32G     0   32G   0% /dev/shm
tmpfs                              32G  112K   32G   1% /run
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              32G     0   32G   0% /sys/fs/cgroup

See screenshot: http://prntscr.com/uf6oll

This is because hard drives are not actually 197 G; they are a little less. It's like TVs: when you buy a 12-inch TV, it is actually about 11.5 inches.


It’s a general Linux question about the df and du commands. Running processes can still hold deleted files open, so that space is not freed yet. I'm not sure, but there may also be an issue with the block size: small files occupy at least one block each, so files much smaller than the block size take up more storage than their actual size.
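
To check the first point, a minimal sketch (assuming lsof is installed and you have root access) would be:

sudo lsof +L1                   # open files with link count 0, i.e. deleted but still held open
sudo lsof | grep '(deleted)'    # equivalent: filter on lsof's "(deleted)" marker

Restarting the processes listed there releases that space.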


Hi, thanks for your reply and for contributing.
I had heard this about TVs before, but I confess this is the first time I’ve heard it about hard drives.


Hi, thanks for your reply and for contributing.
To check for stale processes still holding files open, is lsof the correct tool to use?

But the theory about many small files being smaller than the block size defined on the disk makes a lot of sense.
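
To test the small-files theory, one hedged sketch (the path /dados-nextcloud is taken from the df output above) is to compare allocated size with apparent size:

stat -f /dados-nextcloud                        # prints the filesystem block size
sudo du -sh /dados-nextcloud                    # space actually allocated on disk
sudo du -sh --apparent-size /dados-nextcloud    # sum of the files' byte sizes

If the allocated figure is much larger than the apparent one, per-file block overhead is a real factor.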

When creating an ext2/3/4 filesystem, the number of blocks reserved for privileged processes is set to 5% by default. These reserved blocks reduce the available block count shown by df:

dummy@dummy:~# df -h | grep -E "(Filesystem|backup)"
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/backup         458G  280G  165G  63% /media/backup
dummy@dummy:~# tune2fs -m 30 /dev/mapper/backup
tune2fs 1.44.5 (15-Dec-2018)
Setting reserved blocks percentage to 30% (36627764 blocks)
dummy@dummy:~# df -h | grep -E "(Filesystem|backup)"
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/backup         458G  280G   39G  88% /media/backup

I suspect that this is the case for your /dev/loop0, since roughly 10GB are ‘missing’ on that drive, which would be 5% of 200GB.

This may apply to other filesystems as well.
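
As a sketch of how to verify this (assuming /dados-nextcloud is an ext2/3/4 filesystem and /dev/loop0 is its device, per the df output above):

sudo tune2fs -l /dev/loop0 | grep -i 'reserved block'   # show the current reservation

On a disk that only stores data (no system files), the reservation can usually be lowered, e.g. to 1%:

sudo tune2fs -m 1 /dev/loop0

Note that tune2fs only applies to ext-family filesystems.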


Hello, your theory makes a lot of sense. By now the space on the drive has already been freed, but next time I will look out for this detail.
Thanks for the contribution!


Also, for drive manufacturers, k = 1000, M = 10^6, G = 10^9, and so on. This makes their drives look larger compared to the other convention, where k = 1024, M = 1024^2 and G = 1024^3.

The 1024-based prefixes are the ones written Ki, Mi and Gi (KiB, MiB, GiB); the 1000-based ones keep the plain k, M, G.

You need to check which convention the df and du commands use.

[edit]
You need to use the -H switch with df to get values in SI units (1000; 1,000,000; etc.), or --si with du. These then compare better with the manufacturers’ claims.
[/edit]
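
For example (reusing /dados-nextcloud from above as an assumed path), the same filesystem can be reported both ways:

df -h /dados-nextcloud      # powers of 1024: "197G" here means 197 GiB
df -H /dados-nextcloud      # powers of 1000 (SI): the same disk shows roughly 212GB
du -s --si /dados-nextcloud

197 GiB is 197 × 1024^3 ≈ 211.5 × 10^9 bytes, which is the figure a manufacturer would print on the box.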