[NextCloudPi] storage limit 15 GB reached although using 240 GB SSD

I have been running a NextCloudPi instance since about August 2018. A few weeks after installation, I switched from the 16 GB internal SD card to a 240 GB SSD and activated the following features using the NCP GUI:

  • auto NC updates
  • auto NC apps updates (I don’t use any, though)
  • data dir at: /media/SSD240GB/ncdata
  • database dir at: /media/SSD240GB/ncdatabase
  • auto backups at: /media/SSD240GB/ncp-backups

With these settings, I successfully avoided hitting any SD write-cycle limitations for about 2.5 years.

One thing kept bothering me, though: when I switched from the internal SD (16 GB) to the USB SSD (240 GB) back in 2018, the “available” amount of storage shown in the NCP settings and status menus never changed from 16 GB to 240 GB. I was not concerned, though, as I thought it might update at some point.
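In hindsight, a quick way to check where the data directory really lives is to ask the kernel which filesystem backs it (the path below is the datadir from my NCP settings; the SSD device name is just an example):

```shell
# Show the filesystem that actually backs the Nextcloud data directory.
# If the SSD were mounted correctly, the device column would show the
# SSD partition (e.g. /dev/sda1), not /dev/root:
df -hT /media/SSD240GB/ncdata

# findmnt resolves a path to the mount point it lives on:
findmnt -T /media/SSD240GB/ncdata
```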

Now, after uploading a set of “big” pictures from one of the clients, I hit the former SD storage limit of ~15 GB. The connected client refuses to upload the files and only shows the message:

Connection closed

within the client’s GUI.


The NCP status page shows:

NextCloudPi version: v1.35.0
NextCloudPi image: NextCloudPi_03-04-19
distribution: Raspbian GNU/Linux 9 \n \l
automount: no
USB devices: sda
datadir: /media/SSD240GB/ncdata
data in SD: yes
data filesystem: ext2/ext3 (the SD is ext4)
data disk usage: 14G/15G
rootfs usage: 14G/15G
swapfile: /var/swap
dbdir: /media/SSD240GB/ncdatabase
Nextcloud check: ok
Nextcloud version:
HTTPD service: up
PHP service: up
MariaDB service: up
Redis service: up
Postfix service: up
internet check: ok
port check 80: open
port check 443: open
interface: eth0
certificates: …
NAT loopback: no
uptime: 30days

OS: Linux 4.19.66-v7+ armv7
Prozessor: ARMv7 Processor rev 4 (v7l) (4 cores)

PHP Version: 7.2.34

Database: mysql, v10.1.48, size: 92,3 MB

Is this the first time you’ve seen this error? (Y/N): Y

Steps to replicate it:

  1. upload any file of about 20 MB from any client

The output of your Nextcloud log in Admin > Logging:

[ the log does not finish loading in the NCP GUI ]

Could you share output of

sudo df -hT


cat /etc/fstab


ls -lh /media

You’re using an old image; my advice is to:

  • back up first, then upgrade to 10,
  • or back up first, install from the latest image, then restore.

Hi @OliverV , hi all,

thanks for your efforts so far. I’d like to post the answers to the previous message.

pi@RPi-11:~ $ sudo df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext4       15G     14G   23M  100% /
devtmpfs       devtmpfs  459M       0  459M    0% /dev
tmpfs          tmpfs     464M       0  464M    0% /dev/shm
tmpfs          tmpfs     464M     47M  417M   11% /run
tmpfs          tmpfs     5,0M    4,0K  5,0M    1% /run/lock
tmpfs          tmpfs     464M       0  464M    0% /sys/fs/cgroup
/dev/mmcblk0p1 vfat       44M     23M   21M   52% /boot
tmpfs          tmpfs      93M       0   93M    0% /run/user/1000
pi@RPi-11:~ $ cat /etc/fstab
PARTUUID=df45002e-01  /boot           vfat    defaults          0       2
PARTUUID=df45002e-02  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that
pi@RPi-11:~ $ ls -lh /media
total 4.0K
drwxr-xr-x 5 root root 4.0K Mar  8  2019 SSD240GB

And, yes, the original installation is old, but since I have “auto updates” activated, I hope this is not a problem; the server and NCP versions seem to be fine. I might upgrade shortly, though.

What were the exact steps you took to mount your new drive?
From the provided output, the drive is not mounted into the filesystem properly, which is why it is not being used.
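Your df output only lists /dev/root and the SD partitions, so /media/SSD240GB is just a plain directory on the root filesystem. You can confirm that with (directory path taken from your status page):

```shell
# Exits non-zero if the directory is not its own mount point, i.e. its
# contents actually live on the parent (SD card) filesystem:
mountpoint /media/SSD240GB || echo "not a mount point - data is on the SD card"

# List all block devices and where (if anywhere) they are mounted:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
```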

You can use nc-automount or mount manually using this howto

There is no space left, which is why you can no longer access the instance.
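For the manual route, a minimal sketch, assuming the SSD shows up as /dev/sda1 (device name, UUID, and filesystem type below are placeholders — check yours with lsblk -f):

```shell
# 1) Find the partition's UUID and filesystem type:
sudo blkid /dev/sda1

# 2) Add a matching line to /etc/fstab so the mount survives reboots
#    (UUID and fstype are placeholders):
#
#    UUID=xxxx-xxxx  /media/SSD240GB  ext4  defaults,noatime  0  2

# 3) Mount everything listed in fstab and verify:
sudo mount -a
df -hT /media/SSD240GB
```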

What were the exact steps you took to mount your new drive?

Unfortunately, I can’t remember :frowning:

From the provided output, the drive is not mounted into the filesystem properly, which is why it is not being used.

When I saw the output, I remembered having some strange issues when first trying to mount my SSD back in 2019, after updating to a (then) new version.

Later, I had a system crash when a raindrop killed the Raspberry Pi hardware running the NC server. After that, I simply moved the SD card and the SSD to another Raspberry Pi. Everything worked fine again, and I still hadn’t noticed that the SSD was not being accessed at all.

There is no space left, which is why you can no longer access the instance.

(1) I will try to regain access (indeed, there is currently no access from any client) by deleting some big files directly on the server.

(2) then I will try to properly mount the SSD using the link you provided.
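For step (1), I will try something along these lines (paths are the ones from my status page; the occ path assumes the standard NCP layout under /var/www/nextcloud):

```shell
# Show the largest subdirectories of the data dir, biggest last,
# to pick deletion candidates:
sudo du -h --max-depth=2 /media/SSD240GB/ncdata | sort -h | tail -n 15

# After deleting files directly on disk, let Nextcloud rescan so its
# file cache matches the filesystem again:
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```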

Then I will report within this thread again.


Done. Learned some further basics… :wink:

I have now successfully mounted my external SSD; it is visible in both df and lsblk. I created the sub-directory ncdata as a btrfs subvolume and will use NCP’s nc-datadir feature to transfer the data from the old (SD) to the new (SSD) location.

One last question: I’d like to move the two directories ncdatabase and ncp-backups as well. Do I do this the same way, or is there another way that avoids complications?

And another question: the now-unused ncdata directory is still filled with files and folders. As I want to free some space on my SD card, can I simply remove that ncdata folder from the SD?