tflidd
February 8, 2021, 9:09am
Not sure about your current database, but in principle your setup looks pretty decent. If you want to back up the tables, you can compress the dumps directly, which uses considerably less disk space. However, the whole process might take a long time.
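As a sketch of what "compress the dumps directly" means in practice (the database name, user, and file name below are placeholders for your setup):

```shell
# Pipe mysqldump straight into gzip so the uncompressed dump never touches disk.
# --single-transaction gives a consistent InnoDB snapshot without locking tables.
mysqldump --single-transaction -u nextcloud -p nextcloud | gzip > nextcloud-db.sql.gz

# Restoring works the same way, decompressing on the fly:
# gunzip < nextcloud-db.sql.gz | mysql -u nextcloud -p nextcloud
```

The pipe is the key point: only the compressed stream is ever written, so a dump of a large oc_filecache table needs a fraction of the space of a plain .sql file.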
There are a few more topics about problems with the filecache table and external storage:
Nextcloud version (eg, 18.0.2): 18.0.8
Operating system and version (eg, Ubuntu 20.04): Debian 9
Apache or nginx version (eg, Apache 2.4.25): Apache/2.4.25
PHP version (eg, 7.1): 7.3 FPM
The issue you are facing:
In the last 3 days, the oc_filecache table has grown uncontrollably.
My Nextcloud instance has always (4 years) been running on a small HDD, and the /var/lib/mysql mount never outgrew a gigabyte. 3 days ago I received a notification from netdata telling me that my disk was > 95% full, I checked …
I’m dealing with a situation where we have removed a very large number of files from a user’s Nextcloud space and we now have to clean up the oc_filecache table to reflect this change.
There are around 400 million rows in total to be removed from the oc_filecache table. I’m running an ‘occ files:scan username’ process to do the removal. While the files:scan is running, I’m running a process to monitor the number of oc_filecache rows, and I’m running innotop in T mode to watch tasks to monitor…
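A simple way to watch the row count shrink while `occ files:scan` runs (credentials and database name here are placeholders; note that `COUNT(*)` on a 400-million-row InnoDB table is itself a slow full scan, so a long interval is advisable):

```shell
# Print the oc_filecache row count once a minute while the scan runs.
while true; do
  mysql -u nextcloud -p nextcloud \
    -e 'SELECT NOW(), COUNT(*) FROM oc_filecache;'
  sleep 60
done
```

For a cheap (approximate) alternative, `SHOW TABLE STATUS LIKE 'oc_filecache';` reports an estimated row count without scanning the table.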
opened 01:33AM - 07 Sep 17 UTC
- create and link to external smb storage on linux samba server
- samba server … should have "follow symlinks" enabled (this is samba default)
- create a symlink in the target smb storage location that loops back to itself
- nextcloud file scanner will follow this link indefinitely
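The reproduction steps above can be demonstrated locally without samba at all; the essence is a symlink that resolves back to its own parent directory (paths below are illustrative):

```shell
# Stand-in for the smb share root.
share=$(mktemp -d)
mkdir "$share/dir"

# A relative symlink to "." resolves back to the directory that contains it.
ln -s . "$share/dir/loop"

# Any traversal that follows symlinks now never bottoms out:
# dir/loop, dir/loop/loop, dir/loop/loop/loop, ... all resolve to dir,
# so a naive scanner keeps inserting "new" paths forever.
ls "$share/dir/loop/loop/loop"
```

Each extra `/loop` component looks like a deeper directory to a scanner that does not canonicalize paths, which is how the filecache ends up with hundreds of millions of rows for what is physically one directory.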
I suspect we've been suffering from this problem for a long time now, but I only recently clued in to the problem after our upgrade to NC-12.0.2 when investigating why our oc_filecache table had grown so large (500 million rows!).
Once I noticed the filecache entries corresponding to the smb file path loop, I was able to remove the offending directory structure, and now I'm running 'occ files:scan' processes to clean up the mess. So far oc_filecache size has been reduced from ~500 million to 420 million rows, and I expect this to drop much more.
Please let me know if more information is needed.