When I run files:scan, it takes a very long time, and I'm wondering whether this comes from a lack of performance in Nextcloud or whether I can optimize my server configuration.
Example:
Scanning a user folder (all files were put there via FTP, so they were all unknown to Nextcloud).
It scans one file every 131 ms, i.e. about 7.6 files per second.
What takes the time? What does Nextcloud do during a scan? Does it analyse the content of each file, or just put its path in the database?
Should I disable the file locking system (if that is what takes the time)?
There are lots of database requests when scanning files. Check both Redis and MySQL CPU and disk usage. MySQL could be a bottleneck, especially on spinning disks compared to an SSD. What filesystem are you using?
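As a rough illustration of why the database side can dominate on a slow disk, here is a toy sketch (plain SQLite, not Nextcloud's actual schema or code): committing one transaction per file forces the database to flush to disk for every single file, while batching the same inserts into one transaction does not.

```python
import os
import sqlite3
import tempfile
import time

def scan_into_db(paths, batch):
    """Insert fake file-cache rows: one commit per file vs one per batch."""
    db = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "cache.db"))
    db.execute("CREATE TABLE filecache (path TEXT, mtime INT, size INT)")
    start = time.perf_counter()
    if batch:
        with db:  # a single transaction covering all rows
            db.executemany("INSERT INTO filecache VALUES (?, 0, 0)",
                           [(p,) for p in paths])
    else:
        for p in paths:
            with db:  # commit (and flush) after every single file
                db.execute("INSERT INTO filecache VALUES (?, 0, 0)", (p,))
    elapsed = time.perf_counter() - start
    rows = db.execute("SELECT COUNT(*) FROM filecache").fetchone()[0]
    return rows, elapsed

paths = [f"/alice/files/photo_{i}.jpg" for i in range(500)]
rows_one, t_one = scan_into_db(paths, batch=False)  # per-file commits
rows_all, t_all = scan_into_db(paths, batch=True)   # single transaction
print(rows_one, rows_all, t_one > t_all)
```

The gap between the two timings grows dramatically on a spinning disk, because each commit can cost a physical disk sync; that is the kind of per-file round trip a scan generates against MySQL.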
SSD means Solid State Disk. It's flash memory, compared to a normal HDD, which has rotating platters and moving heads. SSDs are tens or hundreds of times faster than HDDs for database use.
EXT4 is the filesystem on the disk that stores your files. There are many guides online that explain how to tune performance. One simple tip is to mount your disk with the noatime,nodiratime options.
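For illustration, an /etc/fstab entry applying those options could look like this (the device and mount point are placeholders, not taken from this thread):

```
# /etc/fstab -- hypothetical entry; replace device and mount point with yours.
# noatime/nodiratime stop the kernel from writing an access-time update on
# every file and directory read, which saves write IO on a busy HDD.
/dev/sdb1  /var/nextcloud-data  ext4  defaults,noatime,nodiratime  0  2
```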
I think if you look at iotop -o you’ll see that your disk is fully saturated. Look at the IO % column.
The filesystem is ext4, and the disk is an HDD, not an SSD.
Thank you, I'll look into disk optimisation, but I doubt it is the real reason for the slowness. A full copy (with cp) of these files takes only a few minutes, not 16 hours.
Does anybody know what the scan does with each file? Generating an etag and inserting the file path should not take this long. There must be something else.
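For what it's worth, a scanner of this kind typically does not need to read file contents at all: it can stat each file and derive an etag from metadata such as the path, size and modification time. The sketch below shows that general technique; it is an assumption about the approach, not Nextcloud's actual code.

```python
import hashlib
import os
import tempfile

def metadata_etag(path):
    """Build an etag from stat() metadata only -- no file contents are read."""
    st = os.stat(path)
    raw = f"{path}:{st.st_size}:{st.st_mtime_ns}"
    return hashlib.md5(raw.encode()).hexdigest()

# Demo: the etag changes when the file's size (and mtime) changes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
tag1 = metadata_etag(f.name)
with open(f.name, "ab") as g:
    g.write(b" world")
tag2 = metadata_etag(f.name)
print(tag1 != tag2)
os.unlink(f.name)
```

If the scan really only stats files and writes rows, the cost per file is dominated by the database round trips and disk seeks, not by hashing, which matches the suspicion that something other than etag generation is eating the time.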