Increasing the database size when scanning

Virtual machine: 16 cores, 32 GB memory, SSD / (~2000 MB/s), RAID 6 /mnt/ncdata (~700 MB/s)
Nextcloud version: 19.0.2
Operating system and version: Ubuntu 20.04
Apache or nginx version: Apache 2.4.25
PHP version: PHP-FPM 7.4
Database: PostgreSQL 12 (DB size 2 GB)

  • Redis Memcache (latest stable version from PECL)
  • APCu local cache (latest stable version from PECL)
  • PHP-igbinary (latest stable version from PECL)
  • PHP-smbclient (latest stable version from PECL)
    The installation was performed using an official script

The issue you are facing:

Every time I scan the files with the command occ files:scan --all, the database grows in size, although the files do not change! How can I clear the database of old data and get rid of this "swelling"?
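If the growth is mostly dead rows left behind by updates and deletes, PostgreSQL can reclaim the space with VACUUM. This is a generic PostgreSQL maintenance sketch, not a Nextcloud-specific fix; replace `nextcloud` with your actual database name:

```shell
# Reclaim dead-row space in the Nextcloud database.
# Plain VACUUM marks dead rows as reusable without locking tables;
# VACUUM FULL actually shrinks the files on disk, but it takes an
# exclusive lock on each table, so run it in a maintenance window.
sudo -u postgres psql -d nextcloud -c "VACUUM (VERBOSE, ANALYZE);"
sudo -u postgres psql -d nextcloud -c "VACUUM FULL;"
```

Note that a database that stays at a stable size after VACUUM is not actually leaking data; only growth that survives a VACUUM FULL points at genuinely new rows.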

Although my installation is different from yours, I agree that running occ files:scan --all inflates the size of the database even when the number of files in my installation stays the same. From the issues I have faced so far, I have determined that running that occ command floods the database table oc_file_locks with entries and triggers a flurry of PHP errors.
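A workaround often discussed on this forum (not an official fix) is to clear the stale entries from the file-locks table while the instance is in maintenance mode. The table name below assumes the default 'oc_' prefix, and the user/database names are the placeholder values from the redacted config.php further down; substitute your own:

```shell
# Put Nextcloud into maintenance mode so no new locks are created.
sudo -u www-data php occ maintenance:mode --on

# Delete all entries from the file-locks table (MySQL/MariaDB syntax;
# 'oc_' is the default dbtableprefix from config.php).
mysql -u meee -p honeysuckle_squirt -e "DELETE FROM oc_file_locks;"

# Turn maintenance mode back off.
sudo -u www-data php occ maintenance:mode --off
```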

It would be great if the very knowledgeable people on this forum shared their experiences so that we can isolate the potential causes of the issue you posted. Here are the details of my installation:

The Nextcloud package is installed, using an archive file, on a shared hosting server with some folders connected to external storage.
Nextcloud version: 18.0.4, 18.0.6 => 18.0.7
Operating system and version: Ubuntu 18.04
Apache or nginx version: Apache 2.4.29
PHP version: 7.4.3

The issue you are facing: refer to the first paragraph of my post in this topic.

The output of your Nextcloud log in Admin > Logging :


Is this the first time you’ve seen this error? N

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

$CONFIG = array (
  'instanceid' => 'ocnssr8i2s5s',
  'passwordsalt' => 'unsalted',
  'secret' => 'notso',
  'trusted_domains' =>
  array (
    0 => 'domain.tld',
  ),
  'datadirectory' => '/home/ssh-user/domain.tld/nextcloud/data',
  'dbtype' => 'mysql',
  'version' => '',
  'overwrite.cli.url' => 'https://domain.tld/nextcloud',
  'dbname' => 'honeysuckle_squirt',
  'dbhost' => '',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'meee',
  'dbpassword' => 'pass12345',
  'installed' => true,
  'maintenance' => false,
);

The output of your Apache/nginx/system log in /var/log/____ :


If you don’t interfere with the files in the main data storage directly, there is no need to run the file scan command. If you need to access the files with other processes, do that via external storage.

The original poster uses Redis, so the locking table shouldn’t be the issue. Can you specify which table increases in size? It would be interesting to know which entries were added…
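One way to answer that question is to compare table sizes before and after a scan. On the original poster's PostgreSQL 12 setup, that could look like the following (a diagnostic sketch; `nextcloud` is a placeholder database name):

```shell
# List the ten largest tables in the Nextcloud database (PostgreSQL).
# Run once before and once after `occ files:scan --all` and compare.
sudo -u postgres psql -d nextcloud -c "
  SELECT relname AS table_name,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```

Whichever table jumps between the two runs is the one worth inspecting row by row.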

I have already reviewed a section of the Nextcloud 18 admin manual, but I still do not understand when an administrator needs to run occ files:scan --all. To clarify: some data folders within my installation are connected to external storage, and my users will eventually access their data using a mobile app. When would be the best time for me to run that occ command?
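For what it's worth, occ also accepts a single user or path instead of --all, which keeps the scan (and any database churn) limited to the external-storage folders that actually changed. The user name and path below are illustrative:

```shell
# Scan one user's files only.
sudo -u www-data php occ files:scan alice

# Or scan just one mounted folder below a user's files.
sudo -u www-data php occ files:scan --path="/alice/files/ExternalShare"
```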

My installation uses OPcache, and the database table oc_file_locks becomes (even more) flooded with entries every time occ files:scan --all is run. Is this to be expected?

If you don’t manually change the files in the data folder and only access them through the Nextcloud web interface, clients or WebDAV, there is no need to run this command. External storage is different: you can normally specify the default behavior of external storage (e.g. whether it is scanned upon each access).
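That default behavior can be set per mount from the command line. Assuming the files_external app is enabled, something like the following should work (the mount ID 1 is illustrative; list your own mounts first):

```shell
# List configured external storage mounts and their IDs.
sudo -u www-data php occ files_external:list

# Set mount 1 to detect changes once per direct access (1)
# or never (0), instead of relying on manual files:scan runs.
sudo -u www-data php occ files_external:option 1 filesystem_check_changes 1
```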

For the file locking cache, you need to install Redis. This reduces the load on the database enormously and therefore speeds up file transfers.
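In config.php that typically looks like the fragment below. The host and port are assumptions for a default local Redis install; adjust them (or use a Unix socket) to match your setup:

```php
'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => array(
  'host' => 'localhost',   // assumption: Redis on the same machine
  'port' => 6379,          // assumption: default Redis port
),
```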