Moving files is soooo slow

Nextcloud version: Nextcloud Hub 8 (29.0.0)
Operating system and version (eg, Ubuntu 20.04): Linux 6.1.74-production+truenas x86_64
Apache or nginx version (eg, Apache 2.4.25): nginx 1.25.4
PHP version (eg, 7.4): 8.2.19

The issue you are facing:
I recently started using Nextcloud and am organizing my files in folders.
I had a folder with tens of thousands of photos.
Yesterday I wanted to move about 10000 of them to a different folder using the GUI.
It took 4 hours!
I had to keep the computer active, because it’s not a background task, and on the charger because it drained the battery heavily…

I’m used to moving files using ssh, and there it’s instant. But I learned on the forum that I should not be moving files this way because it gets the files out of sync with the database.

I would expect moving files to be just a metadata update (“you no longer belong to folder x but to folder y”), but judging by the battery drain and the slow speed, it looks as if every file needs to be downloaded locally and uploaded again to the new folder?

Steps to replicate it:

  1. Go to a folder using the GUI
  2. Select 10000 files; they are all between 4 and 10 MB
  3. Click the move or copy button, select a new folder and click the move button
  4. Wait in pain

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

root@nextcloud-78bbb5844-l7j9z:/var/www/html/config# cat config.php
<?php
$CONFIG = array (
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => 
  array (
    'host' => 'nextcloud-redis',
    'password' => 'redacted',
    'port' => 6379,
  ),
  'overwritehost' => '10.25.9.4:9001',
  'overwriteprotocol' => 'https',
  'trusted_proxies' => 
  array (
    0 => '172.17.0.0/16',
    1 => '172.16.0.0/16',
    2 => '127.0.0.1',
  ),
  'upgrade.disable-web' => true,
  'passwordsalt' => 'redacted',
  'secret' => 'redacted',
  'trusted_domains' => 
  array (
    0 => 'localhost',
    1 => '10.25.9.4',
    2 => '127.0.0.1',
    3 => 'localhost',
    4 => 'nextcloud-init-sync.lock',
    5 => 'nextcloud',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'pgsql',
  'version' => '29.0.0.19',
  'overwrite.cli.url' => 'https://localhost',
  'dbname' => 'nextcloud',
  'dbhost' => 'nextcloud-postgres:5432',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'oc_nikotine',
  'dbpassword' => 'redacted',
  'installed' => true,
  'instanceid' => 'ocslvu7qisj9',
  'twofactor_enforced' => 'false',
  'twofactor_enforced_groups' => 
  array (
  ),
  'twofactor_enforced_excluded_groups' => 
  array (
  ),
);

And because I think you will ask for this as well:

root@nextcloud-78bbb5844-l7j9z:/var/www/html/config# php -i |fgrep memory
memory_limit => 1024M => 1024M
Collecting memory statistics => No
opcache.memory_consumption => 1024 => 1024
opcache.preferred_memory_model => no value => no value
opcache.protect_memory => Off => Off

I am running Nextcloud on TrueNAS SCALE. This is the storage configuration:

[screenshot of the TrueNAS storage configuration]
The ixVolumes are on an SSD.
The User Data Storage is a ZFS pool on spinning disks (4 disks, two 2-wide mirror vdevs), so that is certainly not the bottleneck…

Just wondering: why is moving files manually (i.e. via ssh) discouraged?

If you run occ files:scan --all after moving the files, that shouldn’t do any harm, should it?
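
For reference, such a rescan could look roughly like this; a minimal sketch that assumes the layout of the official container image (occ in /var/www/html, web server user www-data), with made-up user and folder names:

su -s /bin/sh www-data -c "php /var/www/html/occ files:scan --all"
# or limit the rescan to the path that was changed (user/folder are hypothetical)
su -s /bin/sh www-data -c "php /var/www/html/occ files:scan --path='alice/files/Photos'"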

Probably I could, yes.
So there’s no other way to speed up moving files using the GUI?

Beefing up the specs of your VM will speed it up a little bit, but it will never be fast.

If I’m not totally mistaken, for NC moving a file amounts to downloading it, uploading it to the new location and then deleting it from the old location.

You can verify this by checking the network traffic on your laptop while moving a couple of files.

That’s what I suspected, and it doesn’t make sense to me.

For object storage (e.g. S3), a move is indeed only a metadata update.
For local storage, the files in your home directory are also moved on disk.

We use WebDAV for our clients. Each file you want to move results in one XHR MOVE request to the backend. I assume that’s the reason for the slowness. I’m not aware that we download and re-upload the file.
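
To illustrate, each move in the web UI boils down to one WebDAV MOVE request per file, roughly like the sketch below (host, user, credentials and file names are placeholders):

# one MOVE request per file; the body is empty, the Destination header tells the server where the file should go
curl -u alice:app-password -X MOVE \
  -H "Destination: https://cloud.example.com/remote.php/dav/files/alice/Archive/IMG_0001.jpg" \
  "https://cloud.example.com/remote.php/dav/files/alice/Photos/IMG_0001.jpg"

The request carries no file contents, so the cost per file is one HTTP round trip plus the server-side bookkeeping (database update, locking).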

As suggested by Simon, you can validate that by using the browser’s network inspector. If the files are really downloaded, that’s a bug to report on GitHub.