Nextcloud taking 6 hours to sync 600,000 files over 900 Mbps

If I use the Google API just to get the index of a few tens of thousands of files in one folder, the request takes a couple of minutes. I'm not sure whether they throttle the API, or whether their client app does more work hidden in the background.

In the past, you could speed up a lot of things by optimizing the database cache sizes (there are tuning tools for this). However, 600,000 files in 6 hours works out to roughly 28 files/s; that is not too bad, but it would be interesting to know where the bottleneck is.
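As a quick sanity check on the throughput, using only the numbers from the thread title:

```python
# Rough throughput estimate: 600,000 files synced in 6 hours (from the title).
files = 600_000
hours = 6
seconds = hours * 3600        # 21,600 s
rate = files / seconds        # average files per second
print(round(rate, 1))         # → 27.8
```

At under 30 files per second, per-file overhead (one PROPFIND/PUT round trip and one database transaction per file) is a more likely bottleneck than the 900 Mbps link itself.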

If there is a business case behind it, an enterprise subscription, or your own developers working on the sources, can help improve the client software.

Is sync actually a requirement in your use case? Or are you only using it because Microsoft deprecated their built-in WebDAV client?

The issue here is the scale: hundreds or thousands of users. If that is the case, this is already well outside what community support can advise on.

Google offers interoperability; let's say that everything worked fine until the 3.10 client arrived. On the other hand, renaming even a simple folder with content in it is painfully slow.

For now I will stay on GCP with the primary storage in a bucket. For the more critical data, I have added a fixed, scalable disk formatted with XFS and published it via External Storage; this makes renaming folders much faster.
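For anyone wanting to try a similar setup, here is a sketch of publishing a locally mounted XFS disk through the External Storage app with `occ`. The paths `/mnt/fastdisk` and the mount name `/fast` are example values, not from the original post:

```shell
# Sketch: expose a locally mounted XFS disk as a Nextcloud external storage.
# /fast (Nextcloud mount point) and /mnt/fastdisk (local path) are examples.
sudo -u www-data php occ app:enable files_external
sudo -u www-data php occ files_external:create /fast local null::null \
    -c datadir=/mnt/fastdisk
sudo -u www-data php occ files_external:list   # verify the new mount
```

With a local POSIX filesystem backend, a folder rename is a single metadata operation instead of the copy-and-delete that object-storage buckets require, which is why renames get much faster.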

As for the users, for now I will stop using the Nextcloud client and unfortunately have to use the RaiDrive client; I have not found anything better.