S3 Utilization Optimization


I just moved from ownCloud to Nextcloud and am using Amazon S3 as a back end. I’ve been watching the utilization, and it’s a lot higher than I would have expected: for my relatively small amount of data (90MB) there’s a gigantic number of GET requests, over 14M in just the few days I’ve been transferring things over, so it’s looking to be more expensive than I would have hoped. Even on the PUT side, it’s approaching 500k for around 600 files and directories.

I would have assumed that Nextcloud keeps metadata (size, checksum, etc.) in the local database and would only hit S3 when it actually needs to download a file. Does anyone have experience with actual utilization, and whether there’s room for optimization?



Please open an issue on the bug tracker (https://github.com/nextcloud/server/issues); performance improvements can be handled much better there.

This was a problem in ownCloud for a very long time. I hadn’t even realized it was corrected.

I think perhaps that is the issue…it is not corrected.

In the docs, it very clearly says that Nextcloud should be the only thing touching the bucket. In that case, one should assume any update that makes it to S3 goes through the API/system, so perhaps only an occasional check to make sure nothing has gone stale is in order (e.g., an attributes cache). The S3 module is unusable due to the $$$ associated with trivial hosting, so I had to move back to EBS.
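To illustrate the attributes-cache idea above: a minimal sketch (not Nextcloud’s actual code, which is PHP) of a TTL-based metadata cache. All names here, including the `fake_s3_stat` stand-in for an S3 HEAD-object request, are hypothetical; the point is that since only Nextcloud touches the bucket, metadata lookups can be served locally and only occasionally re-validated against S3.

```python
import time

class StatCache:
    """Cache file metadata (size, etag, mtime) locally so repeated
    lookups do not each trigger an S3 request.  The TTL controls how
    often an entry is re-validated against the backend."""

    def __init__(self, backend_stat, ttl=300):
        self.backend_stat = backend_stat   # real S3 metadata request
        self.ttl = ttl                     # seconds before re-validation
        self._cache = {}                   # path -> (timestamp, metadata)
        self.backend_calls = 0             # counter, for demonstration

    def stat(self, path):
        now = time.time()
        entry = self._cache.get(path)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]                # served locally, no S3 request
        self.backend_calls += 1
        meta = self.backend_stat(path)     # one real request to S3
        self._cache[path] = (now, meta)
        return meta

    def invalidate(self, path):
        # Call after a local write so the next stat re-fetches from S3.
        self._cache.pop(path, None)

# Hypothetical stand-in for an S3 HEAD-object call.
def fake_s3_stat(path):
    return {"size": 90 * 1024, "etag": "abc123"}

cache = StatCache(fake_s3_stat, ttl=300)
for _ in range(1000):
    cache.stat("files/photo.jpg")
print(cache.backend_calls)  # prints 1: only the first lookup hit S3
```

A thousand stat calls cost one backend request instead of a thousand, which is the kind of reduction that would matter at the request volumes reported in this thread.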

I’ll move this to be a bug and see if I can help. Alas, my coding days ended when Perl was on top of the world, so perhaps I’m not the one to write it (though I’m happy to help test).


Thanks for reporting this issue.
ref: https://github.com/nextcloud/server/issues/3673

Hi guys,

Any chance this issue is fixed in newer releases? I’m currently running Nextcloud 13, but I’m still seeing the same high API request counts with S3 mounted as external storage, same as dbchelne. I only have a few files, around 10+, but GET requests are already around 1M and PUT requests at 300k.