I just moved from ownCloud to Nextcloud, using Amazon S3 as a back end. I've been watching the utilization, and it's a lot higher than I would have expected: for my relatively small number of files and amount of data (90 MB) there's a gigantic number of GET requests, over 14M in just the few days I've been transferring things over, so it's looking to be more expensive than I would have hoped. Even on the PUT side, it's approaching 500k for around 600 files and directories.
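To put those numbers in dollar terms, here is a rough back-of-the-envelope estimate. The per-1,000-request prices below are assumptions based on the published us-east-1 standard-tier rates; check the AWS S3 pricing page for your own region and tier before trusting the result.

```python
# Illustrative S3 request pricing (assumed us-east-1 standard tier;
# verify against the current AWS pricing page for your region).
GET_PRICE_PER_1K = 0.0004  # USD per 1,000 GET requests
PUT_PRICE_PER_1K = 0.005   # USD per 1,000 PUT requests

def request_cost(gets: int, puts: int) -> float:
    """Estimated request cost in USD for the given GET/PUT counts."""
    return gets / 1000 * GET_PRICE_PER_1K + puts / 1000 * PUT_PRICE_PER_1K

# The volumes mentioned above: ~14M GETs and ~500k PUTs.
cost = request_cost(14_000_000, 500_000)
print(f"~${cost:.2f}")  # → ~$8.10
```

Not ruinous for a few days of syncing 90 MB, but it scales with file count and sync frequency rather than data size, which is the surprising part.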
I would have assumed that Nextcloud keeps metadata (size, checksum, etc.) in the local database and would only hit S3 when it actually needs to download a file. Does anyone have experience with actual utilization, and is there room for optimization?
I think perhaps that is the issue… it is not corrected.
In the docs, it very clearly says that Nextcloud should be the only one touching the bucket. In that case, one should assume any update that makes it to S3 goes through the API, so an occasional check to make sure nothing has gone stale should be enough (e.g., an attributes cache). As it stands, the S3 module is unusable due to the cost of even trivial hosting, so I had to move back to EBS.
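For what it's worth, the attributes-cache idea can be sketched in a few lines. This is not how Nextcloud is implemented, just an illustration of the pattern: serve stat-style lookups from memory and only hit the backend (which in a real deployment would wrap an S3 `head_object` call) when an entry is missing or older than the TTL. The names `AttributeCache` and `fake_s3_head` are hypothetical.

```python
import time

class AttributeCache:
    """Tiny TTL cache for object metadata (size, etag, etc.).

    A sketch of the attributes-cache idea discussed above: lookups
    are answered from memory while fresh, and the backend is only
    asked again once an entry is missing or older than `ttl` seconds.
    `backend` is any callable key -> metadata dict.
    """

    def __init__(self, backend, ttl=300.0):
        self.backend = backend
        self.ttl = ttl
        self._entries = {}  # key -> (timestamp, metadata)

    def stat(self, key):
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh entry: no backend request
        meta = self.backend(key)   # stale or missing: one request
        self._entries[key] = (now, meta)
        return meta

# Stub backend that counts how often it is actually hit:
calls = []
def fake_s3_head(key):
    calls.append(key)
    return {"size": 123, "etag": "abc"}

cache = AttributeCache(fake_s3_head, ttl=300)
cache.stat("photos/cat.jpg")
cache.stat("photos/cat.jpg")
print(len(calls))  # → 1: the second lookup never touched the backend
```

With something like this in front of the bucket, repeated directory scans would cost one request per object per TTL window instead of one per scan.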
I'll move this over to be a bug report and see if I can help. Alas, my coding days ended when Perl was on top of the world, so perhaps I'm not the one to write it (though I'm happy to help test).
Any chance this issue is fixed in newer releases? I'm currently running Nextcloud 13, but I'm still seeing the same high API request counts with S3 mounted as external storage, same as dbchelne. I only have around 10 files, but GET requests are already around 1M and PUT requests at 300k.