Problem with external S3 storage - Folder empty

[details]
Running Nextcloud 29.0.2 in docker container
I have external S3-compatible storage configured for my user (i.e. not configured as global storage).
Today, my photos folder stopped showing any content. It showed content just fine yesterday, before I uploaded more content.
I uploaded 11991 files in 101 folders.

Before the upload, there were already 38512 files in 349 folders, which I could access just fine.

I ran occ files:scan after the upload and just re-ran it to be sure.
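
For reference, this is roughly the command I ran (a sketch; the container name and the --all flag are assumptions, adjust to your own setup):

# run occ inside the app container as the web server user
docker exec -u www-data nextcloud php occ files:scan --all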

Starting scan for user 1 out of 1 (jcom)
+---------+-------+-----+---------+---------+--------+--------------+
| Folders | Files | New | Updated | Removed | Errors | Elapsed time |
+---------+-------+-----+---------+---------+--------+--------------+
| 450     | 50503 | 0   | 0       | 0       | 0      | 00:01:59     |
+---------+-------+-----+---------+---------+--------+--------------+

Yet I can’t see any of the files via Nextcloud.
Files on the S3 storage that are in a different branch are visible.
i.e. All Files > myextS3 > Music
shows files just fine, but
All Files > myextS3 > Photos
shows no files and no folders.

Not sure if it’s related to this issue, but the only errors I see in the log are
“No provider found for id files”[/details]

I’ll be testing whether updating to 29.0.3 will help.
Any other ideas on what to test?

Well, the issue is a 60-second timeout being triggered while Nextcloud accesses the directory. I’m not sure whether it’s a problem with Nextcloud’s code or just a performance issue with the S3 storage provider. I tried another S3 storage provider and the timeout was not triggered.
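
If someone wants to rule Nextcloud out, one way is to time a plain listing of the same prefix directly against the bucket. A rough sketch with the AWS CLI (the bucket name, prefix and endpoint are placeholders, not my real values):

# list one "folder" (prefix) straight from the bucket and count the entries
time aws s3 ls s3://my-bucket/Photos/some-album/ \
  --endpoint-url https://s3.example-provider.com | wc -l

If that listing alone takes tens of seconds, the bottleneck is the provider rather than Nextcloud.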

So the root cause is the long directory access time. I tried to work around it by raising the nginx timeouts, but was not successful.

I’m running Nextcloud in a container (nextcloud:29.0.2-apache) behind jwilder/nginx-proxy:1.5.1-alpine, and I was not successful in increasing the timeouts, though I tried setting

fastcgi_read_timeout 600s;
fastcgi_send_timeout 600s;
fastcgi_connect_timeout 600s;
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
send_timeout 600s;

in the nginx conf. For some reason, those settings didn’t seem to affect the timeout, but that is something to discuss in another forum.
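
For what it’s worth, two things may explain why those settings had no effect (a sketch based on how jwilder/nginx-proxy is documented to work, I haven’t verified it in my setup): the fastcgi_* directives most likely do nothing, since nginx-proxy talks plain HTTP to the Apache container via proxy_pass, and per-host overrides are normally picked up from /etc/nginx/vhost.d/ rather than from an arbitrary conf file.

# write the proxy timeouts into a per-host override; the file name must
# match the VIRTUAL_HOST of the Nextcloud container
mkdir -p ./vhost.d
cat > ./vhost.d/mycloud.patanen.com <<'EOF'
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
send_timeout 600s;
EOF

# mount the directory into the proxy container (other options omitted)
docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v "$PWD/vhost.d":/etc/nginx/vhost.d:ro \
  jwilder/nginx-proxy:1.5.1-alpine

# verify the directives actually ended up in the generated config
docker exec nginx-proxy nginx -T | grep -i timeout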

As a summary: since the other S3 provider worked fine, I didn’t go too deep into debugging, but the magic number of files in a folder seemed to be about 500. Below 500, results came in fast (less than 10 seconds); when a folder had some 550 or so images, the 60-second timeout was triggered.

It does sound like a provider matter. 60s is a long time to retrieve a listing of only ~500 objects.

What were the log entries?

Sorry for late reply.

After some digging around, I found the timeout was not Nextcloud’s but nginx’s. The log entries were nothing more than:

nginx.1 | 2024/06/26 14:07:22 [error] 55#55: *5 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 84.248.64.244, server: mycloud.patanen.com, request: "PROPFIND /remote.php/dav/files/jani@patanen.com/IBM-Cloud/Photos/ HTTP/2.0", upstream: "http://192.168.0.5:80/remote.php/dav/files/jani@patanen.com/IBM-Cloud/Photos/", host: "mycloud.patanen.com"

I did try playing around with nginx settings

fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;

But then I decided not to waste time on that and just moved my data from IBM S3 to Backblaze → problem solved.
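
For anyone doing the same move, the external storage can also be set up from the command line instead of the web UI. A rough sketch with occ (the mount point, hostname and -c values are placeholders; my original mount was a personal one configured in the UI, whereas files_external:create like this adds an admin-defined mount):

# create an S3-compatible external storage mount using access-key auth
docker exec -u www-data nextcloud php occ files_external:create \
  /myextS3 amazons3 amazons3::accesskey \
  -c bucket=my-bucket -c hostname=s3.us-west-004.backblazeb2.com \
  -c use_ssl=true -c use_path_style=true \
  -c key=MY_KEY_ID -c secret=MY_APP_KEY

# check the result
docker exec -u www-data nextcloud php occ files_external:list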

My environment is the Docker image nextcloud:29.0.4-apache, but at the time of that issue it was probably slightly older (29.0.2, perhaps). If you are interested in more information, I can do some digging/testing for you. Please change the recipient to (or add) nc-community@ali.patanen.com; this Gmail account doesn’t seem to allow me to do it.
