[details]
Running Nextcloud 29.0.2 in a Docker container.
I have external S3-compatible storage configured for my user (i.e. not configured as global storage).
Today, my photos folder stopped showing any content. It showed content just fine yesterday, before I uploaded more content.
I uploaded 11991 files in 101 folders.
Before the upload, there were already 38512 files in 349 folders, which I could access just fine.
I ran occ files:scan after the upload and just re-ran it to be sure.
Starting scan for user 1 out of 1 (jcom)
+---------+-------+-----+---------+---------+--------+--------------+
| Folders | Files | New | Updated | Removed | Errors | Elapsed time |
+---------+-------+-----+---------+---------+--------+--------------+
| 450     | 50503 | 0   | 0       | 0       | 0      | 00:01:59     |
+---------+-------+-----+---------+---------+--------+--------------+
Yet I can’t see any of the files via Nextcloud.
Files on the S3 storage that are in a different branch are visible.
I.e. All Files > myextS3 > Music
shows files just fine, but
All Files > myextS3 > Photos
shows no files and no folders.
Not sure if it’s related to this issue, but the only errors I see in the log are
“No provider found for id files”[/details]
I’ll be testing whether updating to 29.0.3 helps.
Any other ideas what to test?
Well, the issue is a 60-second timeout being triggered as Nextcloud accesses the directory. Not sure if it’s an issue with Nextcloud’s code or just a performance issue with the S3 storage provider; I tried another S3 storage provider and the timeout was not triggered.
So the root cause is the long directory access time. I tried to work around it by raising the nginx timeouts, but was not successful.
I’m running Nextcloud in a container (nextcloud:29.0.2-apache) behind jwilder/nginx-proxy:1.5.1-alpine, and I was not successful in increasing the timeouts, though I tried setting longer timeout values in the nginx conf. For some reason, those settings didn’t seem to affect the timeout, but that’s something to discuss on another forum.
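For what it’s worth, this is the kind of per-vhost override I would try with jwilder/nginx-proxy (a sketch: `cloud.example.com` is a placeholder for your actual domain, and it assumes the proxy container has a `vhost.d` directory mounted so per-host config files are picked up):

```nginx
# /etc/nginx/vhost.d/cloud.example.com
# Raise the proxy timeouts for this vhost beyond the 60s default,
# so a slow S3 directory listing isn't cut off mid-request.
proxy_connect_timeout 300s;
proxy_send_timeout    300s;
proxy_read_timeout    300s;
```

Note that the 60-second cut-off could also come from inside the Nextcloud container itself (e.g. Apache’s request timeout or PHP’s max_execution_time), so the proxy may not be the layer that trips first.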
But as a summary: since the other S3 provider worked fine, I didn’t dig too deep into debugging, but the magic number of files in a folder seemed to be about 500. I.e. below 500 files, results came in fast (less than 10 seconds); when a folder had some 550 or so images, the 60-second timeout was triggered.
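To narrow down where the time goes, a small harness like this can time the listing call against each provider (a sketch: the lambda at the bottom is a hypothetical stand-in for whatever actually lists the folder, e.g. a boto3 `list_objects_v2` call or a WebDAV PROPFIND against the Nextcloud server):

```python
import time

def time_listing(list_fn, timeout=60.0):
    """Time a directory-listing call and report whether it would
    exceed a proxy timeout of `timeout` seconds."""
    start = time.monotonic()
    entries = list_fn()
    elapsed = time.monotonic() - start
    status = "WOULD TIME OUT" if elapsed > timeout else "ok"
    print(f"{len(entries)} entries in {elapsed:.2f}s ({status})")
    return elapsed, entries

# Stand-in listing of 550 fake image names; replace with the real
# S3/WebDAV call for the provider you want to measure.
elapsed, entries = time_listing(lambda: [f"IMG_{i:04d}.jpg" for i in range(550)])
```

Running the same call against both providers on the same folder would show whether the slowness is in the storage backend or in Nextcloud’s handling of it.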
But then I decided not to waste time on that and just moved my stuff from IBM S3 to Backblaze → problem solved.
My environment is the Docker image nextcloud:29.0.4-apache, but at the time of that issue it was maybe somewhat older (29.0.2, perhaps). If you are interested in more information, I can do some digging/testing for you. Please change the recipient to (or add) nc-community@ali.patanen.com; this Gmail doesn’t seem to allow me to do it.