[Fixed] Unable to List Files in an AWS S3 External Storage Folder


Nextcloud version (eg, 18.0.2): 19.0.2
Operating system and version (eg, Ubuntu 20.04): Ubuntu 18.04 (But in a Docker container)
Apache or nginx version (eg, Apache 2.4.25): nginx/1.19.2 (Docker), nginx/1.14.0 (Host)
PHP version (eg, 7.1): PHP 7.4.9

The issue you are facing:

I found this issue after I upgraded my Nextcloud instance from v18.0.2 to v19.0.2.

I mounted an AWS S3 bucket at /s3 as external storage, and there is a folder /s3/photos containing about 200 subdirectories. Since the upgrade, I can’t access this folder any more. Every time I try to list its subdirectories via WebDAV (/remote.php/dav), I get a 504 Gateway Timeout error.

This issue doesn’t occur on other folders such as /s3/another/folder, and I can still access the subfolders of /s3/photos directly by path (e.g. /s3/photos/2019).

I tried to change the nginx configuration by adding:

fastcgi_buffers 512 64k;
client_max_body_size 10G;

But it didn’t work.
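For a 504 the upstream (PHP-FPM) is timing out, so the FastCGI read timeout usually matters more than buffer sizes. A sketch of the relevant nginx directives; the values and the `location` match are illustrative, not a recommendation:

```nginx
# Inside the server block that proxies PHP to PHP-FPM.
location ~ \.php$ {
    # How long nginx waits for a response from PHP before returning 504.
    fastcgi_read_timeout 600s;
    fastcgi_send_timeout 600s;

    # Buffer sizes only affect how the response is held, not the timeout.
    fastcgi_buffers 64 64k;

    client_max_body_size 10G;
}
```

Raising the timeout only hides a slow PROPFIND, though; it does not make the listing itself faster.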

Is this the first time you’ve seen this error? (Y/N): Y

Steps to replicate it:

  1. Install Docker via this docker-compose.yml
  2. Mount an S3 external storage
  3. Create about 200 directories (or files) in it.

The output of your Nextcloud log in Admin > Logging:

Couldn't find related records.

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'instanceid' => 'ID',
  'passwordsalt' => 'SALT',
  'secret' => 'SECRET',
  'trusted_domains' => 
  array (
    0 => 'localhost:8001',
    1 => 'example.com',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '',
  'overwritehost' => 'example.com',
  'overwriteprotocol' => 'https',
  'overwrite.cli.url' => 'https://example.com',
  'dbname' => 'nextcloud',
  'dbhost' => 'db',
  'dbport' => '',
  'filelocking.enabled' => false,
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => 'nextcloud',
  'installed' => true,
  'maintenance' => false,
  'loglevel' => 2,
);
The output of your Apache/nginx/system log in /var/log/____:

- - [06/Sep/2020:23:34:59 +0800] "PROPFIND /remote.php/dav/files/admin/s3/photos HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36"
2020/09/06 23:34:59 [error] 11251#11251: *7645 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxx.xx.xx.xx, server: example.com, request: "PROPFIND /remote.php/dav/files/admin/s3/photos HTTP/1.1", upstream: "", host: "example.com"

I am having a similar issue after upgrading from 18.0.4 to 19. I am unable to open the mounted S3 bucket; it says “Directory unavailable”. I tried re-adding it, but it does the same thing. I also see this under Logging:

Error PHP Allowed memory size of 536870912 bytes exhausted (tried to allocate 9437184 bytes) at /opt/nextcloud/lib/private/Files/View.php#1468

Maybe more memory would help?
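The error shows a 512 MB limit (536870912 bytes), so if memory really is the bottleneck, raising PHP’s `memory_limit` is worth trying. A sketch, assuming you control the php.ini that PHP-FPM loads (with the official Nextcloud Docker image, the `PHP_MEMORY_LIMIT` environment variable serves the same purpose):

```ini
; php.ini (or a conf.d override).
; 1G is an illustrative value, not a recommendation - pick something
; the host can actually spare.
memory_limit = 1G
```

This only buys headroom; it won’t help if the listing genuinely needs unbounded memory.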

I suspect my issue is that Nextcloud tries to fetch metadata for every single child of the directory.
The time it takes seems proportional to the number of subfolders, which would not happen if Nextcloud were simply using S3’s ListObjects call.
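That intuition can be made concrete with a toy model: a paginated ListObjects call covers up to 1,000 keys per round trip, while fetching metadata per child costs one round trip each. The function names below are illustrative, not Nextcloud internals:

```python
import math

def listobjects_requests(n_children: int, page_size: int = 1000) -> int:
    """Round trips if the directory is read with paginated S3 ListObjects."""
    return max(1, math.ceil(n_children / page_size))

def per_child_requests(n_children: int) -> int:
    """Round trips if metadata is fetched separately for every entry."""
    return n_children

# ~200 subdirectories, as in /s3/photos:
print(listobjects_requests(200))   # 1 list call
print(per_child_requests(200))     # 200 metadata calls
```

With per-child requests, listing time grows linearly with the number of entries, which matches the symptom of only this large folder timing out.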

I found the problem. Even after increasing the memory, I wasn’t able to open the S3 bucket via Nextcloud. It turned out that S3 server access logging had created more than 1.5 lakh (150,000) log files in the bucket, and Nextcloud could not load them all. Currently working on it!


This has been fixed in 20.0 :smiley: