Long time to open directories in S3 external storage

Nextcloud version (eg, 29.0.5): Nextcloud Hub 8 (29.0.6)
Operating system and version (eg, Ubuntu 24.04): Linux cloud 5.10.0-13-amd64 #1 SMP Debian 5.10.106-1 (2022-03-17) x86_64
Apache or nginx version (eg, Apache 2.4.25):
PHP version (eg, 8.3):

My docker-compose.yml:
version: '3'

volumes:
  nextcloud:
  database:

services:
  database:
    image: postgres:alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - 5432:5432
  
  redis:
    image: redis:alpine
    restart: always
    ports:
      - 6379:6379

  nextcloud:
    image: nextcloud:apache
    restart: unless-stopped
    ports:
      - 8081:80
    links:
      - database
      - redis
    depends_on:
      - database
      - redis
    volumes:
      - nextcloud:/var/www/html
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST: database
      DB_PORT: ${POSTGRES_PORT}
      REDIS_HOST: redis
      NEXTCLOUD_TRUSTED_DOMAINS: ${NEXTCLOUD_HOSTNAME}
      OVERWRITECLIURL: ${NEXTCLOUD_URL}
      OVERWRITEPROTOCOL: https
      OVERWRITEHOST: ${NEXTCLOUD_HOSTNAME}
      NEXTCLOUD_ADMIN_USER: ${NEXTCLOUD_ADMIN_USERNAME}
      NEXTCLOUD_ADMIN_PASSWORD: ${NEXTCLOUD_ADMIN_PASSWORD}
      APACHE_BODY_LIMIT: 6000000000
      PHP_MEMORY_LIMIT: 16G
      PHP_UPLOAD_LIMIT: 16G
      POST_MAX_SIZE: 16G
      MAX_INPUT_TIME: 3600
      MAX_EXECUTION_TIME: 3600
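As a quick sanity check on the compose file above, the PHP limits passed via environment variables can be verified inside the running container. This is a sketch assuming the service name `nextcloud` from this compose file; the official image translates `PHP_MEMORY_LIMIT` and `PHP_UPLOAD_LIMIT` into PHP ini settings:

```shell
# Confirm the PHP limits set via environment variables actually landed
# in the container (service name "nextcloud" from the compose file above).
docker compose exec nextcloud php -r 'echo ini_get("memory_limit"), "\n", ini_get("upload_max_filesize"), "\n";'
```

If these print the defaults instead of 16G, the environment variables were not picked up and the container may need to be recreated rather than just restarted.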

PHP config (config.php):
<?php
$CONFIG = array (
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.distributed' => '\OC\Memcache\Redis',
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => [
      'host' => 'redis',
      'port' => 6379,
  ],
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'overwritehost' => '............ok..........',
  'overwriteprotocol' => 'https',
  'overwrite.cli.url' => 'https://............ok..........',
  'upgrade.disable-web' => true,
  'passwordsalt' => '............ok.........',
  'secret' => '............ok.........',
  'trusted_domains' => 
  array (
    0 => 'localhost',
    1 => '............ok..........',
  ),
  'datadirectory' => '/var/www/html/data',
  'tempdirectory' => '/var/www/html/data/tmp',
  'dbtype' => 'pgsql',
  'version' => '29.0.6.1',
  'dbname' => '.......ok....',
  'dbhost' => 'database',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'oc_admin',
  'dbpassword' => '.......ok....',
  'installed' => true,
  'instanceid' => 'ocrsxdaxbilb',
  'skeletondirectory' => '',
  'app_install_overwrite' => 
  array (
    0 => 'wopi',
  ),
  'maintenance_window_start' => 1,
);
nginx config on the server:
server {
	server_name ............;
	client_max_body_size 16G;
	
	location / {
			proxy_pass http://localhost:8081;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_set_header X-Forwarded-Proto $scheme;
			add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always; 
			proxy_read_timeout 300s;

	}
	location ^~ /.well-known {
			location = /.well-known/carddav { return 301 /remote.php/dav/; }
			location = /.well-known/caldav  { return 301 /remote.php/dav/; }
			location /.well-known/acme-challenge	{ try_files $uri $uri/ =404; }
			location /.well-known/pki-validation	{ try_files $uri $uri/ =404; }
			return 301 /index.php$request_uri;
	}

    listen 443 ssl; # managed by Certbot
....

The issue you are facing:

It takes a very long time to open directories in S3 external storage. A directory with 500+ folders inside opens in about 2 minutes. With normal (local) storage, the speed is fast.

Could something be configured incorrectly?
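One setting worth checking for slow S3 listings is the "check for changes" option on the external storage mount: when it is set to check on every access, Nextcloud re-scans the bucket each time a folder is opened instead of serving the cached file index. A sketch of turning it off via `occ` (the mount ID `1` below is a hypothetical placeholder; list your mounts first to find the real ID):

```shell
# List configured external storage mounts to find the mount ID.
docker compose exec -u www-data nextcloud php occ files_external:list

# Set "check for changes" to "Never" (0) for mount ID 1 (hypothetical ID),
# so directory listings come from the cached file index instead of
# re-scanning the S3 bucket on every open.
docker compose exec -u www-data nextcloud php occ files_external:option 1 filesystem_check_changes 0
```

The trade-off is that changes made directly in the bucket outside Nextcloud will only show up after a manual `occ files:scan`.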

Cron works, and Redis seems to work too.
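To go beyond "seems to work", Redis connectivity can be verified from inside the Nextcloud container. This is a sketch assuming the compose setup above, where the hostname `redis` comes from the compose service name:

```shell
# Verify the phpredis extension can reach the Redis service by name.
docker compose exec nextcloud php -r '$r = new Redis(); $r->connect("redis", 6379); var_dump($r->ping());'

# Confirm Nextcloud is actually configured to use Redis for locking.
docker compose exec -u www-data nextcloud php occ config:system:get memcache.locking
```

If the second command does not print `\OC\Memcache\Redis`, file locking falls back to the database, which is noticeably slower under load.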

I have the same performance issues in version 30.

Even folders that only have about 10 items inside them are extremely slow to open.

However, if you view a public link created in the same location, the navigation is fast.

The navigation is also lightning-fast if I use rclone on the same machine to connect to the S3 bucket.

However, I can’t rely on that: when Nextcloud loses the local rclone mount, it thinks all the files are deleted, removes them from your sync, and nukes all the links you’ve made!


Thanks for the message! It’s nice to know I’m not alone 🙂 And the idea with the share function works. It saved my life a little! 🙂
Also, using the Windows client helped me, since synchronization does not occur every time a folder is opened.

Of course, I would like to wait for some other options to make the web client work.
