Nextcloud won't work properly: Gateway Timeout on huge files and poor performance

Hi folks,

I have an issue with Nextcloud. I'm using Nginx with PHP 7.4 FPM and MariaDB. When uploading files over 2 GB I get a Gateway Timeout, and downloading files of up to 28 GB fails. Storage is on NFS, while the VM is running on an iSCSI pool on XCP-ng.

I have done the tweaking that people recommended, but still no joy. I also have a problem when more than one person is downloading or uploading: I get a Gateway Timeout. I am using HAProxy on pfSense with HTTP to HTTPS offloading.

Here is the config.

Contents of nextcloud.conf:

server {
listen 80;
listen [::]:80;
server_name cloud.;

# Add headers to serve security related headers
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header Referrer-Policy no-referrer;

#I found this header is needed on Ubuntu, but not on Arch Linux. 
add_header X-Frame-Options "SAMEORIGIN";

# Path to the root of your installation
root /usr/share/nginx/nextcloud/;

access_log /var/log/nginx/nextcloud.access;
error_log /var/log/nginx/nextcloud.error;

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

location = /.well-known/carddav {
    return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
   return 301 $scheme://$host/remote.php/dav;
}

location ~ /.well-known/acme-challenge {
  allow all;
}

# set max upload size
client_max_body_size 100G;
fastcgi_buffers 64 4K;

# Disable gzip to avoid the removal of the ETag header
gzip off;

# Uncomment if your server is built with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;

error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php; 

location / {
   rewrite ^ /index.php;
}

location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
   deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
   deny all;
 }

location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
   include fastcgi_params;
   fastcgi_split_path_info ^(.+\.php)(/.*)$;
   try_files $fastcgi_script_name =404;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   fastcgi_param PATH_INFO $fastcgi_path_info;
   #Avoid sending the security headers twice
   fastcgi_param modHeadersAvailable true;
   fastcgi_param front_controller_active true;
   fastcgi_pass unix:/run/php/php7.4-fpm.sock;
   fastcgi_intercept_errors on;
   fastcgi_request_buffering off;
   fastcgi_connect_timeout 60;
   fastcgi_send_timeout 3600;
   fastcgi_read_timeout 3600;
}

location ~ ^/(?:updater|ocs-provider)(?:$|/) {
   try_files $uri/ =404;
 index index.php;
}

# Adding the cache control header for js and css files
# Make sure it is BELOW the PHP block
location ~* \.(?:css|js)$ {
    try_files $uri /index.php$uri$is_args$args;
    add_header Cache-Control "public, max-age=7200";
    # Add headers to serve security related headers (It is intended to
    # have those duplicated to the ones above)
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;
    add_header Referrer-Policy no-referrer;
    # Optional: Don't log access to assets
    access_log off;

}

location ~* \.(?:svg|gif|png|html|ttf|woff|ico|jpg|jpeg)$ {
    try_files $uri /index.php$uri$is_args$args;
    # Optional: Don't log access to other assets
    access_log off;
}
}
What I've added to php.ini:

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 3600
max_input_time = 3600
max_input_vars = 1000
memory_limit = 2048M
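
(For completeness: PHP-FPM's own pool limit can also cut off long requests. A sketch of the matching setting, assuming Ubuntu's default pool file; I have not confirmed it is needed here.)

; /etc/php/7.4/fpm/pool.d/www.conf
; terminate a single request only after 3600 s, matching max_execution_time above
request_terminate_timeout = 3600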

Any ideas?

Thanks.

Update: the issue seems to be with Nginx. Nginx just eats memory, which causes it to crash.

I’ve experienced problems with a large number of files of varying sizes when they’re all being uploaded at once with Nextcloud. Yes, this is a problem with Nextcloud.

I thought Nextcloud could handle anything, until I was helping someone set up their own cloud and they told me their files weren’t uploading. After messing with it and researching, I recommended that this person use Seafile, which I helped to set up. The problem with Nextcloud is that it cannot handle a large number of files, especially of varying sizes, unless you also have a cache server running to assist it.

I’ve heard that Redis is a good cache server for Nextcloud, and I’ve read online from some people who’ve used both Seafile and Nextcloud with Redis that once Redis is added in, the file transfers are great for Nextcloud. But I myself haven’t yet set up a Redis server so I can’t really comment much on it. I’d prefer Nextcloud because of all the features. Seafile is pretty much just cloud file storage, but with exceptional performance out of the box. So if Redis really does speed up Nextcloud, with all the apps available for it, I think Nextcloud is by far the best free cloud server out there.
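
From the Nextcloud admin docs, wiring Redis in comes down to roughly these lines in config/config.php (a sketch only; I haven’t run this myself, the local-cache line assumes APCu is installed, and localhost/6379 are just the usual Redis defaults):

  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => [
    'host' => 'localhost',
    'port' => 6379,
  ],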

If you do use Seafile, make sure not to use the web browser for uploading lots of files; only use the sync client. But from what I’ve read and personally experienced, Seafile, even just the basic server, is very fast with lots of files of varying sizes.

Whichever server system you choose will depend on what you’d like to accomplish.

Is this a 32 bit system?

Hi tflidd, it’s on a virtualization server; it’s 64-bit with 32 GB of RAM.

I found the problem: the HAProxy timeout values had to be changed. I changed them to 10m, which fixed the 504 Gateway Timeout problem. But when downloading files that are over 10 GB, the memory gets eaten up. I have 8 GB assigned to the VM, and this happens at speeds of 1000 Mbps or faster.
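
For reference, the change amounts to raising the client and server timeouts; in a plain haproxy.cfg it would look roughly like this (on pfSense these are GUI fields, so treat it as a sketch, and the connect timeout is just a sane default, not something I tuned):

defaults
    timeout connect 10s
    timeout client  10m
    timeout server  10m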

Thanks.

Update,

The problem is solved by changing the timeout in HAProxy and tweaking PHP. Memory exhaustion happens when the connection is over 1 Gbps on the internal network. My workstation is on a 2G LAGG.

Thanks.


@violetdragon Hi, can you describe which parameter you changed? I am troubleshooting why the NC client cannot sync folders with a huge number of files and subfolders. It seems that the client is running into a “Network Job Timeout”, so I am trying to find the parameter responsible for that.
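
For what it’s worth, the only client-side knob I have found so far is the network timeout in the desktop client’s nextcloud.cfg (as I read the client docs; the value is in seconds, and 300 here is just an example):

[General]
timeout=300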

Hi, is it a 504 Gateway Timeout error you are having? Are you using HAProxy or an Nginx reverse proxy?

No reverse proxy, only Apache itself. But I also get timeout problems when connecting with the NC 3.2.0 client and syncing huge folders. So I am looking into what could be the reason.

Can you attach a screenshot of the timeout error? Do you get this while it is processing files or when uploading files? I highly recommend putting the web server behind a reverse proxy.

Hello … I am using a reverse proxy and the same thing happens to me. I have a virtual machine in Proxmox with Nextcloud installed. For several versions now it has taken a long time to load the page, and sometimes it gives me a 504 timeout … can you help me?

Hi,

If you are using Nginx, add fastcgi_read_timeout to the PHP location in the Nextcloud server block and set it to 300.
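
A sketch of what that looks like (standard Nginx directives; adjust the values to your setup):

    # inside the location block that passes requests to php-fpm:
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;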

Can you explain how that is done? Thanks.

Hi,

Can you attach your server block? You haven’t mentioned whether you are using Nginx or not. What is your configuration?

I am using Nginx as a reverse proxy.

Please attach the server block of the backend server.

server {
    listen 80;
    server_name miserver.com;
    rewrite ^ https://miserver.com permanent;
}

server {
    listen 443 ssl;
    server_name miserver.com;

    access_log  /var/log/nginx/miserver.com.log;
    error_log   /var/log/nginx/miserver.com.errors.log;
    ssl_certificate /etc/letsencrypt/live/miserver.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/miserver.com-0001/privkey.pem; # managed by Certbot
    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache         shared:SSL:1m;
    ssl_session_timeout       10m;
    ssl_ciphers               HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://192.168.1.200:443;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

So if you just run it over port 80, does the same happen? Test using port 80 first. Are you using HAProxy or an Nginx reverse proxy on a separate VM?
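
To narrow down where the 504 comes from, it can also help to compare a request straight to the backend with one through the proxy (a sketch; 192.168.1.200 and miserver.com are taken from your config, and -k plus the Host header are only there because the backend is addressed by IP):

# through the reverse proxy
curl -I https://miserver.com/status.php
# straight to the backend, bypassing the proxy
curl -kI -H "Host: miserver.com" https://192.168.1.200/status.php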