Slow Upload Speed | Docker | Nginx Reverse Proxy

I have been running Nextcloud for quite a while and have been working around file upload issues by simply using the desktop sync client. But I would love to give solving the web version's issues another go.

I get “Error when assembling chunks, status code 504” for larger files I upload (most of my files are at least 1 GB). After researching, I came across an explanation that this is a timeout issue in php.ini. How can I access this file to change the config?

The second issue is slow upload speed: it averages around 600 kB/s.

My hardware setup:

  • Raspberry Pi 3 B+
  • 8 TB Western Digital hard drive attached as external storage
  • Raspberry Pi connected via Ethernet to a symmetrical 100 Mb/s up/down fiber connection
  • Laptop that reports around 200 Mb/s upload speed

My software setup:

  • Docker container with Nextcloud 18.0.1
  • Nginx reverse proxy container directing storage.domain.dev to the Nextcloud container
  • MariaDB Docker container

A possible issue could be that the .dev TLD requires an SSL/TLS certificate and something in that setup is misconfigured…

Could anyone point out any possible bottlenecks which I may not be aware of?
My assumption is that a 200 Mb/s uplink feeding a 100 Mb/s downlink and ending at a USB 2.0 external hard drive should still manage far more than 600 kB/s–1.2 MB/s, despite the Raspberry Pi sharing one controller between Ethernet and USB; even the slowest link in that chain, the 100 Mb/s line, works out to roughly 12.5 MB/s.

My apologies if the post does not make much sense (my first one…). I can provide config files and further analysis of the problem.

Thanks for any help!!

You create a php.ini file somewhere on your host filesystem:

# Feel free to add and change any settings you want in here.
upload_max_filesize = 2048M   ; largest single upload PHP will accept
post_max_size = 2048M         ; must be at least as large as upload_max_filesize
max_execution_time = 200      ; in seconds; raising this gives PHP time to assemble large chunked uploads

and map this file into the container with -v /some/where/on/your/host/php.ini:/usr/local/etc/php/php.ini
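If you are running things through docker-compose instead of docker run, the equivalent is a volumes entry; a minimal sketch, using the same placeholder host path:

nextcloud:
  image: nextcloud
  volumes:
    - /some/where/on/your/host/php.ini:/usr/local/etc/php/php.ini:ro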

Assuming you are using the fpm-alpine or fpm image of Nextcloud. There are also some parameters you may have to adjust for nginx, but step by step.
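For example (untested values, just to show the shape): nginx gives the upstream 60 s to respond by default, and assembling a multi-GB file from its chunks on a Pi can easily take longer than that, which would produce exactly a 504. In the server or location block that proxies to Nextcloud:

# give the backend more time to finish assembling chunked uploads
proxy_connect_timeout 60s;
proxy_send_timeout    600s;
proxy_read_timeout    600s;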

Are you sure it isn't the web server for static HTML/CSS in front of the fpm Nextcloud image? Which docker-compose file did you use?

No SSL cert = no Nextcloud. But a cert problem ≠ slow Nextcloud.

Did you fine-tune MariaDB? Most common config files assume that MariaDB is running alone on a machine of your size, but in your case the resources are shared with fpm-php, nginx, and maybe Redis.
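As a rough sketch (the numbers are guesses for a Pi-sized box, not tested recommendations), you could mount a small override file into the MariaDB container under /etc/mysql/conf.d/:

[mysqld]
innodb_buffer_pool_size = 128M  # keep the pool small so php-fpm and nginx still have RAM
innodb_log_file_size    = 32M
max_connections         = 30    # a single-user instance rarely needs more
key_buffer_size         = 8M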

Is your Raspi using swap? Did you look at top or htop?
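For example, directly on the Pi:

free -h     # total vs. used swap at a glance
vmstat 5    # non-zero si/so columns mean the box is actively swapping
htop        # per-process memory use; sort by RES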

Actually, I am pulling the image “nextcloud”, which I assume gives me the Apache version. I'll have a look at what the difference is and whether I'd need to switch to the fpm image.

nextcloud:
  image: nextcloud
  restart: always
  container_name: nextcloud
  hostname: nextcloud

And this is what I have in nginx.conf for the reverse proxy:

server {
	listen 443 ssl;
	server_name storage.urbelis.dev;

	client_max_body_size 10G;
	proxy_request_buffering off;

	fastcgi_buffers 64 4K;

	ssl_certificate /etc/letsencrypt/live/urbelis.dev/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/urbelis.dev/privkey.pem;

	#underscores_in_headers on;

	#ssl_stapling on;
	#ssl_stapling_verify on;

	location / {

		proxy_pass http://nextcloud;

		#proxy_headers_hash_max_size 512;
		#proxy_headers_hash_bucket_size 64;
		proxy_set_header Host $host;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

		#add_header Front-End-Https on;
	}

	location /.well-known/carddav {
		return 301 $scheme://$host/remote.php/dav;
	}

	location /.well-known/caldav {
		return 301 $scheme://$host/remote.php/dav;
	}
}

Yes. And I have increased the swap size to 2 GB from the original 100 MB.

Did you try to upload directly to the Nextcloud image, without the nginx reverse proxy? Just to find out which of the two web servers needs tuning.
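For example (port, user, and file names are placeholders), you could temporarily publish the container's web port in the compose file and upload straight to it via WebDAV:

nextcloud:
  image: nextcloud
  ports:
    - "8080:80"   # temporary: expose Apache directly, bypassing the proxy

# then time an upload straight at the container:
curl -u user:password -T bigfile.bin http://<pi-address>:8080/remote.php/dav/files/user/bigfile.bin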

That prevents the machine from crashing, but it won't improve speed.
If swap is being used, you have to reduce the amount of RAM needed.

Today I tested uploading a 400 MB file with proxy_request_buffering set to “on” and “off”. I'm running a different app behind Nginx, but the results may apply here as well:

  • With request buffering ON, the upload took 36 seconds.
  • With request buffering OFF, the upload took 200 seconds.

That's more than a 5x difference, and in the opposite direction from what I expected!
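In case anyone wants to reproduce this against Nextcloud itself (URL, credentials, and file name are placeholders), you can time the same WebDAV upload, toggling proxy_request_buffering in the proxy config and running nginx -s reload between runs:

time curl -u user:password -T test400M.bin \
    https://storage.domain.dev/remote.php/dav/files/user/test400M.bin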

But before you turn proxy_request_buffering back on, there's a benefit to turning it off that's worth considering:

Ideally, large file uploads are restricted to specific URLs, like a route that only authenticated admins and other trusted users can access. In Nginx, client_max_body_size controls that, and it can be set per location. So let's say you have an Nginx location block that only allows large uploads on a specific admin-only URL.
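Something like this, where /admin/upload and the upstream name are made up for illustration:

client_max_body_size 10M;           # tight default for the whole site

location /admin/upload {
    client_max_body_size 10G;       # large bodies allowed only on this route
    proxy_pass http://backend;      # placeholder upstream
}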

That sounds like a good idea, but it only helps if your app can authenticate the request before all the data has been sent to your server. And that's the value of turning off buffering: when buffering is off, the data immediately starts streaming from Nginx to the app, where the app can check the headers to see if authentication succeeds and stop processing the request right away if it fails.

When proxy_request_buffering is ON, Nginx will buffer data up to client_max_body_size before any of it reaches the backend, so authentication can't happen until the upload is complete. A malicious actor could therefore send a large payload to an admin-only route just to force Nginx to buffer all that data. Eventually a 413 will be returned due to the size, but in the meantime the server will have consumed far more resources than if the request had been rejected promptly.


I am unable to reproduce these results on my Nextcloud, though I am using Caddy instead of Nginx.

My understanding of buffering is that it is rather dependent on the app, and since you didn't test on Nextcloud, I don't think the results have much bearing.