I have been running Nextcloud for quite a while. I have been solving file upload issues by simply using the desktop sync client. But I would love to give another go trying to solve issues with the web version.
I'm getting “Error when assembling chunks, status code 504” for larger files I upload (most of my files are 1 GB or more). After some research I came across an explanation that this is a timeout issue in php.ini. How can I access this file to change the config?
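If you're on the official Nextcloud Docker image (which is built on the PHP base image), php.ini lives inside the container, and any extra .ini file dropped into PHP's conf.d scan directory is picked up automatically. A rough sketch, assuming the official image and a container named `nextcloud` (adjust the name, path and limits to your setup):

```shell
# See which php.ini and scan directory PHP actually uses in the container
docker exec nextcloud php --ini

# Drop an override file into PHP's conf.d scan directory
docker exec nextcloud sh -c 'cat > /usr/local/etc/php/conf.d/uploads.ini <<EOF
upload_max_filesize = 16G
post_max_size = 16G
max_execution_time = 3600
max_input_time = 3600
EOF'

# Restart the container so the new values are loaded
docker restart nextcloud
```

One caveat: a 504 is usually the reverse proxy giving up on the backend, so raising the PHP limits alone may not be enough; you may also need to raise `proxy_read_timeout` in the Nginx proxy config in front of Nextcloud.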
The second issue is slow upload speed: it averages around 600 kB/s.
My hardware setup:
Raspberry Pi 3 B+
8 TB Western Digital hard drive attached as external storage
Raspberry connected to a symmetrical 100 Mb/s up/down fiber connection via ethernet
Laptop reporting around 200 Mb/s upload speed
My software setup:
Docker container with Nextcloud 18.0.1
Nginx reverse proxy container directing storage.domain.dev to the Nextcloud container
MariaDB Docker container
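For reference, the stack described above might look roughly like this docker-compose.yml — the service names, volumes, passwords and mount paths here are placeholders, not my actual config:

```yaml
version: "3"

services:
  db:
    image: mariadb:10.4
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme

  app:
    image: nextcloud:18.0.1
    restart: always
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
    volumes:
      # external 8 TB WD drive mounted on the host
      - /mnt/wd8tb/nextcloud-data:/var/www/html/data

  proxy:
    image: nginx:stable
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

volumes:
  db:
```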
A possible issue could be the .DEV domain requiring an SSL/TLS certificate and some misconfiguration of that…
Could anyone point out any possible bottlenecks which I may not be aware of?
My assumption is that going from a 200 Mb/s uplink, over the 100 Mb/s fiber link, onto a USB 2.0 external hard drive should still be faster than 600 kB/s - 1.2 Mb/s, even though the Raspberry Pi shares a single controller between Ethernet and USB.
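Converting the units to sanity-check that assumption (just bits-vs-bytes arithmetic):

```python
# 1 byte = 8 bits; "Mb" = megabit, "MB"/"kB" = megabyte/kilobyte.

fiber_mb_per_s = 100 / 8               # 100 Mb/s fiber ceiling -> 12.5 MB/s
usb2_mb_per_s = 480 / 8                # USB 2.0 signalling rate -> 60 MB/s
observed_mbit_per_s = 600 * 8 / 1000   # 600 kB/s observed -> 4.8 Mb/s

print(fiber_mb_per_s)       # 12.5
print(usb2_mb_per_s)        # 60.0
print(observed_mbit_per_s)  # 4.8
```

So the observed 600 kB/s is only about 5% of what the fiber link alone should allow, which is why I suspect a bottleneck elsewhere.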
My apologies if the post does not make much sense (my first one…). I can provide config files and further analysis of the problem.
Today I tested uploading a 400 MB file with proxy_request_buffering set to “on” and “off”. I’m running a different app behind Nginx, but the results may apply here as well:
With request buffering ON, the upload took 36 seconds.
With request buffering OFF, the upload took 200 seconds.
That’s roughly a 5.5x difference, and in the opposite direction from what I expected!
But before you turn proxy_request_buffering back on, I found there was a benefit to turning it off that’s worth considering:
Ideally, large file uploads are restricted to specific URLs, like a route that only authenticated admins and other trusted users can access. In Nginx, client_max_body_size controls the allowed request body size per location. So let’s say you have an Nginx location block that only allows large uploads on a specific admin-only URL.
That sounds like a good idea, but it only helps if your app can authenticate the request before all the data has been sent to your server in the first place. And that’s the value of turning off buffering: when buffering is off, the data starts streaming from Nginx to the app immediately, so the app can check the headers and stop processing the request as soon as authentication fails.
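A sketch of what that might look like in Nginx — the /admin/upload path and the `backend` upstream name are made up for illustration:

```nginx
# Default: keep request bodies small everywhere
client_max_body_size 1m;

server {
    listen 443 ssl;
    server_name storage.domain.dev;

    location / {
        proxy_pass http://backend;
    }

    # Only this trusted route accepts large uploads
    location /admin/upload {
        client_max_body_size 10g;

        # Stream the body to the backend immediately, so the app can
        # reject unauthenticated requests before the upload finishes
        proxy_request_buffering off;

        proxy_pass http://backend;
    }
}
```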
When proxy_request_buffering is ON, Nginx will buffer data up to client_max_body_size and only then pass it to the backend, before authentication has succeeded. So a malicious actor could send a large payload to an admin-only route just to force Nginx to buffer all of it. Eventually a 413 will be returned due to the size, but in the meantime the server will have consumed more resources than if the request had been rejected promptly.