2GB Download Limit Nextcloud - Letsencrypt - Unraid

I'm using Nextcloud on my Unraid server, proxied by Letsencrypt, with my own domain/subdomain setup.

It runs fine, but when I try to download files bigger than 2 GB from my cloud I always get a network error and the download stops at 2 GB.

I found a few threads on the net, but none of them provided a solution for my problem. If there's anyone around here who can help: what's the necessary info I need to provide for troubleshooting?

I'm having exactly the same problem on OpenMediaVault.
I think I have traced the error back to the nginx log of the Letsencrypt docker.

2020/02/26 22:08:28 [error] 390#390: *736 upstream prematurely closed connection while reading upstream, client: XXX.XXX.XXX.XXX, server: XXXXX., request: "GET /s/[FILE]

I don’t know if this part of the nextcloud.subdomain.conf is responsible:

location / {
include /config/nginx/proxy.conf;
resolver valid=30s;
set $upstream_nextcloud nextcloud;
proxy_max_temp_file_size 2048m;
proxy_pass https://$upstream_nextcloud:443;
}

This line in particular: proxy_max_temp_file_size 2048m;

Any insights?


I tried editing the nextcloud.subdomain.conf and it did the trick for me: a file of ±3 GB downloaded without a problem via a public sharing link in Chrome.

What I did:
Go to your Letsencrypt config folder, i.e. config\Letsencrypt\nginx\proxy-confs.

Look there for nextcloud.subdomain.conf and edit it with Notepad++ or something similar.

Change

proxy_max_temp_file_size 2048m;

to

proxy_max_temp_file_size 8192m;

8192 is just a multiple of 2048, but I think 10000 could do as well.

I think the only thing you need to keep in mind is that this temp file will be written to your docker image, so the disk containing this image must be big enough to hold the file. I am not entirely sure, but this is what I could make of it after an evening of testing and reading up.
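For reference, after the edit the relevant part of nextcloud.subdomain.conf should look roughly like this (a sketch based on the snippet quoted earlier in the thread; your file may differ slightly):

```nginx
location / {
    include /config/nginx/proxy.conf;
    resolver valid=30s;
    set $upstream_nextcloud nextcloud;
    # raised from 2048m so downloads bigger than 2 GB are not cut off
    proxy_max_temp_file_size 8192m;
    proxy_pass https://$upstream_nextcloud:443;
}
```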

Hope this helps you out.


I found the bigger temp file was causing trouble for my smaller SSD hosting the OS and the dockers. I came across

-v /YOUR/LOCATION:/var/lib/nginx/tmp

as a possible solution for mapping the temp folder to your host. Didn't try it yet though.
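In docker-compose terms, the same mapping would look something like this (a sketch; the service name `letsencrypt` and the host path are assumptions, match them to your own stack):

```yaml
# Hypothetical service name and host path; adjust to your setup.
services:
  letsencrypt:
    volumes:
      - /mnt/user/appdata/nginx-tmp:/var/lib/nginx/tmp
```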

Awesome, thank you. It works for me as well. I set it to 16384.

Just be sure that you have 16384 MB of free space on your OS disk. You should be fine that way; otherwise you can try to compose your Letsencrypt docker with:

-v /YOUR/LOCATION:/var/lib/nginx/tmp

Glad it worked for you!

Have you tried turning it off?
proxy_max_temp_file_size 0;

Next thing I will try is:

proxy_max_temp_file_size 0;

For now, my OS, which hosts the docker image, only has 9 GB of free space left. If someone downloaded a file bigger than 9 GB, it would swallow all the space that's left. I did rebuild the docker, mapping
/var/lib/nginx/tmp to a new folder on my filesystem. This seems to work for me, so now I have 80 GB free for the temporary file to use. (There are no 80 GB files on my system, but by downloading a lot of files at the same time you could hit this limit.)
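A quick way to sanity-check that the mapped folder has enough headroom for the temp files (a sketch; `NGINX_TMP` is a placeholder, point it at your own mapped host folder):

```shell
# Print free megabytes on the folder mapped to /var/lib/nginx/tmp.
# NGINX_TMP is a placeholder; defaults to / so the check runs anywhere.
NGINX_TMP="${NGINX_TMP:-/}"
df -Pm "$NGINX_TMP" | awk 'NR==2 {print $4 " MB free on " $6}'
```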

I’ll report back if I find something is off…

client_max_body_size 0;
location / {
include /config/nginx/proxy.conf;
resolver valid=30s;
set $upstream_nextcloud nextcloud;
proxy_max_temp_file_size 0; # changed from 2048m
proxy_pass https://$upstream_nextcloud:443;
}

With this setting in place, nginx no longer buffers the shared-link download to a temporary file at all: according to the nginx docs, a value of 0 disables buffering of responses to temporary files, so the download is streamed straight from the upstream, and the docker image should not grow anymore either.

This was actually the only thing I needed to change to get bigger files (movies, series, etc.) to download, so thanks MeiRos!

I still have a problem though:
Whenever I transfer a big file from "SMB/CIFS A" to "SMB/CIFS B", I get a timeout in the Android app, or a "could not move FILENAME" error when trying via the web interface.
The transfer does complete eventually, though, so most things seem to be working.

Letsencrypt/NGINX error log:

2020/03/03 10:05:33 [error] 392#392: *1878 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "GET ///user/recordings.php HTTP/1.1", upstream: "fastcgi://", host: "XXX.XXX.XXX.XXX"

2020/03/03 11:18:52 [error] 392#392: *3369 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client:, server: _, request: "GET /index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP HTTP/1.1", upstream: "fastcgi://", host: "xxx.xxx.xxx.xxx", referrer: "http://xxx.xxx.xxx.xxx:80/index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP"

Nextcloud/NGINX error log:
2020/03/03 11:55:20 [error] 352#352: *4854 upstream timed out (110: Operation timed out) while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "MOVE /remote.php/dav/files/Colin/Discovery/Downloads/21%20Jump%20Street%202012%20BluRay.720p.DTS.x264-CHD HTTP/1.1", upstream: "fastcgi://", host: "host.hosting.host"

Any clue as to what I might have missed so I can fix this as well?

Kind regards,

For anybody having this issue and coming here for a fix, I managed to get it working eventually:

It turned out to be a FastCGI timeout in Letsencrypt or Nextcloud; I am not really sure which one, because I edited both nginx.conf files to allow for a bigger FastCGI timeout.

nginx.conf, http section:

http {

# Basic Settings

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 3600;
types_hash_max_size 2048;

# Timeout tryouts
#proxy_connect_timeout 600;
#proxy_send_timeout 600;
#proxy_read_timeout 600;
#send_timeout 600;
fastcgi_read_timeout 3600;
fastcgi_send_timeout 3600;
#fastcgi_connect_timeout 300;
#client_header_timeout 300;
#client_body_timeout 300;

I put the two uncommented fastcgi timeout settings above in the conf file, and the timeouts after 1 minute disappeared. Maybe there is a better way; I hope someone who knows more comes across this and can point it out to me.
For me this seems to work, at least.
