Nextcloud web client fails when trying to upload files over 2 GB

I’m having an issue with Nextcloud processing large files, say larger than 2 GB.

I’m using the latest official Docker images from Nextcloud. I’m using Postgres as the database backend via the official postgres container, nginx as a reverse proxy via the official nginx container, and Redis as a cache via the official redis container.

Everything works splendidly, except for processing large files.

When uploading large files, the upload itself seems to complete: most, if not all, of the chunks are uploaded. The problem seems to be in recombining the chunks and then moving the assembled file. It’s not a space issue in any temp directory or on the host OS; I have terabytes free. It’s not a memory issue either, because I always have at least 2 GB free.

This is a sample from the docker container logs for my nextcloud container after the issue happens:

1.1.1.1 - - [28/May/2024:17:01:52 +0000] "MOVE /remote.php/dav/uploads/zipadmin/web-file-upload-30e6b46e1f067eb1/.file HTTP/1.0" 503 793 "-" "Mozilla/5.0 (Windows NT 10.0; rv:125.0) Gecko/20100101 Firefox/125.0"
1.1.1.1 - - [28/May/2024:17:02:53 +0000] "DELETE /remote.php/dav/uploads/zipadmin/web-file-upload-30e6b46e1f067eb1 HTTP/1.0" 204 569 "-" "Mozilla/5.0 (Windows NT 10.0; rv:125.0) Gecko/20100101 Firefox/125.0"

I set my Nextcloud log level to 0 (debug) and started monitoring my Nextcloud log file. There were quite a few errors, but this one stood out to me:

{"reqId":"pARobN2XbMSNgmaZJ8wS","level":0,"time":"2024-05-28T17:02:53+00:00","remoteAddr":"1.1.1.1","user":"zipadmin","app":"core","method":"MOVE","url":"/remote.php/dav/uploads/zipadmin/web-file-upload-30e6b46e1f067eb1/.file","message":"!!! Path 'uploads/web-file-upload-30e6b46e1f067eb1/70' is not accessible or present !!!","userAgent":"Mozilla/5.0 (Windows NT 10.0; rv:125.0) Gecko/20100101 Firefox/125.0","version":"29.0.0.19","data":{"app":"core"}}

It looks like one of the chunks wasn’t there, so the whole file couldn’t be reassembled, and the entire upload simply failed instead of retrying or resuming.
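One way to narrow this down might be to drive the same chunked-upload WebDAV flow from the command line and see whether it fails there too, which would rule the web client in or out. A rough sketch, assuming a server at `https://cloud.example.com`, an app password for the `zipadmin` user, and illustrative chunk sizes (the URL pattern matches the `uploads/.../.file` MOVE seen in the logs above):

```shell
# Hypothetical manual walk-through of Nextcloud's chunked-upload
# WebDAV flow. Server URL, credentials, and file names are placeholders.
NC=https://cloud.example.com/remote.php/dav
AUTH="zipadmin:app-password"
UPLOAD_ID=manual-test-upload

# 1. Create the upload collection for this transfer
curl -u "$AUTH" -X MKCOL "$NC/uploads/zipadmin/$UPLOAD_ID"

# 2. Split the file into chunks and upload each one
split -b 10M bigfile.mkv chunk-
i=1
for c in chunk-*; do
  curl -u "$AUTH" -T "$c" "$NC/uploads/zipadmin/$UPLOAD_ID/$i"
  i=$((i + 1))
done

# 3. Ask the server to assemble the chunks and move the result into place
curl -u "$AUTH" -X MOVE \
  -H "Destination: $NC/files/zipadmin/bigfile.mkv" \
  "$NC/uploads/zipadmin/$UPLOAD_ID/.file"
```

If the final MOVE returns 503 here as well, the problem is server-side rather than in the web client.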

I am aware of this bug, but the details of that bug seem to indicate it’s a problem with chunks past the 200th, whereas this happened with chunk 70. The behavior is consistent for me: uploading large files through the web client fails. Since I’m trying to set up a file server so people can share large video files for a film we’re working on, this makes it unusable for me.

Is there anything I can do to avoid this problem happening again? Is this a different type of bug to what I linked above?

What are your PHP max upload limit settings?

I read the documentation on uploading large files and changed every setting that I could.

I had trouble changing the PHP configuration directly in the Docker image. Editing php-production.ini and php-development.ini (names may be slightly different, I can’t check right now) had no effect, nor did creating a php.ini in that directory to add settings. phpinfo would show the ini file I created as the loaded configuration file, but the settings themselves never took effect.

Instead, I am changing the PHP values via .htaccess in the Docker image, and phpinfo shows them as changed.

The settings in my .htaccess at the moment are:

php_value  memory_limit  2056M
php_value  upload_max_filesize  56G
php_value  post_max_size  56G
php_value  php_memory_limit   2056M

php_value  php_upload_limit  56G
php_value max_input_time 7200
php_value max_execution_time 7200
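To double-check which values are actually in effect, it may help to query PHP inside the container directly. A sketch, assuming the container is named `nextcloud` (an assumption; substitute your container name). Note that the CLI can load a different configuration than mod_php/php-fpm, so the output here may not match what the web server is using:

```shell
# Show which ini files the PHP CLI inside the container loads.
# CAUTION: mod_php/php-fpm serving web requests may load different
# configuration files than the CLI shown here.
docker exec nextcloud php --ini

# Print the effective values as the CLI sees them.
docker exec nextcloud php -r 'echo ini_get("upload_max_filesize"), "\n";'
docker exec nextcloud php -r 'echo ini_get("post_max_size"), "\n";'
docker exec nextcloud php -r 'echo ini_get("memory_limit"), "\n";'
```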

It’s best to set them as environment variables (`--env`) on the `docker run` command line, and/or use a docker-compose .yaml template to specify them there directly.
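For the official nextcloud image, the documented variables for this are `PHP_MEMORY_LIMIT` and `PHP_UPLOAD_LIMIT`. A minimal compose sketch (service layout and values are illustrative, not a complete stack):

```yaml
# Illustrative fragment only; your real compose file will also need
# the database, redis, volumes, etc.
services:
  nextcloud:
    image: nextcloud:latest
    environment:
      - PHP_MEMORY_LIMIT=2G
      - PHP_UPLOAD_LIMIT=56G
```

These map onto `upload_max_filesize`/`post_max_size` and `memory_limit` inside the image, which avoids hand-editing ini files in the container.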

Technically speaking, you should be able to use `docker run -v /your/custom/local/php.ini:/your/custom/config/file/php.ini imagename`.

But for some reason the folks who built AIO recommend using `--env`, so I assume there must be a reason for it: could be some dependency, or just keeping things simpler. Technically I can’t see a reason not to give the container a custom php.ini via `-v` mounts.

Note: changing the php.ini directly inside the container will most likely not work; that change is not persistent across container restarts.
Note 2: keep in mind that changing the values in .htaccess may apply when PHP is served via the web engine (apache2/nginx etc.), but if the Nextcloud AIO container uses php-fpm, those params need to be changed in php-fpm’s php.ini for them to have any effect on Nextcloud itself.

In a nutshell: try `--env` at `docker run` time, or `docker run -v` to supply the container your custom-made php.ini (make sure you use the one for php-fpm, not for CLI or Apache, otherwise it won’t work).
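A sketch of the `-v` approach. This assumes the image is built on the official PHP base images, which scan `/usr/local/etc/php/conf.d/` for extra .ini files; that path is an assumption, so check your image’s documentation before relying on it:

```shell
# Mount a drop-in ini file rather than replacing the whole php.ini.
# ASSUMPTION: the image scans /usr/local/etc/php/conf.d/ for extra
# .ini files, as the official php-based images do.
cat > uploads.ini <<'EOF'
upload_max_filesize = 56G
post_max_size = 56G
memory_limit = 2G
max_input_time = 7200
max_execution_time = 7200
EOF

docker run -d \
  -v "$PWD/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini:ro" \
  nextcloud:latest
```

A drop-in file survives image updates better than editing files inside the container, since the mount is re-applied every time the container is recreated.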