Experiencing an unwanted 1 GB upload hard limit, unable to find the culprit

I'm having problems uploading files larger than 1 gigabyte, and problems finding where the culprit lies.

It appears to be a hard limit. Files of 995 MB upload with no problem, but anything past 1000 MB gets to 15-65% and then the upload is interrupted. No exceptions. This happens both in the web interface and through WebDAV, and I have not installed any NC apps since the time when I was still able to upload files larger than 1 GB.

In my main php.ini I have the following set:

max_input_time = 3600
max_execution_time = 3600
upload_max_filesize = 2048M
post_max_size = 2048M
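
(A side note on these values: on a stock Ubuntu/Apache setup with mod_php they live in the apache2 php.ini rather than the CLI one, and they are only re-read after restarting Apache. A rough sketch, assuming the distro's PHP 8.1 packages:)

sudo nano /etc/php/8.1/apache2/php.ini    # the php.ini that Apache/mod_php actually loads (not the CLI one)
sudo systemctl restart apache2            # required before changed values take effect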

I can’t see anything in NC’s config.php that I feel might be relevant.

I’m using Ubuntu Server 22.10 and a fresh, self-installed, very basic NC installation on Apache2, with APCu as the memory cache. I do have ClamAV installed, and NC calls it as an executable.
The server has plenty of disk space, 16 GB of RAM, etc.

The error log in NC reveals the following (logged at the same time the file was being uploaded/interrupted):

[no app in context] Error: Sabre\DAV\Exception\BadRequest: Expected filesize of 1321583883 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) 0 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side. at <>

  1. /var/www/nextcloud/apps/dav/lib/Connector/Sabre/Directory.php line 151
    OCA\DAV\Connector\Sabre\File->put()
  2. /var/www/nextcloud/3rdparty/sabre/dav/lib/DAV/Server.php line 1098
    OCA\DAV\Connector\Sabre\Directory->createFile()
  3. /var/www/nextcloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php line 504
    Sabre\DAV\Server->createFile()
  4. /var/www/nextcloud/3rdparty/sabre/event/lib/WildcardEmitterTrait.php line 89
    Sabre\DAV\CorePlugin->httpPut()
  5. /var/www/nextcloud/3rdparty/sabre/dav/lib/DAV/Server.php line 472
    Sabre\DAV\Server->emit()
  6. /var/www/nextcloud/3rdparty/sabre/dav/lib/DAV/Server.php line 253
    Sabre\DAV\Server->invokeMethod()
  7. /var/www/nextcloud/3rdparty/sabre/dav/lib/DAV/Server.php line 321
    Sabre\DAV\Server->start()
  8. /var/www/nextcloud/apps/dav/lib/Server.php line 360
    Sabre\DAV\Server->exec()
  9. /var/www/nextcloud/apps/dav/appinfo/v2/remote.php line 35
    OCA\DAV\Server->exec()
  10. /var/www/nextcloud/remote.php line 171
    require_once(“/var/www/nextcl … p”)

PUT /remote.php/dav/files/admin/Uploadtest/Testfile.rar
from 185.195.xxx.xxx by admin at 2023-03-06T16:10:30+00:00

At a later stage, after the above log was generated, I also tried an improvised fix by adding

upload_tmp_dir = /var/big_temp_file/

… in php.ini, and creating a "big_temp_file" folder in /var/. The result of that was endless error logs saying NC can't read/write that folder. This is not the origin of the initial problem, but I would also be interested in knowing which user needs read/write access to that folder in order to use it. Would it be "www-data"? I'm assuming I will be able to set those rights using chown.

Thank you for any help

Please take a look at the docs. There are a number of discussions here as well; "large files upload" is a good search term.

:thinking: I have now re-installed the whole system (including the operating system), and I really did it by the book this time, by the numbers exactly. Not one error occurred.
On top of that, I have increased the upload values in php.ini and am still using APCu. I haven't touched anything else in Nextcloud, nothing at all. And I have really left no stone unturned that I'm aware of, including the local network config etc.

And STILL there is a hard ceiling of 1000 MB on uploads, right out of the gate!
I still get the exact same error message in the NC log.

This, added to the fact that lots of people have described this problem here on the forum for several years, while barely anyone (that I have read on the forum) has reported reaching a solution, makes me call this problem a bug. And from what I've read on the forum, I am far from the first one to do so.

If anyone cares to chime in: are you able to upload files larger than 1 GB to your NC? And if so, would you care to share or compare settings? Preferably if you're on Apache2 and using a simpler setup rather than a large, complex one.

Thank you.

Yes, I am able to upload files bigger than 1GB. I am using AIO.

Basically this is the best document: "Uploading big files > 512MB" in the Nextcloud Administration Manual. You'll need to adjust each and every value mentioned there in your own config of all web servers, proxies and PHP processes in the chain.
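
A rough sketch of what "every value in the chain" typically means (these directive names are the standard ones; which layers apply depends on your own setup, and the sizes are just examples):

# PHP (php.ini)
upload_max_filesize = 16G
post_max_size = 16G
max_input_time = 3600
max_execution_time = 3600

# Apache vhost/conf (0 = unlimited)
LimitRequestBody 0

# nginx, only if it sits in front as a reverse proxy
client_max_body_size 16G;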

Also make sure that you are not using some service that might limit things to a certain size, e.g. Cloudflare, which limits uploads/chunks to 100MB. See the nextcloud/all-in-one README on GitHub for notes on Cloudflare.

1 Like

Thanks for writing, Szaimen. :+1:
I remember being able to make larger uploads when using NC as snap. But never as self installed. I chose to install myself, since snap seemed to limit the possibilities of adjusting things too much for me, forcing me into weird, unconventional and uncertain workarounds etc. I didn’t try the AIO. I’m not sure how ‘boxed in’ I will get from using it, compared to manual installation.

How does AIO rank in terms of tweakability, between snap and manual installation?

As for the NC manual, I was working that thing for 17 hours straight yesterday, trying all the different settings, variations and even improvisations I could think of: using different browsers and different file types, having friends try uploading either directly or via VPN/TOR, switching between APCu and Redis with or without file locking, trying with and without the local router, different router settings, etc. etc. etc.

No matter what I could think of, I always got exactly the same error message in the log that I get now on an ultra-fresh install:

[no app in context] Error: Sabre\DAV\Exception\BadRequest: Expected filesize of 1231491541 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) 0 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side.

It never changed no matter what I did, which is part of why I feel this could be a bug. But if it were, snap or Docker installs would suffer from this bug too, right? Aaaah … confusion.

I am not using Cloudflare anywhere in my chain. I am, however, using No-IP.com for the domain name (I'm on a dynamic IP). But I was on the same domain when I tried NC as a snap and was able to upload more than 1 GB, so my first gut feeling is that the domain redirection shouldn't be the problem. I will send them an e-mail and ask anyway, since I'm out of options now, as far as my knowledge and experience go.

Sorry if I sound a bit dry. My hope is running a bit low :worried:

Yep. Reporting back that I have now tried both the snap and AIO installations, re-installing the OS in between each.
Uploading 9 GB files works with no problems at all; no config was even needed.
So it ought to be bad tuning/settings, almost right out of the box, some kind of gravel in the system. Or potentially Ubuntu Server 22.10 is somehow different in a way that Nextcloud hasn't been updated for yet.

1 Like

Hey Ange,

I can confirm that 1 GB+ uploads work fine on Ubuntu 22 LXC and Debian 11 LXC (I run both), with PHP 8.1 and Nginx.

One piece of advice I can give you is to make sure your NGINX/PHP tmp folder has enough space. I had an issue a while back where my uploads were terminating early because the drive where the cache folder was located was running out of space due to log files on the same drive.

In my scenario I was running a small root container (16GB) with another mounted container for NC storage (1TB), but as I discovered, PHP caches files to the root container first and then saves them to the proper location.

This actually brings me to another issue I had that caused the same problem, due to owner permissions on the PHP cache folder; once I fixed the permissions, everything worked properly. PHP seems to cache uploads in memory up to a certain point, then dump the memory to a disk cache, and then, once the upload is done, compile the cached bits into a file in the proper location.
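
In case it helps, a quick sanity check for both of those failure modes (free space and permissions on the PHP upload temp dir), assuming the default /tmp and a www-data PHP user:

df -h /tmp                                    # enough free space for the whole upload?
ls -ld /tmp                                   # a normal tmp dir is world-writable: drwxrwxrwt
sudo -u www-data touch /tmp/nc_write_test && echo "www-data can write here"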

Hope this helps!

1 Like

Yes. All the activities of the server are done by its user, www-data.
So you will have to chown www-data:www-data /var/big_temp_file or (to make it world readable/writable like any temp folder) chmod 777 /var/big_temp_file
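
Put together, the whole sequence for the /var/big_temp_file idea from earlier would look roughly like this (assuming mod_php, so Apache has to be restarted before the upload_tmp_dir change is picked up):

sudo mkdir -p /var/big_temp_file
sudo chown www-data:www-data /var/big_temp_file
sudo chmod 770 /var/big_temp_file      # or chmod 1777 for a classic world-writable tmp dir
sudo systemctl restart apache2         # so PHP re-reads upload_tmp_dir from php.ini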

How are you trying to upload? Web browser, WebDAV, desktop client?
Do you have multiple PHP versions installed?
PHP module or PHP-FPM?
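
For what it's worth, on Ubuntu those last two questions can be answered quickly from the shell (package and module names assume the distro defaults):

php -v                                  # CLI version; can differ from what Apache uses
sudo update-alternatives --display php  # which CLI binary is currently the default
apache2ctl -M | grep -i php             # "php_module" listed = mod_php is active
systemctl status php8.1-fpm             # only exists/runs if PHP-FPM is installed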

The largest file I have uploaded was about 60 GB, 2 weeks ago. No problem.
Nextcloud "pure" on bare-metal Ubuntu Server with php8.0-fpm.

I can confirm (from my point of view) that one is more flexible when free from containers, but it requires a little more skill here and there to get the most out of it.

You also have to like it somehow when things don’t work, to make them run. :wink:

Ah, I understand about the chmod rights now. Thx

I’ve been trying to upload both through the web interface logged in (as admin) and logged out via an upload-enabled shared folder, and through WebDAV as well. I’m uploading via the domain name, not via ‘localhost’ or 192.168.x.x etc. Through the Nextcloud sync client there is no problem, however. I also asked friends to upload (via the domain name); they experienced the same thing (the 1 GB ceiling).

I was able to set Ubuntu to consider PHP 8.1 as the default, and I don’t have PHP-FPM installed.

Unfortunately I hate it when things don’t work and I have to make them run :stuck_out_tongue:

OK. Here is a list of things I would test if this were affecting me:

Apache config: Make sure that no LimitRequestBody is set.
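
A quick way to check that (paths assume the Debian/Ubuntu Apache layout), keeping in mind that recent Apache 2.4 releases ship a built-in limit even when the directive is absent:

grep -Ri "limitrequestbody" /etc/apache2/   # any explicit limit hiding in the config?
apache2ctl -v                               # Apache 2.4.54+ defaults to a 1 GiB body limit
                                            # unless LimitRequestBody is set explicitly (0 = unlimited)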

my php.ini (excerpt):

max_execution_time = 3600
max_input_time = -1
memory_limit = 1G
post_max_size = 0
file_uploads = On
upload_tmp_dir = "/tmp_upload"
upload_max_filesize = 16G
max_file_uploads = 200

I have Redis enabled; if you haven’t, think again. It brings benefits.

Did you try switching ClamAV off, to see if it is the cause of the limitation (timeout)?
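
If the ClamAV integration is the standard Antivirus for Files app, it can be switched off and back on from the command line to rule it out; a sketch, assuming the usual app id files_antivirus and the install path from the log above:

sudo -u www-data php /var/www/nextcloud/occ app:disable files_antivirus
# ... test a >1 GB upload ...
sudo -u www-data php /var/www/nextcloud/occ app:enable files_antivirus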

Did you adjust max_chunk_size to your network speed?
The default is 10485760 (10 MiB).
If you have a fast line, you can set a larger chunk size to optimize.
I have a 1 Gigabit fiber connection and have set it to 50 MB:

./occ config:app:set files max_chunk_size --value 52428800
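
(You can read the current value back, or remove it again to fall back to the 10 MiB default, with:)

./occ config:app:get files max_chunk_size
./occ config:app:delete files max_chunk_size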

Switch logging to verbose on Apache, on Nextcloud etc.

Open logs in terminal windows with tail -F $logfile and follow it “live”
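
For example (log locations assume a default Ubuntu/Apache install with the Nextcloud data directory under /var/www/nextcloud/data):

tail -F /var/log/apache2/error.log /var/log/apache2/access.log
tail -F /var/www/nextcloud/data/nextcloud.log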

Use appropriate analysis tools:
iftop shows the network traffic live
top and atop show the system resources in real time

etc. etc. etc.

2 Likes

LimitRequestBody 2000000000 does it for me.
I also make sure /tmp is big enough.

alpine:~# cat /etc/apache2/conf.d/nextcloud.conf 
Alias /nextcloud "/usr/share/webapps/nextcloud"

<Directory /usr/share/webapps/nextcloud>
  LimitRequestBody 2000000000
  Require all granted
  AllowOverride All
  Options FollowSymLinks MultiViews

  <IfModule mod_dav.c>
    Dav off
  </IfModule>
</Directory>
alpine:~#

LimitRequestBody doesn’t do anything but restrict. It specifies the number of bytes that are allowed in a request body. A value of 0 means unlimited, and on older Apache releases leaving the directive out meant unlimited as well; on recent 2.4.x releases (2.4.54 and newer), however, the built-in default when it is absent is 1 GiB.

The best is not to restrict it at all: either leave LimitRequestBody unset (on Apache older than 2.4.54) or set it to 0 explicitly.
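
On an Apache where the built-in default applies, making that explicit looks like this (the path matches the install from the log earlier in the thread; adjust to your own, and 0 = unlimited, or pick whatever ceiling you are comfortable with):

<Directory /var/www/nextcloud>
  LimitRequestBody 0
</Directory>

# then reload Apache:
sudo systemctl reload apache2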

I opened a bug report, as some of my users are having the same problem, which could be solved by setting LimitRequestBody to unlimited:

1 Like