Several issues with uploading large files (80+ MiB) to S3 primary storage

Nextcloud version: 24.0.4; 25 beta 2; latest nightly (same issue in all of 'em)
OS: Rocky Linux 8.6
Server: nginx 1.14.1
PHP version: 7.4.19

We are using Nextcloud with Wasabi (S3 API) as the backend and are having three related issues.

Issue #1: We cannot upload large files (e.g. 80 MiB): the upload fails with a 500 server error caused by PHP running out of memory. This happens after the file has been uploaded to the bucket in 10 MiB chunks, when Nextcloud tries to assemble the chunks into one file.

We are currently using a PHP memory limit of 128M, which is under the recommended minimum (our budget is tight). However, we have tried using a limit of 512M and had the same issue (although I think we were able to upload somewhat larger files); we feel that 512 MiB really ought to be enough.
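For reference, this is where we set the limit on our server (the file path is an assumption specific to our Rocky Linux 8 / php-fpm setup and may well differ on other distributions):

; /etc/php.d/99-nextcloud.ini  (path is our setup; adjust for yours)
memory_limit = 512M

followed by systemctl restart php-fpm so the new limit takes effect.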

Issue #2: We get “double-charged” for files uploaded this way. Wasabi has a 90-day storage minimum, and that minimum applies even if you delete the file immediately. So if you upload a 1 GiB file, Nextcloud first uploads 1 GiB in 10 MiB chunks and then uploads the assembled 1 GiB file, for a total of 2 GiB uploaded. For the next three months we’ll be charged for 2 GiB of storage even though we’re only storing 1 GiB.

Issue #3: When the upload fails due to the 500 server error, Nextcloud won’t clean up the 10 MiB chunks from the Wasabi bucket. I am unsure whether there is some kind of “housekeeping” function that will eventually notice and remove the unused chunks.
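If anyone knows of such a housekeeping job, I’d appreciate a pointer. The closest things I’ve found so far (untested guesses on my part, so please correct me) are the occ command for expired uploads and simply inspecting the bucket with the AWS CLI pointed at the Wasabi endpoint:

sudo -u <web-server-user> php occ dav:cleanup-chunks
aws s3 ls s3://<bucket> --endpoint-url https://s3.us-east-2.wasabisys.com

I’m not sure the first command covers this failure case, and since the objects in the bucket are keyed by ID rather than by filename, I wouldn’t delete anything by hand without cross-checking against the database first.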

These could all be circumvented by not using chunked uploads, but my understanding is that we would then need as much free RAM (plus some extra) as the largest file we wish to upload.
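For what it’s worth, the chunk size used by the web client can apparently be changed (or chunking disabled entirely with a value of 0) via occ; I haven’t verified that this helps with any of the three issues, so treat it as a guess:

sudo -u <web-server-user> php occ config:app:set files max_chunk_size --value 0

Disabling chunking this way presumably brings back the RAM requirement just mentioned, so it’s more of a trade-off than a fix.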

We have observed that these problems only exist with object storage. When using local block storage, everything works fine. However, that’s not a viable solution for us since large amounts of block storage are much more expensive.

This feature looks like it might help solve all three problems: https://github.com/nextcloud/server/pull/27034

…however, it does not appear to be based on a particular stable or beta release, and in any case the feature does not appear to work for us; the upload fails silently as soon as it hits the 10 MiB mark.

Steps to replicate:

  1. Install Nextcloud, but don’t create admin user yet
  2. Edit config.php to add the 'objectstore' section quoted below
  3. Create the admin user
  4. Try to upload a large file. I test progressively with an 11 MiB file, an 80 MiB file, a 305 MiB file, and finally a 1 GiB file. One of these will likely fail, though which one probably depends on PHP’s memory_limit. With a limit of 128M, the 80 MiB one often fails, and I believe the 305 MiB one always does.
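In case it’s useful for reproducing, these are roughly the commands I use to generate the test files before uploading them through the web UI (sizes as listed above; the contents don’t matter):

dd if=/dev/urandom of=test-11M.bin bs=1M count=11
dd if=/dev/urandom of=test-80M.bin bs=1M count=80
dd if=/dev/urandom of=test-305M.bin bs=1M count=305
dd if=/dev/urandom of=test-1G.bin bs=1M count=1024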

Upon trying to upload a file that’s too large, Admin > Logging shows an error like this:

[PHP] Error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 69208278 bytes) at /srv/www/nextcloud/3rdparty/guzzlehttp/psr7/src/Stream.php#247

MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file
from <redacted> by admin at 2022-09-03T09:24:50+00:00

And a corresponding error appears in nginx’s error.log:

2022/09/03 09:24:50 [error] 409378#0: *17763286 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 69208278 bytes) in /srv/www/nextcloud/3rdparty/guzzlehttp/psr7/src/Stream.php on line 247" while reading response header from upstream, client: <redacted>, server: <redacted>.com, request: "MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "<redacted>.com"
2022/09/03 09:24:50 [error] 409378#0: *17763286 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Error: Class 'OCP\Files\Cache\CacheEntryUpdatedEvent' not found in /srv/www/nextcloud/lib/private/Files/Cache/Cache.php:418
Stack trace:
#0 /srv/www/nextcloud/lib/private/Files/ObjectStore/ObjectStoreStorage.php(487): OC\Files\Cache\Cache->update()
#1 [internal function]: OC\Files\ObjectStore\ObjectStoreStorage->OC\Files\ObjectStore\{closure}()
#2 /srv/www/nextcloud/3rdparty/icewind/streams/src/CountWrapper.php(100): call_user_func()
#3 [internal function]: Icewind\Streams\CountWrapper->stream_close()
#4 {main}
  thrown in /srv/www/nextcloud/lib/private/Files/Cache/Cache.php on line 418" while reading upstream, client: <redacted>, server: <redacted>.com, request: "MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "<redacted>.com"

Finally, here is our config.php:

<?php
$CONFIG = array (
  'instanceid' => 'oc45o7fxfuo9',
  'objectstore' => 
  array (
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => 
    array (
      'bucket' => '<redacted>',
      'autocreate' => false,
      'key' => '<redacted>',
      'secret' => '<redacted>',
      'hostname' => 's3.us-east-2.wasabisys.com',
      'port' => 443,
      'use_ssl' => true,
      'region' => 'us-east-2',
      'use_path_style' => false,
    ),
  ),
  'passwordsalt' => '<redacted>',
  'secret' => '<redacted>',
  'trusted_domains' => 
  array (
    0 => '<redacted>.com',
  ),
  'datadirectory' => '/srv/www/nextcloud/data',
  'dbtype' => 'mysql',
  'version' => '24.0.5.0',
  'overwrite.cli.url' => '<redacted>.com',
  'dbname' => 'nextcloud',
  'dbhost' => 'localhost',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => '<redacted>',
  'installed' => true,
);
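One thing we have not tried yet: I believe newer Nextcloud versions accept an uploadPartSize argument in the objectstore 'arguments' array, which controls the S3 multipart part size and might therefore lower the peak memory used during assembly. I’m not sure which releases support it, so the following is an untested sketch rather than a known fix:

  'objectstore' =>
  array (
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' =>
    array (
      // ... same arguments as above ...
      'uploadPartSize' => 52428800, // 50 MiB parts; untested guess, check your version's documentation
    ),
  ),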

I’m trying to understand why other users are not having the problem we’re having. Do you not use S3-compatible object storage? Do you use it, but as secondary instead of primary storage? Do you use a higher memory limit than 512M? Do you not store large files on your server?

To be quite honest, I’m a little baffled that this issue appears to be so rare. What are you guys doing differently than what we’re doing?

I am also having this same issue. I’m trying to sync large files; it used to work, but now it just doesn’t, and I don’t know what has changed. I have a similar setup with S3 as primary storage, so you’re not alone, and I hope we get an answer soon. I’ll update if I find a solution to the big-files issue, but I’m not looking into the others.

Perhaps an earlier version of Nextcloud didn’t have the issue? When did it start happening for you?

The first time I have a record of a large file failing was on 8/14; before then I had no issues, so it might have been a regression. I didn’t think much of it at the time because it was a file drop failing rather than the sync client. I did eventually just raise the PHP memory limit, and that is working for now, but it’s not a good solution: I’m still limited on file size, I just don’t know how big.

I’ve tried Nextcloud 23.0.8, which doesn’t have the bug, and Nextcloud 24.0.3, which does. So I suspect the bug appeared in the transition from Nextcloud 23 to Nextcloud 24, though it might have appeared in one of the early revisions of 24.

Since we’re making a fresh install, we will probably just use Nextcloud 23 until Nextcloud 25 has integrated the patch for S3 MultipartUpload, which should fix our issues.

Dunno anything about Wasabi S3, but have you had a look at your php.ini?
There you’ll find something like:
upload_max_filesize = 200M
This can also be found in .htaccess files.

find /path/to/ -name ".htaccess" -print0 | xargs -0 grep upload_max_filesize

#crystalball

The upload_max_filesize setting isn’t the issue. The file uploads to the bucket in 10 MiB chunks and all the chunks get uploaded fine. The error occurs when Nextcloud tries to assemble those chunks into the final file, because it appears to allocate a very large block of RAM to do it. For us, at least, that block is too big, so PHP crashes with an out-of-memory error.
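To illustrate what I mean in generic terms (this is not Nextcloud’s actual code, just a sketch of the difference between buffering and streaming in PHP):

<?php
// Buffered: the whole chunk is read into a PHP string first, so peak memory
// grows with the chunk size and can blow past memory_limit.
function append_buffered(string $chunkPath, $dest): void {
    $data = file_get_contents($chunkPath); // entire chunk held in RAM at once
    fwrite($dest, $data);
}

// Streamed: data is copied through a small internal buffer, so memory use
// stays roughly constant regardless of how large the chunk is.
function append_streamed(string $chunkPath, $dest): void {
    $src = fopen($chunkPath, 'rb');
    stream_copy_to_stream($src, $dest);
    fclose($src);
}

My guess (and it is only a guess) is that somewhere along the S3 assembly path the data ends up buffered like the first function rather than streamed like the second.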

Issue still exists in 25 beta 6.

I have been experiencing the same issue. I’m trying to get the Backup app working and have yet to have a successful backup in the month since I first installed it. I couldn’t get Nextcloud-to-Nextcloud uploads working in the Backup app due to issues with Nextcloud’s implementation of file uploading and cURL (yet to be fixed). I figured I’d turn to using Backblaze B2 as a storage location via the external storage feature, but uploads via both occ and the web UI don’t stop eating up my system’s RAM, refusing to free the memory used by chunks once they’ve been uploaded. This results in only 4 GB being uploaded before the PHP process is killed by the kernel. Fortunately, I only run Nextcloud on this device, so no other processes suffer, but I imagine this would be a major problem for servers with 8 GB of RAM or more that run multiple services.

Edit: Maybe I’ll try switching my storage location again. Perhaps an NFS share mounted locally from the remote server through WireGuard would suffice. Performance seems to be much faster than the SMB- and WebDAV-based protocols (i.e., the Nextcloud external storage options).

Has there been any fix for this problem, or does the issue still exist?

Unfortunately I’ve given up on both Nextcloud and object storage, so I don’t know.

Hi everyone,
I solved this issue by disabling IPv6 at the OS level.
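For anyone who wants to try the same workaround, on a typical Linux setup that is roughly the following (your distribution may differ, and you’d need sysctl.conf or similar to make it persistent across reboots):

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1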

I hope this helps.

I’m just now randomly discovering the issue you had with Wasabi S3 and Nextcloud.

I have the same issue here. I tried to fix it, as I thought it was an issue related to large file support. I posted a ticket, but maybe my description was not clear enough and I didn’t get any answer:

So far it’s not fixed for me.

Does the IPv6 trick work for any of you?
@furrykef: what did you implement as a solution?

I still like the idea of S3 storage you can expand without limitation.

Thank you