Nextcloud version: 24.0.4; 25 beta 2; latest nightly (same issue in all of 'em)
OS: Rocky Linux 8.6
Server: nginx 1.14.1
PHP version: 7.4.19
We are using Nextcloud with Wasabi (S3 API) as the backend and are having three related issues.
Issue #1: We cannot upload large files (e.g. 80 MiB): the upload fails with a 500 server error caused by PHP running out of memory. This happens after the file has been uploaded to the bucket in 10 MiB chunks, at the point where Nextcloud tries to assemble the chunks into a single file.
We are currently using a PHP memory limit of 128M, which is under the recommended minimum (our budget is tight). However, we have tried using a limit of 512M and had the same issue (although I think we were able to upload somewhat larger files); we feel that 512 MiB really ought to be enough.
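For illustration only (this is not Nextcloud's actual code), here is the difference that seems to matter during assembly: reading each chunk fully into memory makes peak usage scale with chunk size and any internal buffering, whereas a streaming copy keeps memory flat. All paths and names below are made up for the sketch.

<?php
// Minimal sketch: two ways to assemble chunk files into one target file.
// NOT Nextcloud's code; just an illustration of why assembly can exhaust
// the PHP memory limit while the chunked upload itself succeeds.

$chunks = glob('/tmp/upload-chunks/*.part');
sort($chunks, SORT_NATURAL);

$target = fopen('/tmp/assembled.bin', 'wb');

foreach ($chunks as $chunk) {
    // Memory-hungry variant: file_get_contents() loads the whole chunk
    // into RAM, so peak usage is at least one full chunk plus any copies
    // made by wrappers along the way.
    // fwrite($target, file_get_contents($chunk));

    // Memory-flat variant: stream_copy_to_stream() copies via small
    // internal buffers, so peak usage stays constant regardless of size.
    $src = fopen($chunk, 'rb');
    stream_copy_to_stream($src, $target);
    fclose($src);
}

fclose($target);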
Issue #2: We get “double-charged” for files uploaded this way. Wasabi has a 90-day minimum storage term, and it applies even if you delete a file immediately. So if you upload a 1 GiB file, Nextcloud first uploads 1 GiB of 10 MiB chunks and then uploads the assembled 1 GiB file, for a total of 2 GiB uploaded. For the next three months we’ll be charged for 2 GiB of storage even though we’re only keeping 1 GiB.
Issue #3: When the upload fails with the 500 server error, Nextcloud does not clean up the 10 MiB chunks from the Wasabi bucket. I am unsure whether some “housekeeping” job will eventually notice and remove the orphaned chunks.
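In the meantime, the leftovers can at least be inspected by listing the bucket directly. Below is a rough sketch using the AWS SDK for PHP (composer require aws/aws-sdk-php), which is separate from anything Nextcloud ships; the endpoint matches our Wasabi region and the credentials are placeholders. As far as I can tell, Nextcloud's S3 primary storage names objects urn:oid:<fileid>, so confirming that an object is a stale chunk would still mean cross-checking the fileid against the oc_filecache table, which this sketch does not do.

<?php
// Rough sketch (not Nextcloud code): list every object in the bucket with
// its size, to eyeball leftover ~10 MiB chunk objects after a failed upload.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-2',
    'endpoint'    => 'https://s3.us-east-2.wasabisys.com',
    'credentials' => ['key' => '<redacted>', 'secret' => '<redacted>'],
]);

foreach ($client->getPaginator('ListObjectsV2', ['Bucket' => '<redacted>']) as $page) {
    foreach ($page['Contents'] ?? [] as $obj) {
        // Keys should look like urn:oid:<fileid> (assumption; verify locally).
        printf("%s\t%d bytes\n", $obj['Key'], $obj['Size']);
    }
}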
All three issues could be avoided by not using chunked uploads, but my understanding is that we would then need at least as much free RAM (plus some headroom) as the largest file we wish to upload.
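(Aside: if I'm reading the admin docs right, the web-upload chunk size can be tuned, or chunking disabled entirely, with something like occ config:app:set files max_chunk_size --value 0, where 0 is supposed to mean “no chunking”. I haven't verified whether that changes anything about the object-store assembly step, so treat it as an unconfirmed lead.)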
We have observed that these problems only exist with object storage. When using local block storage, everything works fine. However, that’s not a viable solution for us since large amounts of block storage are much more expensive.
This feature looks like it might help solve all three problems: https://github.com/nextcloud/server/pull/27034
…however, it does not appear to be based on any particular stable or beta release, and in any case the feature does not work for us: the upload fails silently as soon as it hits the 10 MiB mark.
Steps to replicate:
- Install Nextcloud, but don’t create the admin user yet
- Edit config.php to add the 'objectstore' section quoted below
- Create the admin user
- Try to upload a large file. I test progressively with an 11 MiB file, an 80 MiB file, a 305 MiB file, and finally a 1 GiB file. One of these will likely fail, though which one probably depends on PHP’s memory_limit. With a limit of 128M, the 80 MiB file often fails, and I believe the 305 MiB one always does. (One way to generate these test files is sketched just below the list.)
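For repeatability, here is one way to generate the test files mentioned above (a throwaway sketch; the file names are arbitrary):

<?php
// Throwaway helper: write an N-MiB file of random bytes for upload testing.
$sizes = [11, 80, 305, 1024]; // sizes in MiB

foreach ($sizes as $mib) {
    $f = fopen("test-{$mib}MiB.bin", 'wb');
    for ($i = 0; $i < $mib; $i++) {
        fwrite($f, random_bytes(1024 * 1024)); // 1 MiB per write, flat memory use
    }
    fclose($f);
}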
Upon trying to upload a file that’s too large, Admin > Logging shows an error like this (note that 134217728 bytes is exactly our 128M memory_limit):
[PHP] Error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 69208278 bytes) at /srv/www/nextcloud/3rdparty/guzzlehttp/psr7/src/Stream.php#247
MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file
from <redacted> by admin at 2022-09-03T09:24:50+00:00
And a corresponding error appears in nginx’s error.log:
2022/09/03 09:24:50 [error] 409378#0: *17763286 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 69208278 bytes) in /srv/www/nextcloud/3rdparty/guzzlehttp/psr7/src/Stream.php on line 247" while reading response header from upstream, client: <redacted>, server: <redacted>.com, request: "MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "<redacted>.com"
2022/09/03 09:24:50 [error] 409378#0: *17763286 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Class 'OCP\Files\Cache\CacheEntryUpdatedEvent' not found in /srv/www/nextcloud/lib/private/Files/Cache/Cache.php:418
Stack trace:
#0 /srv/www/nextcloud/lib/private/Files/ObjectStore/ObjectStoreStorage.php(487): OC\Files\Cache\Cache->update()
#1 [internal function]: OC\Files\ObjectStore\ObjectStoreStorage->OC\Files\ObjectStore\{closure}()
#2 /srv/www/nextcloud/3rdparty/icewind/streams/src/CountWrapper.php(100): call_user_func()
#3 [internal function]: Icewind\Streams\CountWrapper->stream_close()
#4 {main}
thrown in /srv/www/nextcloud/lib/private/Files/Cache/Cache.php on line 418" while reading upstream, client: <redacted>, server: <redacted>.com, request: "MOVE /remote.php/dav/uploads/admin/web-file-upload-f0fd8f75ce46bbd9dfd0d9e09f92b9a5-1662196935455/.file HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "<redacted>.com"
Finally, here is our config.php:
<?php
$CONFIG = array (
  'instanceid' => 'oc45o7fxfuo9',
  'objectstore' =>
  array (
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' =>
    array (
      'bucket' => '<redacted>',
      'autocreate' => false,
      'key' => '<redacted>',
      'secret' => '<redacted>',
      'hostname' => 's3.us-east-2.wasabisys.com',
      'port' => 443,
      'use_ssl' => true,
      'region' => 'us-east-2',
      'use_path_style' => false,
    ),
  ),
  'passwordsalt' => '<redacted>',
  'secret' => '<redacted>',
  'trusted_domains' =>
  array (
    0 => '<redacted>.com',
  ),
  'datadirectory' => '/srv/www/nextcloud/data',
  'dbtype' => 'mysql',
  'version' => '24.0.5.0',
  'overwrite.cli.url' => '<redacted>.com',
  'dbname' => 'nextcloud',
  'dbhost' => 'localhost',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => '<redacted>',
  'installed' => true,
);