Random S3 object storage errors on large files

Hi,
I am using the latest Docker image for Nextcloud together with S3 object storage. I am running into random 400 Bad Request errors when the files are large… For files between 3 GB and 5 GB the success rate is maybe 20%, so I have to rerun the upload a few times before it finally succeeds.

Anyway, in the other 80% of cases the log shows a 400 Bad Request timeout. I have searched all over the internet and can't find a solution. Maybe this has something to do with a curl timeout or an S3 parameter, but I can't find anything in the code either. Please help me with this… Thanks!

Environment:
Latest (as of 2020-03-03) Nextcloud Docker image, 18.0.1 with Apache

Setup:
<?php
$CONFIG = array (
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\OC\Memcache\APCu',
  'apps_paths' =>
  array (
    0 =>
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 =>
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'instanceid' => 'REMOVED',
  'passwordsalt' => 'REMOVED',
  'secret' => 'REMOVED+O0zjhS8eG0pFbO2',
  'trusted_domains' =>
  array (
    0 => 'REMOVED',
  ),
  'datadirectory' => '/var/www/html/data',
  'tempdirectory' => '/tmp',
  'dbtype' => 'mysql',
  'version' => '18.0.1.3',
  'overwrite.cli.url' => 'REMOVED',
  'dbname' => 'nextcloud',
  'dbhost' => 'REMOVED',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'REMOVED',
  'dbpassword' => 'REMOVED',
  'installed' => true,
  'objectstore' =>
  array (
    'class' => '\OC\Files\ObjectStore\S3',
    'arguments' =>
    array (
      'bucket' => 's3.ceetrox.de',
      'autocreate' => true,
      'key' => 'REMOVED',
      'secret' => 'REMOVED',
      'hostname' => 's3.eu-central-1.amazonaws.com',
      'port' => 443,
      'use_ssl' => true,
      'region' => 'eu-central-1',
    ),
  ),
  'app_install_overwrite' =>
  array (
    0 => 'uploaddetails',
  ),
  'ldapIgnoreNamingRules' => false,
  'ldapProviderFactory' => 'OCA\User_LDAP\LDAPProviderFactory',
);

Log
Aws\S3\Exception\S3MultipartUploadException: An exception occurred while uploading parts to a multipart upload. The following parts had errors:

- Part 4: Error executing "UploadPart" on "https://s3.eu-central-1.amazonaws.com/s3.ceetrox.de/urn%3Aoid%3A112540?partNumber=4&uploadId=15q9YdrBOkelJcaifcPoS_k.3FTKQmfMSwUImQ.o6zo09h2jIeFxJRAhS_.SUo5hlRGJEkIN_j_81zS2wQKEEd5v8AcMhsVjpd6gVeI1Fdhk.gansp6VHgpYDUxdYCf1"; AWS HTTP error: Client error: PUT … resulted in a 400 Bad Request response:
  RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
  (RequestId: A4E6E02FD8951BB6, HostId: smDJwoIp/06ckg1ojYVuv5f5Rli+kuTzcBe7+kntLBGR506eaRhxGRijIIuJ7kT5d2zp3kk0ZIk=)

- Part 5: Error executing "UploadPart" on "https://s3.eu-central-1.amazonaws.com/s3.ceetrox.de/urn%3Aoid%3A112540?partNumber=5&uploadId=15q9YdrBOkelJcaifcPoS_k.3FTKQmfMSwUImQ.o6zo09h2jIeFxJRAhS_.SUo5hlRGJEkIN_j_81zS2wQKEEd5v8AcMhsVjpd6gVeI1Fdhk.gansp6VHgpYDUxdYCf1"; AWS HTTP error: Client error: PUT … resulted in a 400 Bad Request response:
  RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
  (RequestId: 62A6CA44A98FC9F2, HostId: O0nKqsjdOtb5oiYPzO7OaNk5eFoLH1D89ZNhEifFfVdmSDk5/E52AFV7b/SSaBMWdLuAPfVYxqo=)

Got the same error… I fixed it by modifying /lib/private/Files/ObjectStore/S3ObjectTrait.php and lowering S3_UPLOAD_PART_SIZE from 500 MB to 250 MB.
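
For reference, the edit would look roughly like this. This is a sketch only: the constant name comes from this post, and I'm assuming the 500 MB default is expressed as 524288000 bytes; the exact value and location can differ between Nextcloud versions, so check your own S3ObjectTrait.php first.

    // lib/private/Files/ObjectStore/S3ObjectTrait.php -- sketch only
    // Assumed original value (500 MB in binary MiB):
    // const S3_UPLOAD_PART_SIZE = 524288000;   // 500 * 1024 * 1024
    // Lowered to 250 MB as described above:
    const S3_UPLOAD_PART_SIZE = 262144000;      // 250 * 1024 * 1024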

Give it a try…

When is this going to be fixed?
It's 2021 and I still can't change the part size parameter from 500 MB to 250 MB without patching the code?!
I can't upload files from my EC2 instance to a bucket in my Nextcloud environment; I get the same error. It's pathetic that this value is hard-coded for sending a single file. It's a bug!

I had the same problem, but with my Nextcloud version (23) I had to change the uploadPartSize in a different file.

I modified:
/var/www/nextcloud/lib/private/Files/ObjectStore/S3ConnectionTrait.php

And changed the numerical value in this line:
    $this->uploadPartSize = $params['uploadPartSize'] ?? 104857600;

Here I set the value to 104857600, which is 100 MB. (250 MB did not solve it for me.)
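
Since that line falls back to $params['uploadPartSize'], it looks like the part size could also be passed in from config.php instead of patching core code. A minimal sketch, assuming Nextcloud forwards the objectstore 'arguments' array as $params to this trait (untested here; the key name is taken from the code line quoted above):

    // config/config.php (sketch): set the multipart part size via the
    // objectstore arguments instead of editing S3ConnectionTrait.php.
    'objectstore' => array (
      'class' => '\OC\Files\ObjectStore\S3',
      'arguments' => array (
        // ...bucket, key, secret, hostname, region as in your existing config...
        'uploadPartSize' => 104857600, // 100 MiB per part
      ),
    ),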

I got the inspiration to set it to 100MB thanks to this comment: S3 default upload part_size set to 500MB · Issue #24390 · nextcloud/server · GitHub

In line with the last link, I also modified the file:
/var/www/nextcloud/3rdparty/aws/aws-sdk-php/src/S3/MultipartUploader.php

With the following settings:

    const PART_MIN_SIZE = 4294967296;    // 4 GiB
    const PART_MAX_SIZE = 1048576000000; // ~1 TB = 100 MiB * 10000 parts
    const PART_MAX_NUM = 10000;          // S3 limit on parts per multipart upload

As you can see/calculate, PART_MIN_SIZE is set to 4 GB and PART_MAX_SIZE is set to roughly 1 TB (0.95 TB), determined by 100 MB × 10000.
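
A quick sanity check of those byte values (plain PHP arithmetic, using the binary units assumed above):

    <?php
    // Sanity check of the byte values used above (binary units).
    $partSize = 100 * 1024 * 1024;           // 104857600 bytes = 100 MiB
    $maxParts = 10000;                       // PART_MAX_NUM: parts per multipart upload
    echo $partSize * $maxParts, PHP_EOL;     // 1048576000000 ≈ 0.95 TiB (PART_MAX_SIZE)
    echo 4 * 1024 * 1024 * 1024, PHP_EOL;    // 4294967296 = 4 GiB (PART_MIN_SIZE)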


Thank you!
I had to switch from the snap installation to the regular one in order to make this modification.
That solved my upload problem! 🙂


A silly question (I hope)… I'm trying to replicate your solution on a brand-new install of Nextcloud via the "AIO" set of Docker containers.

How do I apply your fix to /var/www/nextcloud<…> within the docker container?

Asking for two reasons:

  1. There sure are a lot of containers, and I'm not sure which one to exec -it into…
  2. Wouldn't any changes I make be overwritten with each upgrade (e.g., when a new container is created)?

Hoping someone familiar with the new AIO/Docker paradigm can help here.

Oh, I should be clear: the OP's problem of chunked uploads failing to S3 object storage (in my case, Linode/Akamai) is still present as of September 2023.

After crawling through the Docker containers (Nextcloud AIO…), we found the following, which may help with this problem of a 500 Server Error when the chunked files are merged after an upload to an S3 external store…

In the nextcloud/aio-nextcloud container:
/usr/local/etc/php/conf.d/nextcloud.ini

memory_limit=${PHP_MEMORY_LIMIT}
upload_max_filesize=${PHP_UPLOAD_LIMIT}
post_max_size=${PHP_UPLOAD_LIMIT}
max_execution_time=${PHP_MAX_TIME}
max_input_time=${PHP_MAX_TIME}

Two notes:

  1. We would be overriding environment variables, which feels like bad practice for long-term production use. So how do we change the values at the place they are actually read from?

  2. Our hope is that increasing PHP_MEMORY_LIMIT would allow the larger default chunk size to work; see the quick check sketched after this list.
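
If it helps, here is a minimal way to confirm which limits PHP actually ends up with inside the container. It is just a sketch, nothing AIO-specific: save it as a small script (the file name below is hypothetical) and run it with the container's php binary.

    <?php
    // limits-check.php (hypothetical helper): print the effective PHP limits
    // so you can confirm the env-driven values from nextcloud.ini were applied.
    foreach ([
        'memory_limit',
        'upload_max_filesize',
        'post_max_size',
        'max_execution_time',
        'max_input_time',
    ] as $key) {
        echo $key, ' = ', ini_get($key), PHP_EOL;
    }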