Primary Storage S3: files larger than 4GB fail to upload

Hi,

I have a server running Nextcloud 16 with an S3 bucket as primary storage.
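
For context, the objectstore block in my config.php looks roughly like this (bucket, endpoint, and credentials here are placeholders, not my real values):

```php
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'nextcloud',        // placeholder bucket name
        'hostname'       => 's3.example.com',   // placeholder: the Ceph radosgw endpoint
        'port'           => 443,
        'use_ssl'        => true,
        'use_path_style' => true,               // usually required for Ceph/radosgw
        'key'            => 'XXXXXXXXX',
        'secret'         => 'XXXXXXXXX',
    ],
],
```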

When I try to upload files larger than 4GB, Nextcloud shows an error.

The browser shows “error 503”.
The desktop app shows “file size is unexpected”.

The logs show this error:

```
"Exception":"Aws\S3\Exception\S3MultipartUploadException","Message":"An exception occurred while uploading parts to a multipart upload. The following parts had errors:\n- Part 1: Error executing \"UploadPart\" on \"https://XXXXXXXXX/XXXXXXXXX/urn%3Aoid%3A34329?partNumber=1&uploadId=2~8JZkiOiCf91BjQf1cQd-1aN5YJQFCVS\"; AWS HTTP error: Client error: PUT https:\/\/XXXXXXXXX\/XXXXXX\/urn%3Aoid%3A34329?partNumber=1&uploadId=2~8JZkiOiCf91BjQf1cQd-1aN5YJQFCVS resulted in a 400 Bad Request response:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>XAmzContentSHA256Mismatch</Code>XXXXXXXXX< (truncated…)\n XAmzContentSHA256Mismatch (client): - <?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>XAmzContentSHA256Mismatch</Code>XXXXXXXXX</BucketName>
```
The S3 bucket is on Ceph, and I can upload files of up to 50GB to it directly without problems.

I have configured the following in PHP (sketched as php.ini lines below):

  • upload_max_filesize = 32G
  • max_file_uploads = 5000
  • max_execution_time = 7200
  • max_input_time = 7200

Kind regards.

I am seeing the same symptoms on the FPM and FPM-Alpine 16.04 Docker images. I have tried increasing the PHP timeout and memory limit values. The web app also shows “Error when assembling chunks, status code 504”, similar to what has been reported here: Error when assembling chunks, status code 504. The worst part is that when an upload fails, the chunks that were already written are not removed, so we accumulate a number of large partial uploads without a good way to purge them (one possible stopgap is sketched below).
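
Since the orphaned chunks end up as abandoned multipart uploads on the bucket, one stopgap is to abort them directly with the AWS SDK for PHP that Nextcloud bundles. This is a rough standalone sketch, not a built-in Nextcloud tool; the endpoint, bucket, and credentials are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1',        // radosgw ignores this, but the SDK requires it
    'endpoint'                => 'https://s3.example.com',
    'use_path_style_endpoint' => true,
    'credentials'             => ['key' => 'XXXXXXXXX', 'secret' => 'XXXXXXXXX'],
]);

$bucket = 'nextcloud';

// First page of in-progress multipart uploads; with many entries, paginate
// using the KeyMarker/UploadIdMarker values from the response.
$uploads = $client->listMultipartUploads(['Bucket' => $bucket]);

foreach ($uploads['Uploads'] ?? [] as $upload) {
    // Skip anything started in the last 24h so in-flight uploads survive.
    if ($upload['Initiated']->getTimestamp() > time() - 86400) {
        continue;
    }
    $client->abortMultipartUpload([
        'Bucket'   => $bucket,
        'Key'      => $upload['Key'],
        'UploadId' => $upload['UploadId'],
    ]);
    echo "aborted {$upload['Key']} ({$upload['UploadId']})\n";
}
```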

If I switch to local storage, I still get the 504 error, but the files at least upload. I would prefer S3, though.

A similar issue has also been reported here: https://github.com/nextcloud/server/issues/7919

Did you solve this issue?
Here are my test results with Nextcloud and S3:

| size | total | success % |
|------|-------|-----------|
| 1G   | 10    | 100       |
| 2G   | 10    | 100       |
| 3G   | 10    | 100       |
| 4G   | 10    | 70        |
| 5G   | 10    | 0         |
| 10G  | 10    |           |

"Part 6: Error executing \"UploadPart\" on \"https:\/\/....\"; 
AWS HTTP error: cURL error 35: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (see http:\/\/curl.haxx.se\/libcurl\/c\/libcurl-errors.html)\n"

No, the problem persists.

There are options to fix this in Ceph. It is a Ceph issue, not a Nextcloud issue.

Well, it’s probably both. The point is that you can fix it by modifying your Ceph setup. It’s easy to do; it took me about 45 minutes of actual testing to verify things and about 20 minutes of research.

Ceph is working fine.
I can upload files of more than 30GB without problems with aws-cli.
The problem is only in Nextcloud, in the S3 connector module.

It’s not though.

I had the same issue with Ceph’s radosgw. I made some changes and the problem is fixed: the 4GB limitation is gone, and I can now upload 20GB+ to Nextcloud via the web GUI.

Edit: This also fixed a lot of other problems, such as the Preview Generator app not working correctly and erroring out with S3/Ceph.

Edit 2: And while you can argue that this is a compatibility issue with Nextcloud, I’ve done some independent research into the matter and have seen other people (not using Nextcloud) having similar issues with multipart uploads to Amazon failing in the 3GB-5GB range.

The following works for me (points 2 and 3 are sketched below):

  1. Use HTTP instead of HTTPS.
  2. Decrease S3_UPLOAD_PART_SIZE (in Nextcloud).
  3. Decrease DEFAULT_CONCURRENCY (in the AWS SDK).
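
Points 2 and 3 both end up as options to the AWS SDK’s MultipartUploader, which is what Nextcloud’s S3 object store uses under the hood. A minimal standalone sketch of what “smaller parts, less concurrency” means at the SDK level (client settings, bucket, and file are placeholders):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$client = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1',        // placeholder; radosgw ignores it
    'endpoint'                => 'https://s3.example.com',
    'use_path_style_endpoint' => true,
    'credentials'             => ['key' => 'XXXXXXXXX', 'secret' => 'XXXXXXXXX'],
]);

$uploader = new MultipartUploader($client, '/tmp/bigfile.bin', [
    'bucket'      => 'nextcloud',
    'key'         => 'bigfile.bin',
    'part_size'   => 64 * 1024 * 1024,  // much smaller than Nextcloud's ~500MB default
    'concurrency' => 1,                 // below the SDK default; parts go up one at a time
]);

try {
    $result = $uploader->upload();
    echo "upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo 'upload failed: ' . $e->getMessage() . "\n";
}
```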

Nobody should follow this advice, as using plain HTTP sends the S3 key+secret in cleartext.

I was able to resolve this issue as well by patching /lib/private/Files/ObjectStore/S3ObjectTrait.php and lowering S3_UPLOAD_PART_SIZE from 512MB to 256MB.
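
The change itself is a one-line edit to the constant defined in that trait; roughly as follows (a sketch only; check the exact default value shipped with your Nextcloud version before editing):

```php
// lib/private/Files/ObjectStore/S3ObjectTrait.php
// lowered from the shipped default (~512MB) to 256MB:
const S3_UPLOAD_PART_SIZE = 268435456; // 256 * 1024 * 1024
```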

It might make sense to make this a configuration option? I guess the optimal size differs depending on the S3 backend used (e.g., I’m not using AWS S3 but DigitalOcean Spaces).

Hi stjosh,
Could you please explain how you patched this file? Perhaps paste the file contents so that I can compare and do the same on my end? Thanks!

For anyone who still struggles with this problem, as I did until an hour ago: I found a solution that I hadn’t seen in complete form anywhere else, so I’m sharing it here: