S3 Primary Object Store: Large Uploads (2GB+) Fail with Exception and Timeout Using WebDAV, Nextcloud Client, or Web UI

Support intro

Sorry to hear you’re facing problems :slightly_frowning_face:

help.nextcloud.com is for home/non-enterprise users. If you’re running a business, paid support can be accessed via portal.nextcloud.com where we can ensure your business keeps running smoothly.

In order to help you as quickly as possible, before clicking Create Topic please provide as much of the below as you can. Feel free to use a pastebin service for logs, otherwise either indent short log examples with four spaces:

example

Or for longer, use three backticks above and below the code snippet:

longer
example
here

Some or all of the below information will be requested if it isn’t supplied; for fastest response please provide as much as you can :heart:

Nextcloud version (eg, 12.0.2): 17.0.2
Operating system and version (eg, Ubuntu 17.04): Ubuntu 18.04 LTS
Apache or nginx version (eg, Apache 2.4.25): Apache 2
PHP version (eg, 7.1): 7.2

The issue you are facing:

When uploading large files via Cyberduck WebDAV, Windows/macOS WebDAV, the Nextcloud Client, or the Web Interface, the following happens:
With WebDAV > it uploads the file, then restarts the upload or times out completely partway through.
With the Nextcloud Client > it uploads to 100% and gets stuck at 0 seconds.
With the Web UI > it uploads to 100% and then throws an Exception error about the wrong size.

Is this the first time you’ve seen this error? (Y/N): Y

Steps to replicate it:

  1. Set up S3 as primary or external storage (a config sketch follows these steps)
  2. Upload a large file via the Web UI, WebDAV, or the Nextcloud Client
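For reference, this is roughly what the S3 primary object store block in config.php looks like (a minimal sketch following the admin manual format; the bucket name, credentials, and region below are placeholders, not the reporter's actual values):

```
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'nextcloud-data',      // placeholder bucket name
        'autocreate'     => true,
        'key'            => 'EXAMPLE_ACCESS_KEY',  // placeholder credentials
        'secret'         => 'EXAMPLE_SECRET_KEY',
        'region'         => 'us-east-1',
        'use_ssl'        => true,
        // usually required for non-AWS S3 backends such as Ceph or MinIO
        'use_path_style' => false,
    ],
],
```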

The output of your Nextcloud log in Admin > Logging:

    Sabre\DAV\Exception: An exception occurred while uploading parts to a multipart upload. The following parts had errors:
    - Part 2: Error executing "UploadPart" on "https://server32113123.s3.us-east-1.amazonaws.com/urn%3Aoid%3A5232?partNumber=2&uploadId=AWbri3n6QoRtojFjskE8Ze9zORR8KSMwvv5UQphEsYIzkrqnKd.EX6pOi0kGGK_pFLfYkZab7iehIA3xRi9b5BVgorxBcU4CcolILg9Iw3aQIWQ1Q8D4iO0vSUTb9yOJ";
    AWS HTTP error: Client error: `PUT https://server32113123.s3.us-east-1.amazonaws.com/urn%3Aoid%3A5232?partNumber=2&uploadId=AWbri3n6QoRtojFjskE8Ze9zORR8KSMwvv5UQphEsYIzkrqnKd.EX6pOi0kGGK_pFLfYkZab7iehIA3xRi9b5BVgorxBcU4CcolILg9Iw3aQIWQ1Q8D4iO0vSUTb9yOJ` resulted in a `400 Bad Request` response:
    <?xml version="1.0" encoding="UTF-8"?> <Error><Code>RequestTimeout</Code><Message>Your socket connection to the server w (truncated...)
    RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
    - <?xml version="1.0" encoding="UTF-8"?> <Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>BE8141ACAF59BFF1</RequestId><HostId>TuWxrANb6UjQHgYYqJQd6MB3Wct/a2uik7QuxR/Xz62+tzPwaT8QrbE1KpnEvK3G7R9T3JlZugk=</HostId></Error>

The error continues for additional parts.




The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

PASTE HERE

The output of your Apache/nginx/system log in /var/log/____:

PASTE HERE

Sabre\DAV\Exception: An exception occurred while uploading parts to a multipart upload.

Provided your system is set up correctly and the Nextcloud server has enough free space to buffer the upload, you have nothing to worry about.

If you wait a while, you’ll see the “.part” file eventually become a regular file in your file manager.

It’s been several hours and the .part files still haven’t assembled into a full file in the AWS S3 bucket.

You’re likely running into the infamous Amazon S3 issue then. Large file uploads to Amazon S3 tend to fail, and not just with Nextcloud; it happens with other apps such as rclone as well.

If you are using this in an enterprise setting, it would be best to reach out to both Amazon and Nextcloud directly.
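In the meantime, one way to confirm that the parts really are stranded is to list the bucket’s incomplete multipart uploads. Below is a minimal sketch using the AWS SDK for PHP (the same SDK Nextcloud bundles); the region, credentials, and bucket name are placeholders, not real values:

```
<?php
// check_multipart.php — list multipart uploads that were started but never completed.
// Assumes aws/aws-sdk-php is installed via Composer; all values below are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',               // adjust to your bucket's region
    'credentials' => [
        'key'    => 'EXAMPLE_ACCESS_KEY',
        'secret' => 'EXAMPLE_SECRET_KEY',
    ],
]);

// Every entry returned here is an upload whose parts will never assemble on their
// own; S3 keeps (and bills for) them until they are completed or aborted.
$result = $client->listMultipartUploads(['Bucket' => 'your-nextcloud-bucket']);

foreach ($result['Uploads'] ?? [] as $upload) {
    echo $upload['Key'] . ' (started ' . $upload['Initiated']->format(DATE_ATOM) . ')' . PHP_EOL;
}
```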

This is with a Ceph S3 backend:

I was able to resolve this issue as well by modifying /lib/private/Files/ObjectStore/S3ObjectTrait.php and lowering S3_UPLOAD_PART_SIZE from 500MB to 250MB.
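Roughly what that change looks like (a sketch against Nextcloud 17; the exact line may differ between versions, and the edit will be overwritten on upgrade):

```
// lib/private/Files/ObjectStore/S3ObjectTrait.php
// Original (500 MB per multipart part):
//     const S3_UPLOAD_PART_SIZE = 524288000;
// Lowered to 250 MB so each part finishes before S3 closes the idle connection:
const S3_UPLOAD_PART_SIZE = 262144000;
```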

Give it a try…