I am trying to get an accurate benchmark for file upload.
From googling a bit, I found the following curl command:

```
time curl --progress-bar -k -u "<user>:<password>" -T /path/to/test.dat "https://nextcloud.my.domain/remote.php/dav/files/<user>/test.dat" | cat
```
This works, but it is significantly slower than uploading the same file via the web interface (which I timed with a stopwatch). For a 1 GB test file, uploading via the web interface takes about 1 minute, whereas curl takes about 3 minutes.
My desktop OS is Debian 12, uploading via Firefox. My Nextcloud setup uses a reverse proxy.
Why such a huge difference between curl and the web interface?
I would prefer to benchmark via the CLI. (Without getting into too much irrelevant detail: I will be running various workloads on my servers, other traffic will be hitting my reverse proxy, and so on, and I want to ensure upload performance stays above a threshold while those workloads are running, so I need a way to automate the benchmarking.)
What's a way to run an upload benchmark command that performs similarly to an upload via the web interface?
The Web UI (web client) as well as all the official clients use chunking for larger files (and parallel HTTP transactions). They also use bulk upload functionality for tiny files to minimize per-transaction overhead (which is considerable for small files).
Both upload modes can be orchestrated via curl. See the WebDAV section of the Developer Manual.
From reading the documentation you linked, it looks like it's up to the end user to manually split the file into chunks, upload each chunk one at a time via a separate curl command, then reassemble the chunks on the Nextcloud server via a MOVE request. Is that correct?
If this is the case, no problem: I can write a bash script to do this. I just wanted to confirm that this is the correct method, as it seems a little odd from an end-user perspective. (For example, imagine a Nextcloud user with no scripting/dev experience who just needs to upload a file to Nextcloud via curl.)
For my testing purposes, if I am trying to replicate the behavior of uploading a file to Nextcloud via the web interface in Firefox, how big should each chunk be, and how many chunks are uploaded in parallel?
Apologies if I’m missing something here. Thank you!
Yes. It's pretty typical (even outside of Nextcloud and WebDAV). If one wanted to upload a large file to, say, an S3 environment via curl, they'd also have to chunk things themselves.
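If it helps, here is a minimal bash sketch of that flow, based on the chunking endpoints described in the developer manual (MKCOL an upload collection, PUT the chunks, then MOVE the assembled file into place). The server URL, credentials, paths, and upload-directory name are placeholders, and the exact URLs are worth double-checking against the manual for your server version:

```bash
#!/usr/bin/env bash
# Sketch of a chunked Nextcloud upload driven by curl. The endpoint
# layout (uploads collection, chunk PUTs, final MOVE of ".file")
# follows the chunking section of the developer manual; the server
# name, credentials, and paths below are placeholders.
set -euo pipefail

SERVER="https://nextcloud.my.domain"
NC_USER="<user>"
NC_PASS="<password>"
FILE="/path/to/test.dat"
DEST_NAME="test.dat"
CHUNK_SIZE=$((10 * 1024 * 1024))   # 10 MiB per chunk

UPLOAD_DIR="$SERVER/remote.php/dav/uploads/$NC_USER/bench-$$"
DEST_URL="$SERVER/remote.php/dav/files/$NC_USER/$DEST_NAME"

# 1. Create a temporary upload collection to hold the chunks.
curl -sk -u "$NC_USER:$NC_PASS" -X MKCOL "$UPLOAD_DIR"

# 2. Split the file locally and PUT each piece. The server assembles
#    the chunks in the sort order of their names, so the zero-padded
#    numeric suffixes produced by split keep them in order.
workdir=$(mktemp -d)
split -b "$CHUNK_SIZE" -d -a 5 "$FILE" "$workdir/chunk-"
for chunk in "$workdir"/chunk-*; do
    curl -sk -u "$NC_USER:$NC_PASS" -T "$chunk" "$UPLOAD_DIR/$(basename "$chunk")"
done

# 3. Ask the server to assemble the chunks into the final file.
#    (Some setups may also want a Destination header on the MKCOL/PUT
#    requests; check the manual for your server version.)
curl -sk -u "$NC_USER:$NC_PASS" -X MOVE \
     -H "Destination: $DEST_URL" \
     "$UPLOAD_DIR/.file"

rm -rf "$workdir"
```

Wrapping the whole thing in `time` (or timing just the PUT/MOVE part) gives you a repeatable CLI benchmark.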
> For my testing purposes, if I am trying to replicate the behavior of uploading a file to Nextcloud via the web interface in Firefox, how big should each chunk be, and how many chunks are uploaded in parallel?
In released versions (well, v30), the web client's default maximum chunk size is 10 MiB and the concurrency is 5. In older versions the concurrency is 3.
P.S. In the upcoming v31 (unless something changes), the default maximum chunk size is 100 MiB.
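To get closer to the web client's behavior, the chunk PUTs in the sketch above can be run concurrently, e.g. with xargs. The numbers below simply mirror the v30 defaults mentioned above (5 parallel transfers, 10 MiB chunks) and are assumptions to tune for your own testing:

```bash
# Parallel variant of the chunk-upload loop from the sketch above:
# 5 concurrent PUTs, mirroring the v30 web client default.
( cd "$workdir" && ls chunk-* | xargs -P 5 -I {} \
    curl -sk -u "$NC_USER:$NC_PASS" -T "{}" "$UPLOAD_DIR/{}" )
```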