Hi
Nextcloud 30
Webserver is Apache
PHP 8.2
OS Ubuntu 22.04
I want to know how to configure my Nextcloud to upload a single file of 1 TB to 2 TB, or multiple files adding up to that size. Can someone tell me all the steps, i.e. how to configure Nextcloud, PHP and Apache to accept such sizes?
Sorry to say it like this, but the best way is to use one of the clients (Android app, iOS app, Linux/Windows/macOS desktop app).
Personally, I do not consider data volumes of 1 TB per file to be a good fit for Nextcloud. It also sounds like this could be a backup archive, for example. Nextcloud is not a good backup service, and huge backup archives are not very useful either.
Describe what you want to achieve.
tflidd
January 31, 2025, 2:47pm
4
Not sure how much you have looked into it already. In principle, for large file uploads there is:
https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html
There you can try to figure out what your system can reliably handle.
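For reference, most of what that page describes boils down to a few php.ini values plus Apache's request body limit. A rough sketch for a setup like yours (Ubuntu 22.04, Apache, PHP 8.2); the file path and the concrete values are only illustrative assumptions, not recommendations:

```
# Illustrative php.ini override for very large uploads (path assumes
# Ubuntu's PHP 8.2 + Apache module layout; adjust values to your needs)
sudo tee /etc/php/8.2/apache2/conf.d/99-big-uploads.ini <<'EOF'
upload_max_filesize = 2048G
post_max_size = 2048G
max_input_time = 7200
max_execution_time = 7200
memory_limit = 512M
output_buffering = 0
EOF

# Depending on the Apache version, LimitRequestBody may default to 1 GiB,
# so the Nextcloud vhost may also need something like:
#   LimitRequestBody 0

sudo systemctl reload apache2
```

Keep in mind that uploads through the web UI or the official clients are chunked anyway, so as far as I understand the PHP limits mainly have to cover a single chunk rather than the whole 1-2 TB file.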
Furthermore, chunking of files is possible in the desktop client:
https://docs.nextcloud.com/desktop/3.2/advancedusage.html?highlight=chunk
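If I remember the desktop client options correctly, the chunk sizes can be tuned in the client's `nextcloud.cfg` (on Linux typically `~/.config/Nextcloud/nextcloud.cfg`); the values below are in bytes and purely illustrative, so double-check the exact option names on the page above:

```
[General]
chunkSize=104857600
minChunkSize=10485760
maxChunkSize=1073741824
targetChunkUploadDuration=60000
```

As far as I understand, the client dynamically grows the chunk size towards `maxChunkSize` as long as a chunk finishes within the target duration, which is usually what you want on a fast local network.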
But there is also an API that you could use directly:
https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/chunking.html
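As a rough sketch of how that chunking API can be driven with curl (server URL, user name, target path and chunk size are placeholders; check the exact endpoints against the page above):

```
SERVER=https://cloud.example.com   # placeholder
USER=alice                         # placeholder
UPLOAD=upload-$(date +%s)          # arbitrary transfer id

# 1. Split the big file into chunks.
split -b 1G -d -a 5 bigfile.img chunk_

# 2. Create the upload session.
curl -u "$USER" -X MKCOL "$SERVER/remote.php/dav/uploads/$USER/$UPLOAD"

# 3. Upload the chunks; their names must sort in upload order.
i=0
for c in chunk_*; do
  i=$((i+1))
  curl -u "$USER" -T "$c" \
    "$SERVER/remote.php/dav/uploads/$USER/$UPLOAD/$(printf '%05d' "$i")"
done

# 4. Ask the server to assemble the chunks into the final file.
curl -u "$USER" -X MOVE \
  -H "Destination: $SERVER/remote.php/dav/files/$USER/bigfile.img" \
  "$SERVER/remote.php/dav/uploads/$USER/$UPLOAD/.file"
```

The nice part is that a failed chunk can simply be retried, which matters a lot more for a 1-2 TB transfer than for everyday files.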
Check out the bug tracker as well; there are a few topics, e.g. on performance:
GitHub issue, opened 2 Sep 2024 (labels: 2. developing, feature: dav, performance, 2025-Spring, hotspot: file transfer performance):
# Motivation
Chunked upload is useful in many cases, but it is also slower than uploading the entire file directly, because multiple network requests have to be made.
The speed of a request varies over its lifetime: it is slower at the beginning and then flattens out at the maximum over time.
The smaller the requests are and the more of them we make, the worse the performance penalty.
Thus, to increase chunked upload speed, the size of the chunks should be increased.
A single upload using cURL is the upper limit of the possible upload speed for any configuration and upload method.
A chunked upload with the chunk size equal to or greater than the file size represents the upper limit for chunked uploads as it only uploads a single chunk.
While reaching the former would be nice, only the latter is achievable (without general performance improvements in WebDAV regardless of the maximum chunk size) and thus represents the theoretical goal.
# Testing methodology
## Input
```
dd if=/dev/random of=1G.bin bs=1G count=1
```
## Scenarios
All tests are running on a local instance using the PHP standalone web server with 10 workers and no extra apps enabled.
The machine has a Ryzen 5800X (8 cores, 16 threads), 48 GB RAM and a 1 TB Samsung 980 NVMe M.2 SSD.
Hardware should not be a bottleneck on this setup, and external networking cannot have an effect either.
### 1. cURL single upload
Take the `Real` timing value.
```
time curl -X PUT "http://localhost:8080/remote.php/webdav/1G.bin" -u admin:admin --upload-file 1G.bin
```
Runs:
5.412s
5.223s
5.100s
Average: 5.245s
Note: I once saw an outlier that only took about 4.7s, but this never happened again.
### Chunked upload via browser
Open Firefox Devtools and filter network requests by `dav/uploads`.
Upload `1G.bin` via web interface.
Take the `Started` time of the first request (`MKCOL`) and the `Downloaded` time of the last (`MOVE`) and subtract them (see the `Timings` tab of each request).
This includes some constant overhead for the `MKCOL` and `MOVE` requests which is not relevant for comparing chunked upload timing results as they all have the same overhead, but when comparing to the cURL scenario it accurately measures the overall time for the upload process.
According to https://firefox-source-docs.mozilla.org/devtools-user/network_monitor/throttling/index.html "Wi-Fi" throttling means a maximum speed of 15 Mbps.
Sadly this is the "fastest" speed one can select for throttling and there is no way to set a custom speed.
It should represent a worst case; most real-world uploads are probably done at 2-3x that speed if the Nextcloud instance is not on the same network.
Adjusting the default maximum chunk size can be done in https://github.com/nextcloud/server/blob/796405883d214e6e4f3fa1497c036828efee0d62/apps/files/lib/App.php#L45
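(Outside of a dev setup, I assume the same limit can also be changed without patching the code, via the files app config; value in bytes:)

```
# assumed occ call to raise the maximum chunk size to 100 MiB
sudo -u www-data php occ config:app:set files max_chunk_size --value 104857600
```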
#### 2. Chunk size 10MiB (current default), unlimited bandwidth
Chunks: 103
Runs:
47.16s
47.65s
47.33s
Average: 47.38s
#### 3. Chunk size 100MiB, unlimited bandwidth
Chunks: 11
Runs:
8.53s
8.64s
8.63s
Average: 8.6s
#### 4. Chunk size 1024MiB, unlimited bandwidth
Chunks: 1
Runs:
6.37s
6.34s
6.34s
Average: 6.35s
#### 5. Chunk size 10MiB (current default), throttled "Wi-Fi"
Chunks: 103
Runs:
551.40s
551.40s
551.40s
Average: 551.40s
#### 6. Chunk size 100MiB, throttled "Wi-Fi"
Chunks: 11
Runs:
552.60s
549.60s
551.40s
Average: 551.2s
#### 7. Chunk size 1024MiB, throttled "Wi-Fi"
Chunks: 1
Runs:
568.20s
555.60s
553.11s
Average: 558.97s
# Conclusions
1. Upload speed in Nextcloud is very consistent regardless of upload method. Great!
2. Chunked upload in general takes about 21% longer in scenarios with unlimited bandwidth (scenarios 1 and 4). Whether this overhead can be eliminated easily is not clear, but at least there is no hard limitation, since both uploads are done through WebDAV and thus use the same underlying stack (also see the other interesting findings section below).
3. In the current default configuration with unlimited bandwidth, chunked upload takes 646% longer than the maximum speed (scenarios 2 and 4). After increasing the chunk size by 10x, still with unlimited bandwidth, it only takes 35% longer than the maximum speed (scenarios 3 and 4). This is a 5.5x increase in total throughput (scenarios 2 and 3).
4. In bandwidth-limited scenarios, increasing the chunk size has almost no positive effect (and no negative effect; scenarios 5 and 6). This is expected, as the slow ramp-up at the beginning of each chunk is a lot smaller in relation to the overall speed, or makes no difference at all.
5. Increasing the chunk size helps uploads on fast connections while having no speed downside on slow connections. However, slow networks can be correlated with unstable networks, so having fewer and larger chunks could result in a higher rate of aborted chunk uploads. This downside should be taken into consideration when choosing a new maximum chunk size.
6. A new maximum chunk size still needs to be figured out by collecting more data for different chunk sizes. It needs to hit a sweet spot of maximum speed with minimum size to account for the aforementioned drawback on unstable networks (basically the point of diminishing returns). This investigation was only meant to prove that we can increase the chunked upload speed.
# Other interesting findings
While uploading with a single chunk and unlimited bandwidth, Firefox displayed that the request needed 637ms to send but then had to wait 2.10s (reproducible). This might show that we have a bottleneck in processing the uploads on the backend side. Maybe it would be possible to stream the request data directly into the file, which should cut down the waiting time a lot. It should be possible to profile these requests and figure out where the time is spent.
For single chunks the `MOVE` request still takes quite some time. I assume this happens because it concatenates the chunks even when there is only one (which is slow because it has to read and write all the data). This case could be detected and the file moved without reading and writing it (which is not possible for all storages AFAICT, i.e. it needs to be on the same physical disk to take advantage of it). This only affects uploads where the file size is less than the maximum chunk size. Due to the current low limit it is not really noticeable, but with a higher maximum chunk size this would affect many more and bigger uploads and could lead to quite a performance improvement for those.
Upload ETA calculations are all over the place due to the varying speed of uploading chunks over their lifetime. They could be improved by taking the overall time a chunk needs to upload and multiplying it by the number of remaining chunks. This should be a lot more reliable, as every chunk should have similar upload characteristics. To smooth out the value, the last 3-5 chunks could be taken into account.
With that, you can try to discover the limits. However, you might find that Nextcloud is not the best solution. For transfer speed, Syncthing might be a good open-source option, or just some rsync-like transfer over SSH.
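For example, a one-off transfer of a TB-sized file over SSH could look like this (host and paths are placeholders):

```
# -P is shorthand for --partial --progress, so an interrupted transfer
# keeps the partial file and can be picked up again on the next run
rsync -avP bigfile.img user@cloud.example.com:/data/incoming/
```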
Sanook
January 31, 2025, 3:51pm
5
I wouldn’t waste too much time trying to figure this out; I’d skip it right away and not upload such huge files with Nextcloud. Unless you enjoy experimenting, of course.
system
Closed
May 1, 2025, 3:52pm
6
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.