I am trying the Nextcloud Desktop client. It works, but it seems to choke on files larger than ~300 MB. I would just like to know whether this is normal, expected behavior. I did search for related posts, but the ones I found were all years old, so I am wondering whether this is still the case, or whether there is anything I can do to fix/adjust it.
To clarify the situation: I had the desktop sync app running, I dropped some files into the ~/Nextcloud folder, and most of the files synced fine, but those larger than ~300 MB failed with a sync error.
(NOTE: I don’t usually sync files that large; I only encountered this because I was using the client to transfer files from my Dropbox location to Nextcloud, and it choked on the larger ones.)
I did play around with changing chunkSize on the client and max_chunk_size on the server (reducing them to 100 MB and then 50 MB), as well as the timeout values in my reverse proxy (Caddy), but it didn’t seem to make any difference.
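For reference, the server-side limit I was adjusting can be set through occ inside the AIO Nextcloud container. A sketch of the command I used (the 50 MB value is just the last size I tried; the container name is the usual AIO default):

```shell
# Set the server's maximum upload chunk size to 50 MB (value is in bytes).
# With Nextcloud AIO, occ is run inside the nextcloud-aio-nextcloud container
# as the www-data user.
docker exec --user www-data nextcloud-aio-nextcloud \
  php occ config:app:set files max_chunk_size --value 52428800
```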
System info:
Nextcloud Desktop: 3.11.0-1
Nextcloud Server: Nextcloud AIO v32.0.3
Operating system: Server - Debian 13, Client - Linux Mint 22.2
Docker: Docker version 26.1.5+dfsg1, build a72d7cd; Docker Compose version 2.26.1-4
Web server: Nextcloud AIO (Apache 2.4.66)
Database server: Nextcloud AIO (PostgreSQL 17.7)
Reverse proxy: caddy v2.10.2
PHP version: Nextcloud AIO (v8.3.28)
Installation method: Nextcloud AIO via docker
Cloudflare is my domain registrar and the Nextcloud DNS entry is proxied
ISP: Sparklight 1 Gbps down / 50 Mbps up
Network: 1Gbps to computers, 100Mbps to IoT devices
Hi,
At the moment it’s hard to comment on “expected performance” because the support template isn’t filled in — there’s no information about the server setup, Nextcloud version, desktop client, reverse proxy, or network conditions. Without that, it’s not really possible to judge what is normal or not.
This test shows that large-file synchronization is achievable with proper server and proxy configuration. Actual performance will depend strongly on the specific setup.
@vawaver I have updated my OP with additional info based on the support template. I will do some further testing to see if I can capture pertinent log entries. One thing that would help: I cannot seem to find the log entries for the desktop sync client. When it fails, it shows an error that says “see logs”, and I am not clear whether that means a local log on the client system or the logs on the server.
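In case it helps anyone searching later: the logs the error refers to are local to the client machine. One way to capture them (flags as documented for the desktop client; the file path here is just an example) is to launch the client with logging enabled:

```shell
# Start the Nextcloud desktop client with debug logging written to a file.
# The log path is an example; pick any writable location.
nextcloud --logfile /tmp/nextcloud-client.log --logdebug
```

Newer client versions can also produce a debug archive from the settings menu of the main dialog.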
The issue was Cloudflare imposing a request timeout, which closed the connection about 3 seconds into the transfer. Fixed by either: 1) setting the entry to DNS-only, or 2) leaving it proxied and adding settings to NC Desktop’s nextcloud.cfg to keep each chunk below the timeout.
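The settings in question are the chunking options in the [General] section of nextcloud.cfg on the client. The values below are a sketch (in bytes), sized so each chunk uploads well inside the timeout; the exact numbers are my assumption, not a recommendation:

```ini
[General]
# Force small upload chunks so each HTTP request completes
# well before Cloudflare's request timeout (values in bytes).
chunkSize=10000000
minChunkSize=1000000
maxChunkSize=50000000
```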
The second option does affect overall performance, but it is barely noticeable (if at all) with files under 1 GB. Setting it to DNS-only gives better performance but removes the mitigations Cloudflare’s proxy provides.
What I did
Based on the info in the article from @vawaver and a few other sources, I updated my Nextcloud docker-compose.yml:
- NEXTCLOUD_MEMORY_LIMIT=4096M # Increase memory space
- NEXTCLOUD_UPLOAD_LIMIT=32G # Increase max upload limit
- NEXTCLOUD_MAX_TIME=7200 # Increase max timeout to 2hrs
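For context, these variables go in the environment section of the AIO master container’s service in docker-compose.yml. A trimmed sketch (the service name and image tag here are the common AIO defaults and may differ in your compose file):

```yaml
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    environment:
      - NEXTCLOUD_MEMORY_LIMIT=4096M   # PHP memory limit for Nextcloud
      - NEXTCLOUD_UPLOAD_LIMIT=32G     # Max single upload size
      - NEXTCLOUD_MAX_TIME=7200        # PHP max execution time (seconds)
```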
And the entry in my Caddyfile:
reverse_proxy http://nnn.nnn.nnn.nnn:nnnnn {
    # Long timeouts to prevent "Connection closed" during chunk uploads/assembly
    transport http {
        read_timeout            24h  # Max wait for next read from backend
        write_timeout           24h  # Max wait for next write to backend
        response_header_timeout 24h  # Wait for response headers
        expect_continue_timeout 24h  # For Expect: 100-continue in chunked requests
    }
}
But these did not eliminate the issue. I further uncovered that Cloudflare’s proxy on the free plan imposes a 100-second request timeout. As noted at the top, changing the record to DNS-only resolved the transfer failures.
Setting the desktop app to very small chunk sizes (5–10 MB) let the large files complete syncing, but performance was noticeably lower.
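As a rough sanity check on why small chunks help, here is the arithmetic, assuming the 50 Mbps uplink from the system info above (real throughput will be somewhat lower, so these are best-case numbers):

```python
# Estimate per-chunk upload time over a given uplink, to check whether
# each chunk's HTTP request fits inside Cloudflare's 100-second window.
# The 50 Mbps uplink matches this thread; chunk sizes are illustrative.

UPLINK_MBPS = 50   # uplink speed in megabits per second
TIMEOUT_S = 100    # Cloudflare proxied-request timeout in seconds

def chunk_upload_seconds(chunk_mb: float, uplink_mbps: float = UPLINK_MBPS) -> float:
    """Seconds to push one chunk of `chunk_mb` megabytes over the uplink."""
    return chunk_mb * 8 / uplink_mbps  # megabytes -> megabits, then divide by rate

for mb in (10, 50, 100, 1000):
    t = chunk_upload_seconds(mb)
    status = "ok" if t < TIMEOUT_S else "exceeds timeout"
    print(f"{mb:>5} MB chunk -> {t:6.1f}s  ({status})")
```

So at this uplink even a 100 MB chunk clears the 100-second limit with room to spare, while chunks approaching 1 GB cannot complete in time no matter how the server or proxy is tuned.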