Desktop App Sync Connection/Size Limit

I’m having an issue syncing large files via the desktop application. I have seen other, similar issues related to Cloudflare, but I am not using any Cloudflare services, so I don’t think that’s related.

I had an old v23 server that worked without any sync issues (including large files). I also have a reverse-proxy server in front of it. I ran into problems with the v23 server and couldn’t bring it up to date, so I abandoned it and spun up a new v30 server. Now everything seems to work without issue through the web interface (including large file uploads).

But when using the desktop app to push large files, it seems to sync 100-200MB before showing a “connection closed” error.

I connected the desktop client directly to the server using the internal DNS hostname with no change, so I don’t think it’s the proxy config. This also eliminates any external factors like Cloudflare (though, again, I’m definitely not using Cloudflare). Because of this, I suspect an issue on the Nextcloud server instance itself.

My Apache php.ini config includes:

max_execution_time = 3600
max_input_time = 3600
upload_max_filesize = 10G
post_max_size = 10G
memory_limit = 2G

When I tail both the Apache (Nextcloud) and nginx (reverse-proxy) logs, I don’t really see anything unusual, just a bunch of “PUT”s followed by a “PROPFIND”.
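
For context, I believe those “PUT”s are the desktop client’s chunked upload at work: each PUT is one chunk going to the uploads endpoint, followed by a MOVE that assembles the file and a PROPFIND that refreshes metadata. The access-log lines look roughly like this (the username, transfer ID, and chunk names here are made up for illustration):

"MKCOL /remote.php/dav/uploads/steve/1658948416 HTTP/1.1" 201
"PUT /remote.php/dav/uploads/steve/1658948416/00001 HTTP/1.1" 201
"PUT /remote.php/dav/uploads/steve/1658948416/00002 HTTP/1.1" 201
"MOVE /remote.php/dav/uploads/steve/1658948416/.file HTTP/1.1" 201
"PROPFIND /remote.php/dav/files/steve/ HTTP/1.1" 207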

I’ve done some digging, but most of the promising results relate to Cloudflare issues. Is there someone who might be able to point me in the right direction here?

Hi stevezemlicka,

Pity you had to set up everything anew, but nice that you are up to date :tada:

With a direct connection, 200 MB is reached within seconds, is it not? How long does it take before giving a timeout?

There are a few more timeouts that may be set; @xGuys listed quite a few in a similar post to yours today.
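
For reference, these are the knobs that usually matter on a stack like yours (nginx in front, Apache behind). The defaults are small (60 s timeouts, a 1 MB body limit on nginx), and the values below are only illustrative:

# nginx (reverse proxy)
client_max_body_size 10G;   # body size limit; the default 1m returns a 413
client_body_timeout 3600s;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
send_timeout 3600s;

# Apache (Nextcloud host)
Timeout 3600
ProxyTimeout 3600   # only used where Apache itself proxies, e.g. to FPM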

As with a mere 200 MB you are probably not hitting a file size limit, your topic title might attract more attention if it mentioned “timeout” instead of “size limit”. Maybe you could change it once you have concluded the problem does not depend on file size.

Thanks, yea it was a hassle…but less than all them upgrades lol.

Yea, it hits the 100-200MB within about 10 seconds or so. It does make progress on the big files if I keep hitting “Sync Now”: it sort of picks up where it left off, does another 100-200MB, and then drops again. It had no trouble with 10GB of data as long as no single file was bigger than about 200MB.

I figured that since it was hitting this within a few seconds, it wasn’t a timeout issue but rather something size related.

Ten seconds is fast indeed. There may be hints in the client log. Thinking of the client, it is not something as silly as the sync size limit, is it?

[screenshot: the desktop client’s sync size limit setting]

It could be that, in setting up your new installation, some values jumped back to their (lower?) defaults.
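
If you want to rule the client out, its tunables live in nextcloud.cfg (on Linux usually $HOME/.config/Nextcloud/nextcloud.cfg, on Windows %APPDATA%\Nextcloud\nextcloud.cfg). A sketch of the chunking-related entries in the [General] section, with what I believe are the current defaults — do check the client documentation for your version:

[General]
; network timeout in seconds (default 300)
timeout=300
; initial chunk size in bytes (default 10 MB)
chunkSize=10000000
; upper bound for dynamic chunk sizing (default 5 GiB)
maxChunkSize=5368709120
; target per-chunk upload time in ms; 0 disables dynamic chunk sizing
targetChunkUploadDuration=60000

(Note that the GUI size limit in the screenshot above only asks for confirmation before syncing large folders; it is separate from these chunking settings.)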

To get a useful sync log, click the “Create debug archive” button.

I kept hitting “Sync Now” and eventually the file in question synced. This was a 4.5GB file. The weird thing was that once it got to about 3.8GB, it went the rest of the way without a problem.

I uninstalled the Nextcloud client and cleaned up the local files (including the config). I reinstalled, and it pulled all the data without issue (including the file previously in question). However, when adding another multi-GB file to push, it had the same problem.

So I grabbed a different laptop, on which I had recently installed openSUSE, and installed the Nextcloud client. This also pulled all data without issue. However, when pushing a 5.8GB file, it only got “disconnected” 3 times.

I suppose it’s possible I have two issues: one with the client or something specific to the first laptop, and a second, less prevalent issue with a server timeout setting that caused the 3 disconnects on the other laptop.

FWIW, I think I can verify that I’m not having wireless/networking issues, because I am able to maintain a 4-8ms ping to both the proxy server and the Nextcloud server throughout the tests.

You’re not afflicted by Cloudflare, are you?

I was reading another thread that has “size limit” in its title; there @jtr mentions:

I wish that were the case because you’re right, that seems like it would perfectly explain the symptoms. I’ve looked at implementing that in the past for security purposes but never did. I can also isolate this behavior to within my private network (private DNS with all hosts resolving to private IPs), which, it seems, would eliminate any external factors.

That would have been my suggestion. Strange.

I’m out of suggestions, but I’d be terribly interested in reading the ultimate cause!

I switched the Nextcloud server to PHP-FPM. My initial test (a clean/new client connecting directly to Nextcloud and bypassing the proxy) successfully pushed a 5.8GB file. This seems promising, so I’m in the process of testing the other scenarios.
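
For anyone following along, the handoff now looks roughly like this in my Apache vhost (the socket path varies by distribution and PHP version, so treat it as a placeholder):

<FilesMatch "\.php$">
    # hand PHP requests to the FPM pool over a unix socket
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>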

What’s weird is that there was little to no consistency (at least in the clean/new client setup) in the time or size of the cutoff. But it seems that something in my fcgi config/setup was causing issues with only the desktop client.

The only downside I’m seeing so far is that previously my push would saturate my client network connection and remain relatively steady. With FPM, there’s much more speed fluctuation, resulting in a ~10-20% overall performance hit on large file uploads. I may need to look into further optimization tweaks for FPM.
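
If anyone else lands here, the settings I plan to experiment with live in the FPM pool config (e.g. /etc/php/*/fpm/pool.d/www.conf); the values below are just a starting point and should be sized to the server’s RAM:

pm = dynamic
; hard cap on concurrent worker processes
pm.max_children = 64
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
; recycle workers periodically to contain memory growth
pm.max_requests = 500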

The Desktop client currently defaults to a maximum chunk size of 5 GiB.

One guess: do your Apache access and/or error logs indicate a 413? If so, it may be LimitRequestBody in Apache interacting with the desktop client’s default maxChunkSize of 5 GiB. See here and here.
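
For reference, newer Apache releases (2.4.54 and later, if I remember right) changed the unset default of LimitRequestBody from unlimited to 1 GiB, which is smaller than the client’s 5 GiB maximum chunk. Something like this in the server or vhost config would rule it out:

# 0 = unlimited; recent Apache defaults to 1 GiB when unset
LimitRequestBody 0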

What version of the Desktop client?

I tested the original system (the one that seemed to disconnect every 100-200MB), and it worked without issue, including through the proxy.

I’m confident in saying now that it was due to the fcgi implementation. There was likely a way to fix it without switching to FPM, but I couldn’t find it. Switching to FPM wasn’t too difficult, so I’m taking this as the solution, since I have no particular requirements from a PHP-processing perspective other than basic functionality.
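
In case it helps anyone searching later: if your old handler was mod_fcgid, these are the directives I would look at first, since their defaults are surprisingly low (128 KB request body, 40 s I/O timeout, 300 s busy timeout). I never confirmed which one was biting me, so treat this purely as a guess:

# raise mod_fcgid limits well above the largest expected chunk/upload
FcgidMaxRequestLen 10737418240
FcgidIOTimeout 3600
FcgidBusyTimeout 3600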
