Force nextcloud client to retry/resume again after connection is broken

My client is synchronizing a huge amount of data, 1500 GB. In the process, the Nextcloud client encounters many broken connections, with messages such as “socket operation timed out”. The problem is that after such a message appears, Nextcloud takes a long time to retry/resume syncing again. I don’t know exactly how long, more than ten minutes, maybe an hour or so. The only way to resume right now is to manually click “Force sync now” or to restart the Nextcloud client.
How can I force the Nextcloud client to automatically retry/resume after one minute (or ten minutes)?

Nextcloud Client: Version 3.9.1, osx-22.5.0.

Server:
$ ./occ support:report
Nextcloud version: 27.0.1 - 27.0.1.2
Operating system: Linux 6.2.16-4-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-4 (2023-07-07T04:22Z) x86_64
Webserver: Unknown (cli)
Database: mysql 10.5.19
PHP version: 8.1.20
Modules loaded: Core, date, libxml, openssl, pcre, zlib, filter, hash, json, pcntl, Reflection, SPL, session, standard, sodium, mysqlnd, PDO, xml, bcmath, calendar, ctype, curl, dom, mbstring, FFI, fileinfo, ftp, gd, gettext, gmp, iconv, igbinary, imagick, imap, intl, ldap, exif, mysqli, pdo_mysql, Phar, posix, readline, redis, shmop, SimpleXML, sockets, sysvmsg, sysvsem, sysvshm, tokenizer, xmlreader, xmlwriter, xsl, zip, Zend OPcache

I am currently trying a workaround, with these settings in nextcloud.cfg:
remotePollInterval=30000 (currently and default: 30 seconds)
forceSyncInterval=300000 (currently 5 minutes; default: 2 hours)
fullLocalDiscoveryInterval=300000 (currently 5 minutes; default: 1 hour)
notificationRefreshInterval=300000 (currently and default: 5 minutes)
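For reference, a sketch of how those overrides might look in nextcloud.cfg; I believe these keys belong in the [General] section, and the values are in milliseconds:

```ini
[General]
; all intervals are in milliseconds
remotePollInterval=30000            ; 30 s (the default)
forceSyncInterval=300000            ; 5 min (default: 2 h)
fullLocalDiscoveryInterval=300000   ; 5 min (default: 1 h)
notificationRefreshInterval=300000  ; 5 min (the default)
```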


What happens on the server when the socket operation times out?

Performance can increase a lot by configuring the database caches and additional caching servers:

https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html

https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html
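As a sketch of what those docs describe, a typical caching setup in config/config.php combines APCu for the local cache with Redis for the distributed cache; the host and port below are assumptions for a Redis instance running on the same machine:

```php
<?php
// Excerpt from config/config.php ($CONFIG array); assumes a local Redis server
'memcache.local' => '\OC\Memcache\APCu',        // fast per-process cache
'memcache.distributed' => '\OC\Memcache\Redis', // cache shared across PHP workers
'redis' => [
    'host' => 'localhost', // assumption: Redis on the same host
    'port' => 6379,
],
```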

I think it’s better to target the real issue rather than patching the behavior further down the line. The desktop client is probably slow at checking large folders. There are already a few GitHub issues; this one addresses the upload speed:

Check out which parts concern your setup. The issues are a bit complex, since both the server and the client side are involved.

Interesting; polling less often could make sense for the initial upload. I’m not sure whether that is useful for the client in general, or whether it would make the whole sync solution more complicated.

I already changed the PHP settings in /etc/php/8.1/cli/php.ini. It helped.
post_max_size =16M → changed to 160M
upload_max_filesize =8M → changed to 80M

The client configuration changes did not solve the issue. The client still waits for more than one hour when a connection is broken:
remotePollInterval, forceSyncInterval, fullLocalDiscoveryInterval, notificationRefreshInterval

Errors in the server:

The expected-filesize errors are often caused by misaligned timeouts and accepted file sizes between the webserver and php-fpm. For ownCloud there was a list of such issues:

This is very old, from the time of Nextcloud 9. I’d check this against the webserver sample configurations in the documentation, and also check that the max_size settings are well aligned with the chunk-size settings of the client:
https://docs.nextcloud.com/desktop/latest/advancedusage.html?highlight=chunk
and server side:
https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html?highlight=php%20timeout
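To illustrate how the two sides can be aligned (the 50 MB value is just an example, not a recommendation): the client chunk size can be pinned in nextcloud.cfg, and the server’s maximum chunk size set via occ:

```ini
; nextcloud.cfg on the client, [General] section
; chunk size in bytes (example: 50 MB = 50 * 1024 * 1024)
[General]
chunkSize=52428800
```

```shell
# On the server, set the matching maximum chunk size (bytes)
./occ config:app:set files max_chunk_size --value 52428800
```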

For the deadlock, make sure you use Redis as the file-locking cache; it is much faster for this purpose than your normal database.
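A minimal sketch of that in config/config.php, assuming a Redis instance on localhost:

```php
<?php
// Excerpt from config/config.php ($CONFIG array):
// move transactional file locking from the database to Redis
'filelocking.enabled' => true,
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'localhost', // assumption: Redis on the same host
    'port' => 6379,
],
```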

This is just to avoid the initial transfer failures, so the client does not have to start over again.

If you start over, I’d also check the server side; perhaps the client sends a lot of queries to the server to check what has already been uploaded. In that case, database optimization and caching will help a lot.

If that doesn’t help, perhaps the client logfiles can. Sometimes obvious errors pop up there. But to some extent the process can simply take a while; I’m not sure what is considered “normal”.

I did some tests a long time ago by setting up a client with a large number of files, and just by changing the configuration went from about 120-150 files/min (Upload performance OC 8.2.1 (50 mysql queries per file) · Issue #20967 · owncloud/core · GitHub) to about 1000 files/min.
The sample set was 10 000 empty files.