Endless sync of untouched files

Dear all,

I have been using ownCloud/Nextcloud for a while, but have always had the same problem.

The clients (ownCloud did it, and now Nextcloud does the same) constantly sync files which I haven’t touched for months. Besides the bandwidth problem, I get constant notifications that “Nextcloud has synced X files”.


I can’t believe that I am the only one having this problem, but I can’t find anything online - mostly because I can’t really describe it well enough for Google.


Does anyone have an idea how to deal with this problem? Is there a logfile somewhere?

I am using the most recent Nextcloud server and client, but this has been an issue for me for a while already.


Thanks,
Jan

Can you tell us the version of the client and of the server you are using?
Anything in your server logfiles?

Can you try to connect to demo.nextcloud.com and see if the client does the same thing?

Sorry for replying to such an old thread, but this is the only post relevant to the issue I encountered.

Mine is an NCP 1.5.0.3 deployment with Nextcloud 24.0.5, and the issue only happens with one Windows 11 laptop running Nextcloud desktop client 3.6.0. Among the roughly 50 GB of files it syncs with the server, 5 of them get re-downloaded every time the system boots. Those files are nothing special (30 MB to 90 MB each) and haven’t been touched for months. This is quite annoying as it eats into my limited bandwidth. In fact, I didn’t notice it until a reminder email from my service provider yesterday telling me that I am close to my monthly quota.

  • Initially I suspected this was another form of the “mtime” issue, but the script from https://github.com/nextcloud-gmbh/mtime_fixer_tool_kit showed everything was normal.
  • The file hashes of the local files and the ones on the server were identical (see the check after this list).
  • Nothing weird shows up in the server log; it only indicates that the client initiated a sync request to download the file.
  • Since my server bandwidth usage jumped a lot during the last month, it seems to correlate with the Nextcloud Desktop 3.6.0 release a month ago.
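
(For anyone who wants to run the same check, something along these lines works; the data directory below is only an example and will be somewhere else on a typical NCP box:)

    # On the server: hash the copy stored in Nextcloud's data directory
    # (adjust the data directory, user name and path to your installation)
    sudo -u www-data sha256sum /var/www/nextcloud/data/<user>/files/<path-to-file>

Compare that digest with one computed on the laptop, for example with PowerShell’s Get-FileHash -Algorithm SHA256; if the two match, the repeated downloads are not caused by differing file contents.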

While I was reading the desktop client’s help documentation to find out where the client logs are kept, I got a notification about a new desktop client release, v3.6.1. Although the changelog doesn’t mention anything specific, I decided to upgrade to see whether it helped.

To my joy, it did!
I’m not sure whether some nasty bug really got fixed or whether the reinstall during the upgrade did the trick, but the 3.6.1 desktop client no longer endlessly requests the downloads.

Well, I celebrated prematurely. It still has the issue after a system reboot.
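
(In case it helps others: the debug log can be produced by starting the desktop client with debug logging enabled. The flags below come from the client’s documented command-line options; the install and output paths are just examples:)

    "C:\Program Files\Nextcloud\nextcloud.exe" --logdebug --logdir C:\Temp\nc-logs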

By examining the debug log generated by the Desktop Client, I noticed that the files which get endlessly synced show the pattern below:

    2022-10-20 18:48:47:734 [ info sync.discovery C:\Users\sysadmin\AppData\Local\Temp\2\windows-11938\client-building\desktop\src\libsync\discovery.cpp:350 ]:	Processing "公开课/情绪的表达与调节/情绪的表达与调节20190517/情绪的表达与调节.pptx" | (db/local/remote) | valid: false/true/true | mtime: 0/1558067438/1558067437 | size: 0/101022184/101022184 | etag: ""//"3d35a295118b7f7fdecc73c3d8b94520" | checksum: ""//"" | perm: ""//"WDNVR" | fileid: ""//"00006706ocf2vyrcnxmu" | inode: 0/178789/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked

More precisely, the error pattern is (db/local/remote) | valid: false/true/true | mtime: 0/1558067438/1558067437 | size: 0/101022184/101022184.
I guess this means that, for some reason, the file doesn’t have a corresponding mtime and size in the database.
I am not sure how these files got into this limbo state. My best guess is a buggy client release like this one:

https://help.nextcloud.com/t/desktop-client-3-4-0-destroys-local-time-stamp-and-keeps-uploading-data-to-server/128512/48
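
If I read that discovery line correctly, the “db” values come from the client’s local sync journal, a hidden SQLite file (named something like .sync_*.db) in the root of the synced folder. With the client stopped, a query along these lines should show the limbo entries; the table and column names are my reading of the sync journal schema, so treat it as a sketch:

    # Run on the machine that holds the synced folder, with the desktop client stopped
    # (needs the sqlite3 command-line tool; the journal file name is an assumption)
    sqlite3 .sync_*.db "SELECT path, modtime, filesize FROM metadata WHERE modtime = 0 OR filesize = 0;"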

Anyway, due to my lack of SQL knowledge I didn’t dare touch the database directly, and ended up doing the following:

  1. Stop the local Nextcloud Desktop client.
  2. Move the files with the above-mentioned pattern to another local folder (as a backup).
  3. Delete the files from the server using the Nextcloud web interface.
  4. Run sudo -u www-data php /var/www/nextcloud/occ files:scan --all to ensure file and database integrity (see the note after this list).
  5. Re-upload the backed-up files.
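
(A note on step 4: scanning the whole instance can take quite a while. If I read the occ documentation correctly, the scan can also be limited to one user or even one folder, which would have been enough here; the user name and paths below are examples:)

    # Scan only one user's files instead of the whole instance
    sudo -u www-data php /var/www/nextcloud/occ files:scan <user>

    # Or limit the scan to one folder of that user
    sudo -u www-data php /var/www/nextcloud/occ files:scan --path="<user>/files/<folder>"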

Though it consumed a bit more bandwidth, it finally put an end to the problem.

Hope my experience can help.