After migration - desktop client resyncing all files

Nextcloud version: 27.1.2
Operating system and version: Ubuntu 22.04 LTS
Apache or nginx version: Nginx
PHP version: 8.2

The issue you are facing:
I am following the documentation (same rsync command, same mysql dump), but when pointing the desktop client to my new server it redownloads all the files.

Steps to replicate it:

  1. Follow the documentation for migrating the app, db and data
  2. Point the client to the new server
  3. The desktop client redownloads all the files
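For reference, the documented migration boils down to something like the following sketch (hostnames, paths and database names here are placeholders, not my actual setup):

```shell
# Sketch of the documented server-to-server migration (paths are assumptions).
# 1. Put the old instance into maintenance mode
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# 2. Dump the database on the old server
mysqldump --single-transaction -u nextcloud -p nextcloud > nextcloud-db.sql

# 3. Copy the installation and the data directory, preserving metadata
rsync -Aavx /var/www/nextcloud/ newserver:/var/www/nextcloud/
rsync -Aavx /srv/nextcloud-data/ newserver:/srv/nextcloud-data/

# 4. On the new server: import the dump, then leave maintenance mode
mysql -u nextcloud -p nextcloud < nextcloud-db.sql
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```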

I am using the groupfolders app, is that the reason? I tried `occ maintenance:data-fingerprint` but got the same behavior. I also tried `occ groupfolders:scan --all` before turning maintenance mode off, but again the desktop client is redownloading all the files.
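A sketch of the command sequence I ran, assuming a standard occ setup under /var/www/nextcloud:

```shell
# Run as the web server user; the install path is an assumption.
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:data-fingerprint
sudo -u www-data php occ groupfolders:scan --all
sudo -u www-data php occ maintenance:mode --off
```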

I need help.
Thank you

How did you migrate / copy the files?

You have to make sure that the files keep their original timestamps when you copy them to the new server. If you are using rsync, this can be done by running it with the -t option, as described in paragraph 4 of the documentation.

@bb77 Thank you for your reply. I am using the exact same command as stated in the documentation: rsync -Aavx (-t is implied by -a).

Yesterday, after finishing the database import, running the final rsync and pointing my /etc/hosts to the new instance, the timestamps were kept for folders and files in the web interface BUT the size was in **Pending** state.

Today the new instance is showing the size of the folders (no more Pending) BUT the timestamps are now lost (all folders show Modified 10 hours ago). I guess the cron job scanned the groupfolders and changed something?

Anyone migrated to a new server while using groupfolder app?

I don’t have a solid answer offhand, but did the datadirectory path change when you migrated server-to-server?

I don’t recall offhand the internal specifics of groupfolders, but if the datadirectory changed, it’s possible that a corresponding oc_storages change is also required when not using S3.
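If the datadirectory path changed, the local storage id in oc_storages will no longer match the new path. A quick way to check (database name and credentials are assumptions):

```shell
# Local storages are keyed as "local::/path/to/data/"
mysql -u nextcloud -p nextcloud \
  -e "SELECT numeric_id, id FROM oc_storages WHERE id LIKE 'local::%';"
```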

Since groupfolders is its own beast, it might be worth dropping into its dedicated repository and documenting your situation/use case.

The reason must be here:

the client has no way to know "https://mynewserver.tld" is a copy from "https://myoldserver.tld" and must resync all the data. I would expect there is no real “re-download” but the client must iterate over each file and understand the files are identical.

@wwe Same DNS; I am just adding it to my /etc/hosts for QA. Right now the desktop client is not just reindexing, it is downloading (I can see the bandwidth starting to sync TBs of data).

@jtr Super hint, I will try that. Indeed, I can see a new entry in oc_storages, so I will edit /etc/fstab to reflect what I have in prod. Will report back!

There’s hope…

What about using occ to re-scan all of your files and folders before syncing?
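For example, assuming the standard install path, a full rescan would look like:

```shell
# Re-scan all users' files so the file cache matches what is on disk
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```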

Fwiw, I avoid group folders due to always having difficulty with it. Ymmv

@just groupfolder is a must when you need advanced folder permissions. The problem was that the absolute path of the data folder changed, and groupfolders doesn’t like that (a new entry is created in oc_storages).

I changed my /etc/fstab to reflect what I have currently, but another solution is to update the oc_storages table.
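For anyone who prefers fixing the database rather than the mount point, a sketch of that alternative (the old/new paths are placeholders; back up the database and keep maintenance mode on while you do this):

```shell
# Rewrite the local storage id to point at the new data directory path
mysql -u nextcloud -p nextcloud -e \
  "UPDATE oc_storages SET id = 'local::/new/path/data/' WHERE id = 'local::/old/path/data/';"
```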

I am not 100% done with the QA, but the web interface is not showing Pending anymore (I see the correct folder size) and the desktop client is not resyncing all files. Looking good!

Thank you everyone for taking the time, @jtr I owe you one :beers:


This topic was automatically closed 8 days after the last reply. New replies are no longer allowed.