Backup Strategy using hardlinks

Hey there,

I was thinking about backing up a Nextcloud instance using the following strategy:

  1. Put Nextcloud into maintenance mode.
  2. Make a DB backup.
  3. cp -al nextcloud-data to a temporary directory.
  4. Take Nextcloud out of maintenance mode to keep downtime short.
  5. rsync the hardlink copy of the data dir to a remote directory.
  6. Delete the hardlink copy of the data dir.
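The six steps above, sketched as a shell function — the paths, database name, and rsync target are placeholders I made up, not Nextcloud defaults:

```shell
#!/bin/sh
# Sketch of the proposed strategy. All paths, the DB name and the
# remote target are assumptions -- adjust to your setup.

nextcloud_backup() {
    nc_dir=/var/www/nextcloud                  # assumed install path
    data_dir="$nc_dir/data"                    # assumed data directory
    snap_dir=/var/backups/nc-snap              # temporary hardlink copy
    remote=backup@backuphost:/srv/nextcloud/   # assumed rsync target

    # 1+2: pause writes, dump the DB while it cannot change under us
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --on
    mysqldump --single-transaction nextcloud > /var/backups/nextcloud.sql

    # 3+4: hardlink "snapshot" of the data dir, then let users back in
    cp -al "$data_dir" "$snap_dir"
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --off

    # 5+6: ship the frozen copy off-site, then drop it
    rsync -a --delete "$snap_dir/" "$remote"
    rm -rf "$snap_dir"
}
```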

The question is: will this work? Will the backup always be in a consistent state (DB and filesystem)?
This should be fine for new files and deletions, since those won't be mirrored into the hardlinked copy.
It then depends on how Nextcloud handles file updates. I would assume it creates a new file rather than updating files in place. That would break the hard link and leave the copied file alone, which is what we want during the backup process.
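That assumption is easy to check locally with nothing but coreutils — whether an update "breaks" the hardlink depends entirely on whether the writer truncates in place (same inode) or writes a temp file and renames it over the original (new inode). Which path Nextcloud's various clients take is exactly what needs verifying:

```shell
#!/bin/sh
# Demo: how in-place updates vs. rename-style updates interact with a
# cp -al "snapshot". Uses a throwaway directory; no Nextcloud involved.
set -eu
work=$(mktemp -d)
mkdir "$work/data"
echo "v1" > "$work/data/file.txt"

# The "poor man's snapshot": copy the tree, hardlinking every file
cp -al "$work/data" "$work/snap"

# Case 1: in-place update -- same inode, so the snapshot changes too
echo "v2" > "$work/data/file.txt"
cat "$work/snap/file.txt"        # prints "v2" -- not what we want

# Case 2: write to a temp file and rename over the original -- the
# rename breaks the link and the snapshot keeps the old content
echo "v3" > "$work/data/file.txt.tmp"
mv "$work/data/file.txt.tmp" "$work/data/file.txt"
cat "$work/snap/file.txt"        # still prints "v2"

rm -rf "$work"
```

So the scheme is only safe if every write path into the data dir (web interface, sync client, external apps) uses the rename pattern.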

Are there any other operations which I missed, which would cause the filesystem and db backup to get out of sync?

Why are you using hardlinks? I don't think a hardlink is a proper backup technique.

Perhaps rsnapshot, coupled with an SQL dump script, would do the job? See

Combine these two and you will have a full incremental backup of files + database.

Thanks for the suggestions, but I'm afraid they don't answer what I was looking for.
The point is that all this has to be done while nextcloud is in maintenance mode, otherwise the db backup and the filesystem snapshot end up in an inconsistent state.

Depending on the size of the Nextcloud instance, this sync can take quite a while, leaving Nextcloud unavailable to users during that time.

So the idea was to use the cp -al command (which copies a directory structure, hardlinking all contained files) to make a 'poor man's snapshot' of the data-dir state, which can then be synced with rsync/rsnapshot/whatever.

You can add occ maintenance:mode --on and then occ maintenance:mode --off somewhere in the script, and that should do the trick.

I did some experiments:

  • Updating a file locally and uploading through the sync-client breaks the hardlink.
  • Updating through the web-interface does not.

So I guess this isn’t really useful after all.

What I think I will be doing is an rsync before enabling maintenance mode, and then another one right after creating the DB snapshot with maintenance mode enabled. This should get the downtime down to a few seconds.
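A sketch of that two-pass approach — install path and rsync target are assumptions:

```shell
#!/bin/sh
# Two-pass backup: a long rsync while Nextcloud stays online, then a
# short catch-up pass inside the maintenance window. Paths and the
# remote target are assumptions.

two_pass_backup() {
    nc_dir=/var/www/nextcloud
    data_dir="$nc_dir/data"
    remote=backup@backuphost:/srv/nextcloud/

    # Pass 1: bulk transfer with users still online. The result may be
    # inconsistent -- that is fine, pass 2 fixes it up.
    rsync -a "$data_dir/" "$remote"

    # Short window: freeze writes, dump the DB, then catch up the few
    # files that changed during pass 1.
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --on
    mysqldump --single-transaction nextcloud > /var/backups/nextcloud.sql
    rsync -a --delete "$data_dir/" "$remote"
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --off
}
```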

@MorrisJobke: You talked about a read only mode for backups during the conference. Any plans to implement that yet?

Is this your own server? You could use LVM, which allows you to create snapshots of your filesystem, or directly use btrfs or ZFS.
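With LVM, the maintenance window shrinks to the instant it takes to create the snapshot; the slow transfer then reads from the frozen snapshot. A sketch, where the volume group/LV names and mount point are assumptions:

```shell
#!/bin/sh
# LVM variant: snapshot the data volume inside a very short maintenance
# window, then back up from the snapshot at leisure. VG/LV names,
# snapshot size and paths are assumptions.

lvm_backup() {
    nc_dir=/var/www/nextcloud

    # Tiny window: freeze writes, dump DB, create the snapshot
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --on
    mysqldump --single-transaction nextcloud > /var/backups/nextcloud.sql
    lvcreate --snapshot --size 5G --name nc-snap /dev/vg0/nextcloud-data
    sudo -u www-data php "$nc_dir/occ" maintenance:mode --off

    # Slow part happens against the read-only snapshot, users unaffected
    mkdir -p /mnt/nc-snap
    mount -o ro /dev/vg0/nc-snap /mnt/nc-snap
    rsync -a /mnt/nc-snap/ backup@backuphost:/srv/nextcloud/
    umount /mnt/nc-snap
    lvremove -f /dev/vg0/nc-snap
}
```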

No plans yet - but I opened an issue:

That was why I recommended rsnapshot. The first backup will take a long time, but after that it's fast incremental backups, coupled with a database dump. The cmd_preexec and cmd_postexec hooks could be used to turn maintenance mode on and off and to trigger the SQL dump. Unless your data churn is insanely high, the backup should take seconds.
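Something like this excerpt of rsnapshot.conf (fields must be TAB-separated; the two helper scripts are hypothetical names for whatever toggles maintenance mode and writes the SQL dump — rsnapshot does not ship them):

```
snapshot_root	/var/backups/rsnapshot/
retain	daily	7
retain	weekly	4
cmd_preexec	/usr/local/bin/nc-pre-backup.sh
cmd_postexec	/usr/local/bin/nc-post-backup.sh
backup	/var/www/nextcloud/data/	nextcloud/
backup	/var/backups/nextcloud.sql	nextcloud/
```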

@tflidd's idea of using filesystem snapshots is also good, especially if the same filesystem contains the data folder and the SQL files.

For the database it is still better to use mysqldump.

I recommend rsyncbtrfs for btrfs snapshots.

Maybe this is an option: it is a web-based application which uses rsnapshot & rsync. You can develop your own scripts and run them as pre- or post-scripts.