I’m running two Nextcloud instances with <10 users on self-managed IaaS hosted at a cloud provider, and I want to prepare for possible data loss.
I’m snapshotting on a weekly basis, rotating round robin - however, this wouldn’t cover data loss at the cloud provider itself.
I’m running Nextcloud out of two different directories for security reasons (let’s call them nx_data and nx_www). All user data is on nx_data. nx_data is rsynced to a remote site, including a point-in-time copy of the database.
My question:
What data is mandatory to recover users’ data after a server failure? The risk is low - I only want to ensure that user data can be recovered, even if that requires setting up a new instance. Syncing the whole instance with all files would incur high costs due to the higher change rate and the high retrieval costs at the cloud provider (the same goes for BMR at the cloud provider).
Thx, but that doesn’t answer the question, as I don’t want to back up the whole instance
Perhaps you can rephrase your question or clarify the scenario you’re thinking about in that case?
You can’t recover without the actual data.
Neither of the links covers backing up your “whole instance”. They cover the minimum bits of data that Nextcloud Server needs for recovery:
user files
databases
config
Depending on your use case, the way you use Nextcloud, and what you deem “acceptable to lose”, you might be able to exclude:
database
config
But the bulk of the “data” is going to be user files in most cases. So cutting these last items isn’t likely to be particularly beneficial. And don’t overlook that various apps store lots of user data within the database (not in files). So the database contains user data as well.
You already mentioned rsync so I figured you were also already operating within a scenario where the only things you’re pushing over to your replication site/repository are changes.
How/whether you also back up and recover your OS itself is up to you and doesn’t really affect Nextcloud at all.
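Put concretely, the minimum set above (user files, database, config) can be captured with something like the sketch below. All paths, the database name, and the remote target are assumptions for illustration; by default it only prints the commands it would run:

```shell
#!/bin/sh
# Minimal Nextcloud backup sketch: user files, database dump, config.php.
# Paths, DB name, and remote target are illustrative assumptions.
NX_DATA=/srv/nx_data                      # user files (assumption)
NX_CONFIG=/srv/nx_www/config/config.php   # instance config (assumption)
DB_NAME=nextcloud                         # assumption
REMOTE=backup@remote-site:/backups/nextcloud

# DRY_RUN=1 (the default) prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Consistent database dump (remember: several apps keep user data here too).
run sh -c "mysqldump --single-transaction $DB_NAME | gzip > /tmp/$DB_NAME-db.sql.gz"
# 2. Push only changes: dump, config, and user files.
run rsync -a "/tmp/$DB_NAME-db.sql.gz" "$NX_CONFIG" "$NX_DATA" "$REMOTE"
```

Because rsync transfers only deltas, repeated runs stay cheap compared to re-sending the whole instance.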
Thanks a lot - that was exactly what I was looking for. I’m now running the following procedure to recover from possible data loss:
Snapshot the filesystem via the cloud provider’s snapshotting mechanism, quiescing the database until the snapshot is completed. This is done on a regular basis, with dailies, weeklies and monthlies in round robin.
Once a month (based on the usage pattern), I create a dump of the database to a local file and compress it. The file is transferred to a remote site (a local QNAP filer running Ubuntu with rsync) together with config.php + the user files, using rsync.
I guess this should protect me from the data loss scenarios above - at the very least it would let me recover from a failure of the cloud provider’s data infrastructure.
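The quiesce-around-snapshot step can be scripted roughly like this. `occ maintenance:mode` is a real Nextcloud command; the install path and the snapshot command are placeholders/assumptions, and by default the sketch only prints what it would run:

```shell
#!/bin/sh
# Sketch: quiesce Nextcloud and the DB around a filesystem snapshot.
# DRY_RUN=1 (the default) prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run php /srv/nx_www/occ maintenance:mode --on    # block client writes (path is an assumption)
run mysql -e "FLUSH TABLES WITH READ LOCK"       # flush pending DB writes
# Caveat: the read lock only lasts as long as the client session, so a real
# script must keep one DB session open across the snapshot (or rely on
# maintenance mode plus --single-transaction dumps instead).
run echo "<provider-specific snapshot command>"  # placeholder
run mysql -e "UNLOCK TABLES"
run php /srv/nx_www/occ maintenance:mode --off
```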
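For the recovery side, rebuilding onto a fresh instance from that remote copy would look roughly like the following. Paths, filenames, and the remote target are assumptions; `maintenance:data-fingerprint` is a real occ command that tells sync clients the server data was restored. As above, the sketch only prints the commands by default:

```shell
#!/bin/sh
# Sketch: restore a fresh Nextcloud instance from the remote backup set.
# DRY_RUN=1 (the default) prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

REMOTE=backup@qnap:/backups/nextcloud   # assumption
NX_DATA=/srv/nx_data                    # assumption
NX_WWW=/srv/nx_www                      # assumption

run rsync -a "$REMOTE/nx_data/" "$NX_DATA/"                    # user files back
run rsync -a "$REMOTE/config.php" "$NX_WWW/config/config.php"  # instance config
run rsync -a "$REMOTE/nextcloud-db.sql.gz" /tmp/               # fetch DB dump
run sh -c "gunzip -c /tmp/nextcloud-db.sql.gz | mysql nextcloud"  # re-import
run php "$NX_WWW/occ" maintenance:data-fingerprint             # notify sync clients
```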