Recommendation for implementing an Offsite Backup

I have a working instance of Nextcloud. It uses Nextcloud All In One with NixOS as the base OS, and I also use an Nginx reverse proxy. My data directory is on a ZFS dataset. I also created a “backup” dataset in ZFS where I currently save any Nextcloud All In One backups.

I would like to implement an off-site backup. I have some hardware for this: an old desktop with two hard drives for creating storage in RAID, most likely ZFS. I also have an off-site location for it, where I can forward any ports that are needed; I would like this to be secure. I would like to have automated monthly backups, and I don’t see a need to keep anything for more than a year.

I would think there are a lot of ways to accomplish this and, unfortunately, I have limited time to investigate them. Can anyone suggest some strategies for this? Does the All In One have a function for doing this? I know of ZFS send and receive, and there are tools to simplify this, but I have never used them before. Are they a good strategy?
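
For reference, my rough understanding of the basic zfs send / receive pattern over SSH is something like the following (pool, dataset, snapshot, and host names are made up):

# Initial full replication: snapshot the dataset and send it to the off-site pool
zfs snapshot tank/nextcloud@monthly-2024-01
zfs send tank/nextcloud@monthly-2024-01 | ssh backup-host zfs receive backuppool/nextcloud

# Later runs only send the delta between the previous and the new snapshot
zfs snapshot tank/nextcloud@monthly-2024-02
zfs send -i tank/nextcloud@monthly-2024-01 tank/nextcloud@monthly-2024-02 | ssh backup-host zfs receive backuppool/nextcloud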

Thank you for your help


Nextcloud AIO has a backup solution implemented:

Yes. Some things I try to consider:

  • having a backup that pulls the data (so that someone who hacks the server does not get access to the backup storage)
  • trying to do incremental backups; I have used rsnapshot for this (see the sketch after this list)
  • backup encryption
  • restore time, or splitting the data into data you want to be able to restore quickly (stuff you currently work on, calendar) and data that can be restored over a few hours or days (photos)
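
For the incremental part, a minimal pull-style rsnapshot configuration looks roughly like this (paths, host, and retention are placeholders, and note that the fields in rsnapshot.conf must be separated by tabs, not spaces):

# /etc/rsnapshot.conf excerpt — runs on the backup machine and pulls from the server
config_version	1.2
snapshot_root	/tank/rsnapshot/
cmd_ssh	/usr/bin/ssh
# keep twelve snapshots of the 'monthly' interval, rotated on each 'rsnapshot monthly' run
retain	monthly	12
# pull the server's backup directory over SSH (key-based login assumed)
backup	root@nextcloud-host:/tank/backup/	nextcloud/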

Instead of having a backup on a RAID system, I prefer having two separate ones in different locations, and perhaps offline ones as well.

I read about one that I wanted to check out in more detail: https://restic.net/
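
From a quick look at its documentation, the basic workflow appears to be roughly the following (untested; repository location and paths are placeholders):

# Create an encrypted repository on the off-site machine (prompts for a passphrase)
restic -r sftp:backup@offsite-host:/tank/restic init

# Back up the data; restic deduplicates, so every run after the first is effectively incremental
restic -r sftp:backup@offsite-host:/tank/restic backup /tank/nextcloud

# Keep a year of monthly snapshots and prune everything else
restic -r sftp:backup@offsite-host:/tank/restic forget --keep-monthly 12 --prune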

And, in the end, what counts: do a test restore from a backup, so you know how the whole chain of recovery works.
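
With restic, for example, that test could be as simple as this (same placeholder repository as above):

# Restore the newest snapshot into a scratch directory and inspect it
restic -r sftp:backup@offsite-host:/tank/restic restore latest --target /tmp/restore-test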

Thank you for the suggestions.

I have gotten the Borg backup within Nextcloud All In One (AIO) working and saving to a backup directory on the local machine. Is there a way to change the permissions and group ownership of the backup files? I am not too keen on the idea of having to use the root user to access the files in order to copy them to a remote location. Ideally, I would like to be able to set a group and have any member of it get the same access to these files as root. Is there a configuration within AIO to do this? Otherwise, I was thinking of using something like the setgid bit and ACLs to do this, i.e. something like:

# Hand the backup tree over to the backup group (GID 1900 here)
sudo chgrp -R 1900 /tank/backup/borg/
# setgid (2xxx) on directories so newly created files inherit the group
sudo find /tank/backup/borg/ -type d -exec chmod 2770 {} \;
sudo find /tank/backup/borg/ -type f -exec chmod 660 {} \;
# Default ACL so files created later stay group-accessible;
# -R covers subdirectories, and X grants execute only on directories
sudo setfacl -R -d -m g::rwX /tank/backup/borg/
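
To sanity-check the result, getfacl (from the same acl package as setfacl) should show the group, the setgid bit on directories, and the default ACL entries:

getfacl /tank/backup/borg/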

Disclaimer: I am no expert on this, am not really sure if I am doing this right, and have not done much testing. So if anyone knows better, please correct me.

As for copying the files to a remote location, I was considering having the remote machine pull them using rsync over SSH. Below is a script I modified from this:

#!/run/current-system/sw/bin/bash

# Please modify all variables below to your needs:
REMOTE_USER="user"
REMOTE_HOST="example.com"
PRIVATE_KEY_PATH="/root/.ssh/id_ed25519"
SOURCE_DIRECTORY="/tank/backup"
TARGET_DIRECTORY="/tank/remote-backup"

########################################
# Please do NOT modify anything below! #
########################################

# Check if TARGET_DIRECTORY exists
if ! [ -d "$TARGET_DIRECTORY" ]; then
    echo "The target directory must be an existing directory"
    exit 1
fi

# Check if TARGET_DIRECTORY is writable
if ! touch "$TARGET_DIRECTORY/testfile" 2>/dev/null; then
    echo "Cannot write to the target directory"
    exit 1
else
    rm "$TARGET_DIRECTORY/testfile"
fi

# Check SSH connection
if ! ssh -q -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" exit; then
    echo "SSH connection failed. Please check your remote host, user, and private key."
    exit 1
fi

# Check if SOURCE_DIRECTORY exists
if ! ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" test -d "$SOURCE_DIRECTORY"; then
    echo "The source directory does not exist."
    exit 1
fi

# Check if SOURCE_DIRECTORY is not empty
if ! ssh -T -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" << EOF
    if test -z "\$(ls -A "$SOURCE_DIRECTORY/")"; then
        exit 1
    fi
EOF
then
    echo "The source directory is empty which is not allowed."
    exit 1
fi

# Check if lock file exists in SOURCE_DIRECTORY
if ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" test -f "$SOURCE_DIRECTORY/lock.roster"; then
    echo "Cannot run the script as the backup archive is currently changed. Please try again later."
    exit 1
fi

if ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" test -f "$SOURCE_DIRECTORY/aio-lockfile"; then
    echo "Not continuing because aio-lockfile already exists."
    exit 1
fi

# Attempt to create a lock file in SOURCE_DIRECTORY
if ! ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" touch "$SOURCE_DIRECTORY/aio-lockfile" 2>/dev/null; then
    echo "Failed to create a lock file in the source directory. Please check your permissions."
    exit 1
fi

# Proceed with rsync
if ! rsync --stats --archive --human-readable --delete -e "ssh -i $PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST:$SOURCE_DIRECTORY/" "$TARGET_DIRECTORY"; then
    echo "Failed to sync the backup repository to the target directory."
    # Clean up the remote lock file so the next run is not blocked
    ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" rm "$SOURCE_DIRECTORY/aio-lockfile"
    exit 1
fi

# Remove the lock file from the source directory on the remote host,
# and the copy of it that rsync brought over to the target directory
ssh -i "$PRIVATE_KEY_PATH" "$REMOTE_USER@$REMOTE_HOST" rm "$SOURCE_DIRECTORY/aio-lockfile"
rm "$TARGET_DIRECTORY/aio-lockfile"

Disclaimer: I am no expert on this, am not really sure if I am doing this right, and have not done much testing. So if anyone knows better, please correct me.
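
To automate the monthly pull, a root crontab entry on the backup machine along these lines should work (script path, schedule, and log file are placeholders):

# m h dom mon dow: run at 03:00 on the first day of every month
0 3 1 * * /root/bin/pull-aio-backup.sh >> /var/log/pull-aio-backup.log 2>&1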

Thank you