Clone Nextcloud

This is more of an idea than a question.

Before every update, I’m afraid that it could go wrong and that I’ll have to spend the next few hours looking for a solution. Honestly, that hasn’t happened to me very often, but there are a few such cases here in the forum, and really I’m more concerned with the certainty that I can safely install the next update.

Wouldn’t it be practical to have a way to clone a complete Nextcloud instance on the web space?
That is: copy all files into a parallel or subordinate directory, and either create a new database (which will be difficult to automate with most hosts) or insert the tables into the original database under a new prefix.
Then the update could be tested with this clone.

Of course, the clone’s config.php would have to be adjusted, and it may not be necessary to copy the user files.
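For anyone who does have shell access, the manual equivalent might look roughly like this (paths, database name, and the new prefix are assumptions; the point of the idea is that a PHP script would do this instead):

# copy code and config into a parallel directory, leaving out the user files
rsync -a --exclude=data/ /var/www/nextcloud/ /var/www/nextcloud-test/
# re-import the tables into the same database under a new prefix
mysqldump -u nextcloud -p nextcloud > nc.sql
sed 's/`oc_/`nctest_/g' nc.sql > nc-test.sql
mysql -u nextcloud -p nextcloud < nc-test.sql
# then adjust the clone's config.php: new path, URL and 'dbtableprefix' => 'nctest_'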

It would be important that the clone script runs via PHP and does not require SSH access, so that it can also be used by users who only have a web space account.

How do you like this idea?
Is it worth creating such a script, or would it even make sense to implement the whole thing as an app?
Are there technical reasons why this might not work?

My skills are not sufficient to program such a script well enough. If I were to try it, I would depend on ideas and help.

Greetings Kolja

I think that is not important. For bigger installations it would not work; for testing you need a similar environment.

Yes. But perhaps it is better to regularly back up and restore the Nextcloud installation. Then a failed upgrade, an HDD crash, or a mistake in administration or usage no longer matters.

https://docs.nextcloud.com/server/19/admin_manual/maintenance/backup.html

Restoring backup — Nextcloud Administration Manual
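At its core, the documented procedure boils down to copying the files and dumping the database, roughly like this (paths and credentials are assumptions):

rsync -Aavx /var/www/nextcloud/ /backup/nextcloud-dirbkp_$(date +%Y%m%d)/
mysqldump --single-transaction -u nextcloud -p nextcloud > /backup/nextcloud-sqlbkp_$(date +%Y%m%d).bak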

Thanks for your advice.
Can I ask a little more?

What role does the size of the installation play?
Although I suspect that such a script is needed by those who don’t have large clouds with many users.

Why isn’t it the same environment?
Except for the path and the prefix, everything is the same.

The backup & restore procedures from the manual require SSH access and are therefore not suitable for many users.

BTW, the NC Updater already copies the files of the “old” installation into the data directory (without the user data files); is there a backup of the database somewhere, too?

edit: no!
Create backup: creates a backup of the existing code base in /updater-INSTANCEID/backups/nextcloud-CURRENTVERSION/ inside of the data directory (this does not contain the /data directory nor the database).
https://docs.nextcloud.com/server/19/admin_manual/maintenance/update.html

It is correct that users with only web space cannot use the shell.
But why should a user with a VPS or home server put up with the limitations of a backup/restore or clone tool built for Nextcloud installations hosted only on a web space?

What role does the size of the installation play?

On a web space such an installation does not work, and with more users a normal installation does not work there either.

Nothing beats regular backups, but it’s easy to run a Nextcloud instance from a self-contained directory by using Docker Compose. With such a deployment, making an exact copy of the current instance is as easy as copying that directory. After that, you can try “risky” things and rest assured that you can always roll back to the previous state in no time. This method still needs file system access, but it’s simple and it does the trick.
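For illustration, such a copy could look like this (the directory layout and paths are assumptions):

# stop the instance so files and database are in a consistent state
cd /srv/nextcloud && docker-compose down
# copy the whole instance, including bind-mounted volumes and config
cp -a /srv/nextcloud /srv/nextcloud-test
# change the published host port in docker-compose.yml, then start the copy
# (the project name comes from the directory name, so the containers won't clash)
cd /srv/nextcloud-test && docker-compose up -d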

I used SSH to copy my Nextcloud installation, which I run via Docker (so not a PHP solution). This script copies a whole Docker instance into a new one, serving on a new port number:

/usr/local/sbin/nextcloud-copy.sh

#!/bin/bash
function help(){
        echo "usage: $0 domain_to port domain_from"
}
if [ "$1" == "--help" ]; then
  help
  exit 0
fi
if [ "$1" == "" ]; then
  echo "no domain to copy to"
  help
  exit 1
fi

if [ "$2" == "" ]; then
  echo "no port"
  help
  exit 1
fi

if [ "$3" == "" ]; then
  echo "no source"
  help
  exit 1
fi

FROM=$3
DOMAIN=$1
PORT=$2

BASE=/var/kunden/docker-services
# extract the host port that the source instance publishes for port 80
TEMPLATE_PORT=$(grep :80 $BASE/$FROM/docker-compose.yml |cut -d" " -f 8|cut -d: -f 1)
DOCKERIMAGE="${DOMAIN//.}_app_1";     # container name of the copy's app service
DOCKER_COMPOSE=$BASE/$DOMAIN/docker-compose.yml

echo copying from $FROM to $DOMAIN
echo using $DOCKERIMAGE

if [ -d $BASE/$DOMAIN/ ]; then
  echo "folder $BASE/$DOMAIN/ already exists!"
  exit 1
fi

# copy everything except the users' file data
rsync -axX --info=progress2 $BASE/$FROM/ $BASE/$DOMAIN/ --exclude=volumes/html/data/*/files

cd $BASE/$DOMAIN/
# rewrite the volume paths and the published port for the copy
sed -i "s|$FROM/volumes|$DOMAIN/volumes|g" $DOCKER_COMPOSE
sed -i "s|$TEMPLATE_PORT:80|$PORT:80|g" $DOCKER_COMPOSE
echo '      - "'$DOMAIN':10.77.77.101"' >> $DOCKER_COMPOSE   # extra_hosts entry for the new domain

# point config.php at the new domain
sed -i "s|$FROM|$DOMAIN|g" $BASE/$DOMAIN/volumes/html/config/config.php
# make sure the data directory and the marker file Nextcloud expects exist
mkdir -p $BASE/$DOMAIN/volumes/html/data
touch $BASE/$DOMAIN/volumes/html/data/.ocdata
cd $BASE/$DOMAIN
docker-compose up -d
sleep 20   # give the containers time to come up
# replace the list below with the users that are not needed in the copy
for u in some user names that are not needed in the new copy; do
  echo "deleting user $u ..."
  docker exec --user www-data $DOCKERIMAGE php occ user:delete "$u" 1>/dev/null
done
docker exec --user www-data $DOCKERIMAGE php occ files:scan --all
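Called, for example, like this (domains and port are placeholders):

/usr/local/sbin/nextcloud-copy.sh nc-test.example.org 8081 nc.example.org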

My only problem then was that the installed ONLYOFFICE document server still pointed to the URL of the old Docker instance. This is stored in the database I copied, so I had to edit the database with:

docker exec next-deveclabsde_db_1 mysql nextcloud -p$MYSQL_ROOT_PASSWORD -e "update oc_appconfig set configvalue='https://next.new-domain.org/apps/documentserver_community/' where appid='onlyoffice' AND configkey='DocumentServerUrl'"

Could that be an option for cloning an isolated home NC into a DMZ, so that selected files/content can be reached from the WWW?

Guys … what you need … the Rolls-Royce … is a system with snapshots …
You need to have your operating system and data layer on a snapshot-based system.
With file systems like ZFS or Btrfs … you can do restores in seconds.
With other snapshot technologies (depending on your environment), like LVM snapshots … it takes more time (depending on the size of the data).
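For illustration, with ZFS such a pre-update safety net could look like this (pool and dataset names are assumptions):

zfs snapshot tank/nextcloud@pre-update    # take a snapshot before updating
# ... run the update and test it ...
zfs rollback tank/nextcloud@pre-update    # instant restore if something went wrong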

Be aware: to recover from a full disaster, you may need to have replicated snapshots in place on a remote site.

Myself, I’m using different layers of data protection … volume snapshots, file system snapshots, and an enterprise backup solution as the last resort.

Could you share what that looks like?
I’m building a TrueNAS SCALE system running Nextcloud and so on.
Maybe I can avoid reinventing the wheel :blush: .

Unfortunately, that does not answer how to mirror an NC into a DMZ. Any ideas :grimacing:?

TrueNAS has its own ZFS snapshot capabilities.

So just schedule your snapshots according to your requirements.

I think I don’t have enough know-how/experience with DMZs.

Maybe it’s only a matter of firewall rules … and of course with ZFS you can also run zfs replication commands, replicating from local to remote.

(zfs send / zfs receive)

Myself, I’m using zrep:

bolthole/zrep: ZREP ZFS based replication and failover script from bolthole.com (github.com)

Got ZFS on two systems? Want to have (almost) idiotproof replication and master/servant switching between them? Install zrep with a single download and you’re almost ready to go.
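In rough strokes, the zrep workflow looks like this (dataset and host names are assumptions):

zrep init tank/nextcloud remotehost spool/nextcloud   # set up replication for one dataset
zrep sync tank/nextcloud                              # push the latest changes to the remote side
zrep failover tank/nextcloud                          # swap the master role in a failover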

Hope this information is useful for you.

I would rather say I expressed myself badly.

I want to run a second NC instance in the DMZ that has the same user accounts etc. but only a predefined copy of the data (only data that is not super-critical if the NC is compromised).
It would be great if the NC in my LAN would even sync changes made on the NC in the DMZ.

The DMZ NC is the one for file sharing with external people and so on.

Ah ok … maybe I’m understanding a little bit better :slight_smile: (if not, I apologize).

You can fine-tune the sync requirements in ZFS/TrueNAS (SCALE) and zrep.

In TrueNAS/ZFS your data lives in pools (zpools).
A zpool contains either datasets or zvols (LUNs), or both.

When you’re syncing (zfs send or zrep) the whole zpool, all datasets & zvols will be replicated, too.

But you can also determine which datasets/zvols you want to replicate.
Example:

Replicate zpool tank, dataset data, snapshot snap1 to the remote zpool spool, dataset ds01:

zfs send tank/data@snap1 | zfs recv spool/ds01

Example 2:
replicate the snapshot tree as well (all the snapshots from the source pool tank) to the remote site:

zfs send -R tank/data@snap1 | zfs recv spool/ds01

For copying to remote sites:

zfs send -R tank/data@snap1 | ssh remoteHost zfs recv spool/ds01

https://docs.oracle.com/cd/E19253-01/819-5461/gbinw/index.html

And of course … I prefer the GUI … I think you could achieve the same results with the GUI setup.

We are getting closer :slight_smile: but I would like to choose in NC which files are copied to the NC in the DMZ.
I’m not sure whether I can choose which files are stored on which zpool.

Imagine I have personal bank statements and invoices for shared expenses.
Bank statements shall be stored on the LAN NC.
Invoices can be uploaded to the DMZ NC and then need to be synced to the LAN NC, and vice versa.
That might be possible, but how to deal with app settings etc.?

If you need fine control at file level, then the preferred solution could be rsync.
rsync is, we can say, the gold standard for migrations / copy operations at file level.
With rsync you can control almost everything to get a consistent copy at the destination (e.g. when files are deleted from the source, the deletion will also propagate to the destination).
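For illustration, a mirror run with deletion propagation might look like this (paths and the destination host are assumptions):

# mirror the data directory, removing files at the destination that were deleted at the source
rsync -aAX --delete /var/www/nextcloud/data/ backuphost:/srv/nc-mirror/data/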

As for your application settings … it depends on where they live. Normally they exist as *.conf files in the /etc directory (or /opt), and of course these need to be included in the migration copy (rsync, what else) as well.

but it’s not a good backup tool. :wink:

Use a backup tool, not rsync.

Look at restic.net / Duplicati / Duplicity (and additionally rclone).

Yep. Rsync is good for migrations but less so for backups. Not saying you can’t make it work well, but chances are high that others, like for example the rclone devs, already did a better job on it :wink:

@kolja
You could put your Nextcloud in a VM. That’s the easiest way to go back to a previous state, in case something goes wrong with an update. You could then take a snapshot of the VM before you do any changes to your setup, and be back to the old state within seconds, in case something goes wrong. Of course, this is not an alternative to proper backups!