Mirroring Nextcloud

Greetings,
apologies if I am writing to the wrong department. Please suggest where to post if needed.
I would like to create a second Nextcloud server (I am currently running it on a Raspberry Pi) and have an exact copy of it. A clone.
My goal would be that:
A) if Nextcloud 1 is corrupted or not functioning for whatever reason, all I have to do is redirect the internet (web) requests to Nextcloud 2.
B) if A is not possible, I would like to be able to rebuild Nextcloud A with an option that can pull all data and configuration from Nextcloud B (which, of course, was the same as Nextcloud A before the hardware failure).

Is it possible?
Would that be a feature request?
Again, my apologies if I am writing to the wrong place. I would appreciate it if you could refer me to the correct area if this is not it.
Kattivius

I agree that this is super important! It does not exist for us basic users at this time. :frowning:
Your current best bet is to mirror your WebDAV data directory to another machine with a tool like rsync or Rclone; Borg Backup has been scripted by some to help automate this process.
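For example, a minimal sketch of that approach, assuming a default installation under /var/www/nextcloud and a second machine reachable as backup-host (both placeholders), and keeping in mind that a usable clone also needs the database and config.php, not just the files:

```bash
# Put the instance into maintenance mode so files and database stay consistent
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# Mirror the data directory to the second machine with rsync over SSH
rsync -aHv --delete /var/www/nextcloud/data/ backup-host:/var/www/nextcloud/data/

# Or take an incremental, deduplicated archive with Borg instead
# (the repository must have been created once with `borg init`)
borg create --stats backup-host:/backups/nextcloud::'{now:%Y-%m-%d}' /var/www/nextcloud/data

# Dump the database too; the files alone are not enough to rebuild a server
# (database credentials omitted here)
mysqldump --single-transaction nextcloud > nextcloud-db.sql

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```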

  1. Request to optionally mirror federated data between instances, since a remotely federated share will disappear from your instance if it goes offline. There needs to be an option for these shares to be mirrored in the web UI, since they will already be downloaded on any clients federating them…

  2. IPFS, the InterPlanetary File System, as external storage would allow Nextcloud users to mirror their data across multiple users, similar to seeding torrent files. See the external storage request here; there is a JavaScript implementation that could be adapted. It has been mentioned previously for GSoC 2017 and here, in a discussion of P2P file systems.

  3. Request to support the Zot protocol, which is developed in PHP for Hubzilla and allows users to mirror and migrate their accounts across any number of trusted servers. This process is called Nomadic Identity and would completely reshape our use of Nextcloud in the best way possible! A dev from that project has even discussed the possibility on this forum before!
    HubZilla, an interesting selfhosted social network companion to your Nextcloud


It is also worth noting that OpenCloudMesh (OCM) was announced by devs of the Nextcloud project in 2016 and is documented here. It does not specifically address mirroring data as discussed in the previous post, but it does relate to finding a common federation protocol for the various enterprise file-sharing projects like Nextcloud, Seafile, FileRun, etc.

The project's GitHub repository is located here, but it has not been updated since mid-2017. However, major federation improvements were added to the server in late 2018 via this pull request.

Back in Nextcloud 12 it was announced that the Global Scale architecture would be deployed to help scale up to very large user bases, though it should be deployable on a minimum of two machines. It is designed for enterprise use, but it would address the kind of questions you have about balancing between multiple servers. Not all of it has been implemented or documented, so it is still more of a conceptual white paper.

  1. The Global Site Selector app is available in the app store for setting up a master/slave relationship between multiple servers (a rough configuration sketch follows after the quoted descriptions below).
  2. The Lookup Server is being developed as well, but I think it needs additional documentation.
  3. The Balancer part of Global Scale does not yet exist, or at least is not yet being developed in Nextcloud's public GitHub repositories.

Taken from this old forum post, these three things accomplish the following:

Global Site Selector

The Global Site Selector (GSS) acts as a central instance that the user contacts during the first login, whether via the web, WebDAV or REST. The GSS authenticates the user via the central user management, for example LDAP, then looks up the node where the user is located in the Lookup Server and redirects the user to the right hostname. Subsequent calls during the same session go directly from the client to the node.

Lookup Server

The Lookup Server stores the physical location of a user. It can be queried with a valid user id to fetch the federated sharing id of that user. In some situations it is important to limit queries to a certain IP space to avoid data leaks. It also keeps track of old federated sharing IDs. In addition, the Lookup Server stores data about the users, such as the required QoS metrics: storage/quota settings, speed class, reliability class and so on.

Balancer

The Balancer runs on a dedicated machine, monitoring the various nodes and their storage, CPU, RAM and network utilization. It can mark nodes as online or offline and initiate the migration of user accounts to different nodes based on data in the Lookup Server, such as business or legal requirements, QoS settings or user location. If, for example, a user moved from the US to Europe, the Balancer would initiate a migration of their data to an EU data center to improve the quality of service.

You can learn more details on our webpage about Global Scale.
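To give a rough idea of how the Global Site Selector app wires up that master/slave relationship: it is driven by a handful of config.php values set on the master and on each node. The key names below are written from memory of the app's README, so treat them as assumptions to verify there; cloud.example.com and the shared secret are placeholders:

```bash
# On the master instance (key names are assumptions -- check the globalsiteselector app README)
sudo -u www-data php occ config:system:set gss.mode --value="master"
sudo -u www-data php occ config:system:set gss.jwt.key --value="shared-random-secret"

# On each node the master can redirect users to
sudo -u www-data php occ config:system:set gss.mode --value="slave"
sudo -u www-data php occ config:system:set gss.master.url --value="https://cloud.example.com"
sudo -u www-data php occ config:system:set gss.jwt.key --value="shared-random-secret"
```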

@Jammer It would be awesome to allow the same data to exist in multiple locations on the server. See this request for optionally storing federated directories across servers rather than just on clients.

Yeah, federated mirroring would be neat.

However, for the originally quoted idea one could use mergerfs: https://github.com/trapexit/mergerfs
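In case it is unfamiliar: mergerfs is a FUSE union filesystem that pools several existing directories into one mount point; it does not replicate anything by itself. A minimal sketch, with the disk paths and mount point as placeholders:

```bash
# Install on Debian/Raspberry Pi OS (package name may differ on other distributions)
sudo apt install mergerfs

# Pool two existing directories into a single mount point
sudo mkdir -p /mnt/pool
sudo mergerfs -o defaults,allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool
```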


Hi,

I am doing something like that for my Nextcloud…

FreeNAS is the backend holding all the data.
My FreeNAS No1 replicates its content to FreeNAS No2 every 15 minutes, sending the ZFS snapshots over VPN.
Because FreeNAS No2 is 400 km away, it will survive any physical damage FreeNAS No1 may suffer.
FreeNAS No3 is on the same site as FreeNAS No1 and is kept powered down most of the time. Twice a month, I power it up and it syncs with FreeNAS No1 right away. Once the sync is done, I turn it back off. That way, no logical incident can affect FreeNAS No3, because it is offline most of the time.
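Under the hood that replication is just incremental ZFS send/receive; FreeNAS schedules it from its UI, but done by hand it looks roughly like this (the pool, dataset, snapshot names and remote host are placeholders):

```bash
# Take a new snapshot of the dataset holding the Nextcloud data
zfs snapshot tank/nextcloud@2024-05-01_1200

# Send only the changes since the previous snapshot to the second box over SSH/VPN
zfs send -i tank/nextcloud@2024-05-01_1145 tank/nextcloud@2024-05-01_1200 \
    | ssh freenas2 zfs receive -F tank/nextcloud
```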

Thanks to that mechanism, all the data is very safe.

In a Docker host outside FreeNAS, I mount a few NFS shares. One is the data used by the Nextcloud container. Another is mounted from a standalone VM I use for backups. That standalone VM has read-only access to all Docker volumes mounted by the Nextcloud container and does its backups that way. It also has a MySQL client installed and makes a complete SQL dump of the database. These files are then encrypted and saved back to FreeNAS to be replicated to all three servers.
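Roughly, the backup step on that VM boils down to something like this; the host names, paths and cipher choice below are simplified placeholders rather than my exact setup:

```bash
# Dump the Nextcloud database over the network (password prompted by -p)
mysqldump -h nextcloud-db -u nextcloud -p --single-transaction nextcloud \
    > /tmp/nextcloud-db.sql

# Encrypt the dump and a tarball of the read-only Docker volumes, then drop both
# on the NFS share that FreeNAS replicates to the other two boxes
gpg --symmetric --cipher-algo AES256 \
    -o /mnt/freenas-backups/nextcloud-db.sql.gpg /tmp/nextcloud-db.sql
tar czf - -C /mnt/nextcloud-volumes . \
    | gpg --symmetric --cipher-algo AES256 \
        -o /mnt/freenas-backups/nextcloud-volumes.tar.gz.gpg
```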

Thanks to that, every FreeNAS contains 100% of the server-side encrypted data, 100% of the manually encrypted Nextcloud Docker volumes and 100% of the manually encrypted SQL database. Because everything is encrypted, there is no risk if a FreeNAS gets compromised or stolen.

To recover, I need a Docker host with 3 containers:
–The first is used to extract and decrypt the backups. I must enter the backup password manually for that.
–The second is an SQL database into which I restore the complete dump.
–The third is the Nextcloud container, in which I mount the freshly restored and decrypted volumes.

And bingo: a brand new Nextcloud server is running with data up to date.
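Condensed into commands, that restore flow looks roughly like this; the image names, network, paths and passwords are simplified stand-ins for my real setup:

```bash
# 1. Decrypt the volume archive and the SQL dump (backup password entered interactively)
gpg --decrypt /mnt/backups/nextcloud-volumes.tar.gz.gpg | tar xzf - -C /srv/restore
gpg --decrypt /mnt/backups/nextcloud-db.sql.gpg > /srv/restore/nextcloud-db.sql

# 2. Start a fresh database container and load the dump once it is up
docker network create nc-restore
docker run -d --name nc-db --network nc-restore \
    -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=nextcloud mariadb:10.11
sleep 30   # give MariaDB time to initialize before restoring
docker exec -i nc-db mysql -uroot -pchangeme nextcloud < /srv/restore/nextcloud-db.sql

# 3. Start Nextcloud with the restored application volume mounted in
#    (the exact path depends on how the volumes were archived)
docker run -d --name nextcloud --network nc-restore \
    -v /srv/restore/html:/var/www/html -p 8080:80 nextcloud:apache
```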

Every year, I do a complete restore test to ensure everything recovers correctly and that my cloned Nextcloud is functional. Should my annual test fail, I fix the issue and re-test within 6 months. I also have everything written down in a document listing which commands I should run, in which order, …

Thanks to that, my private cloud is as strong and safe as possible. The service may go down for a moment, but the data will always be recoverable.
