Master/master replicated setup

I’ve been mulling the idea of replicating my home server to a dedicated server in a datacentre, for when I’m away (and the home server is shut down) or I happen to lose the NC (Nextcloud) server for whatever reason.

I’ve seen some examples using GlusterFS or DRBD, and a master/slave DB relationship, but I’m looking at the feasibility of a slightly different approach:

As it stands, my home system is set up as shown above (minus the second webserver). Data is stored on the container host and passed through to the container running NC. The database is also in its own container, and the NC server is accessible via a reverse proxy.

I’ve been using Resilio (formerly BTSync) since its release and find it a good, solid sync engine. Using it to keep the data folders in sync across the two systems via bidirectional sync looks like a good way to go, as it’s real-time and immediate. I’m not considering external storage right now, as I think btsync is fast enough to avoid database inconsistencies.
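For reference, a shared-folder entry in Resilio’s `sync.conf` looks roughly like this — key names are from memory of the sample config shipped with btsync, so double-check against your version; the secret and the data path are placeholders:

```json
{
  "shared_folders": [
    {
      "secret": "REPLACE_WITH_FOLDER_SECRET",
      "dir": "/srv/nextcloud/data",
      "use_relay_server": true,
      "use_tracker": true,
      "search_lan": false,
      "use_sync_trash": true,
      "overwrite_changes": false
    }
  ]
}
```

The same entry (with the same secret) would go on both the home server and the datacentre box to get the bidirectional sync.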

Now, could someone tell me why this wouldn’t work please? If Gluster/etc is a hard requirement I’d like to know why :slight_smile:

@tflidd @LukasReschke @nickvergessen @MorrisJobke @bjoern


Bumping this back up as I’d like to get some opinions :slight_smile:

I have no experience with such setups, but I’d have doubts about btsync: how does it handle conflicts, given that it doesn’t interact with the database? What happens if the connection is interrupted and you have changes on both systems? Does it recover automatically?

It should pick the newest file based on metadata, so I’d assume it’d be OK — though there’d be plenty of testing to undertake before going live. The biggest issue I see with Resilio is that they haven’t opened up their source. It’s otherwise been a good piece of software over the years, and it isn’t the only option, naturally.
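To illustrate, the newest-file rule I’m assuming boils down to something like this — a toy sketch of last-writer-wins, not Resilio’s actual logic:

```python
import os

def newest_wins(path_a: str, path_b: str) -> str:
    """Toy last-writer-wins rule: given two copies of the same file,
    return the path whose metadata shows the later modification time.
    This is my assumption about the sync engine, not Resilio's code."""
    return path_a if os.path.getmtime(path_a) >= os.path.getmtime(path_b) else path_b
```

The catch with any rule like this is that the older of two concurrent edits is silently discarded, which is exactly why the testing matters.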

Some automated testing would be great, just to get some cases tested.
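Agreed. As a starting point, even the “changes on both sides while offline” case can be automated with a toy harness — here simulating two replica folders locally with a newest-mtime resolver, rather than driving Resilio itself:

```python
import os
import shutil
import tempfile
import time

def resolve_newest(dir_a: str, dir_b: str, name: str) -> None:
    """Toy resolver: copy the newer copy of `name` over the older one,
    so both replica directories converge (last-writer-wins)."""
    a = os.path.join(dir_a, name)
    b = os.path.join(dir_b, name)
    if os.path.getmtime(a) >= os.path.getmtime(b):
        shutil.copy2(a, b)
    else:
        shutil.copy2(b, a)

# Simulate a conflict: the same file changed on both sides while disconnected.
with tempfile.TemporaryDirectory() as home, tempfile.TemporaryDirectory() as dc:
    with open(os.path.join(home, "doc.txt"), "w") as f:
        f.write("edited at home")
    time.sleep(0.05)  # ensure a strictly newer mtime on the datacentre copy
    with open(os.path.join(dc, "doc.txt"), "w") as f:
        f.write("edited in the datacentre")

    resolve_newest(home, dc, "doc.txt")

    # Both replicas should now hold the newer (datacentre) version.
    assert open(os.path.join(home, "doc.txt")).read() == "edited in the datacentre"
    assert open(os.path.join(dc, "doc.txt")).read() == "edited in the datacentre"
```

A real test run would point at the two synced folders and wait for Resilio to converge instead of calling the resolver directly, but the assertions would stay the same.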
