Questions about Multihost Nextcloud Deployment

Hey Folks,

I’ll try to keep this short and sweet.

I’m testing a deployment of Nextcloud with the following configuration:

A (Proxy Server - Round Robin Config) --------- B (Two Ubuntu 16.04 NGINX Webservers) ---------- C (One Ubuntu 16.04 MariaDB Server)

Everything is up and running. Before I start putting this through its paces, I want to try and understand something.

Let’s say I log in to Nextcloud and the proxy passes me to WebServer1. As an admin, I decide to install an application. This obviously works perfectly fine.

However, the application has been installed only on the Nextcloud instance on WebServer1. How would I ensure that changes like this are made across both of my webservers? The deployment I’m currently testing will probably have about four webservers by the time it hits the racks. I can’t have a situation where I have to update each and every webserver manually because someone installs an app in their account.

I was thinking that regular rsyncs would do the trick, but I’m a little unsure whether that would be stable.

Does anyone have any thoughts on this? Has anyone deployed a system like this before?

Thanks in advance.


@JasonBayton ran some tests with multiple servers and shared quite a lot of information that could be interesting for you: Help me test this 3 node cluster

Thanks for that. This will definitely help. I find it interesting that the documentation recommends multi-server configurations for large environments but gives zero guidance on how to actually set them up.

I’ll try Syncthing and see if it works for my needs. If it does, I’ll document it and add it here for others to follow, if you guys are interested.


Enterprise support will give you more guidance. For the free community, it’s up to the users to share their experience.


Have you seen this?

Hey Guys,

Thanks for all the help. I managed to get what I wanted working by weighting one of my web app servers heavily and then using an rsync script to pull the Nextcloud folder from the weighted server every minute.

This will allow me to deploy as many web app servers as I need going forward. I’ve tested this with 25 users, and so far there are no reported issues.

Next step now is to configure a backup DB server and integrate LDAP.

When I’m finished completely I’ll do a write up and post it here for anyone else who may be interested.


I would use glusterfs for that :wink:

Please go on…

I always thought of GlusterFS as a distributed file storage system. Are you suggesting that it can be used as a sort of “File Sync”?

No, he meant that you could use GlusterFS to have all NC nodes access the same storage, even though they’re located anywhere in the world… :wink: The issue with that is file locking… NFS would also be an option for the data storage, as long as we’re talking local networks.
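As a rough sketch of the NFS option — the storage hostname, export path, and mount options below are all assumptions for illustration:

```shell
# On the storage server, export the data directory via /etc/exports:
#   /srv/nextcloud-data  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the export table:
exportfs -ra

# On each web node, mount the shared data directory:
mount -t nfs storage.example.com:/srv/nextcloud-data /var/www/nextcloud/data
```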

GlusterFS can operate in many modes. One of the most useful is a replicated volume across multiple hosts, and yes, that means it can synchronize files between those hosts. The advantage of that solution is that it provides HA, unlike plain NFS. File locking is also supported.
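A minimal sketch of that replicated-volume mode, assuming two web nodes I’ve named web1/web2 and brick paths of my own invention:

```shell
# Run on both nodes: install GlusterFS and start the daemon
apt-get install -y glusterfs-server
systemctl enable --now glusterd

# Run on web1 only: form the trusted pool and create a 2-way replica
gluster peer probe web2.example.com
gluster volume create ncweb replica 2 \
    web1.example.com:/bricks/ncweb \
    web2.example.com:/bricks/ncweb
gluster volume start ncweb

# Run on each node: mount the replicated volume over the Nextcloud dir
mount -t glusterfs localhost:/ncweb /var/www/nextcloud
```

Note that a plain two-way replica can be split-brain-prone; an arbiter or a third replica is commonly recommended for production.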

Ehem… our NFS storage is HA… no issue with that (ZFS-HA plugin from

Interesting, few questions arise:

  1. Is it me, or is that domain unregistered and for sale?
  2. What kind of HA is it? Pacemaker + VIP + shared drive, or DRBD, or something else?
  3. You mean ZFS on Linux or Solaris?
  4. Is ZFS still being developed, since Oracle abandoned Solaris?

Ahh… my bad, it’s actually

  2. RSF-1 is pretty much like Pacemaker, VIP and a shared drive; RSF-1 operates on zpools, but can of course use any other OS service as its base
  3. I am running ZFS mostly on Illumos based distros like OpenIndiana or omniOS
  4. OpenZFS is a very active community that still pushes ZFS onwards - it’s the source from which all *nix-based OSes draw their ZFS. Oracle is also still developing its own branch of ZFS

Compared to Pacemaker/Corosync/DRBD, I found RSF-1 easier to set up and reasonably priced, and the guys at HA are very forthcoming and helpful if you ever encounter any issue with your setup. This whole setup has been driving our OracleVM cluster with 12 VM servers and 150+ guests for two years, with no issues.