More of a general question whether a remote Redis setup on a VPS even makes sense…
As background: I have a shared web-host on which I run a small Nextcloud instance, and while it is at times a bit slow, overall I like the reliability and automatic backups that my hoster provides. I also have a VPS in the same data center that I use for services (like an XMPP server, Etherpad etc.) that I deem less critical.
I used to have the Nextcloud PostgreSQL database running remotely on that VPS as well, so I know the internal connection between the shared web-host and the VPS is quite fast. But ultimately I decided to move the database back to the one provided by the shared web-host, as I prefer to keep the system simple and not depend on too many critical parts that I have to maintain manually.
But now to the actual question:
I also have a Redis server running on that VPS, and I was wondering if it could be used to speed up multi-user access to my Nextcloud on the shared web-host. It seems possible to configure an external IP for it, but for a system that is meant for fast caching, it might be unwise to do so?
Also: if I configure Nextcloud with Redis, will it fail gracefully if the external VPS is shut down or not accessible?
Thanks for any input on this matter.
In the deployment recommendations they also use external Redis servers:
So if the connection is fast, it probably works. I can’t say whether it will actually be faster; to a certain extent it depends on how your Nextcloud is used and other things. If it is not too complicated, I’d just give it a try.
Thanks for the answer. I’ll probably give it a try.
But what about it failing gracefully? Will Nextcloud just show an error in the admin page and continue working without Redis if the VPS is shut down?
I don’t know, perhaps it will keep working after a certain timeout from Redis.
After reading the potential security issues related to this ( https://redis.io/topics/security ), I decided to postpone/cancel it.
I can probably firewall off any access from outside the data center, but allowing access from inside only is still a large open door.
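For reference, the hardening knobs on the Redis side would look roughly like this (a sketch only; the bind address and password are placeholders, and this assumes a Redis version with `protected-mode`, i.e. 3.2 or later):

```conf
# redis.conf (sketch) — limit which interfaces Redis listens on
bind 127.0.0.1 10.0.0.5    # 10.0.0.5 = placeholder for the VPS-internal address
protected-mode yes          # refuse outside connections unless auth is configured
requirepass use-a-long-random-password-here
port 6379
```

An OS-level firewall rule limiting port 6379 to the web-host’s outgoing IP would still be advisable on top of this, which is exactly the part that shared-host routing can make tricky.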
In general, it is probably the easiest to run a Nextcloud on a single server/vserver. There you can easily add all required features and backups are also easy.
I talked a little about Redis in my multi-node cluster. Ultimately you’ll need to IP-restrict access to your Redis server to keep it open to your server(s) while closed off to the rest of the world.
Yeah, that was my plan, but due to the specific setup of having my Nextcloud on a shared host, I realized that they are doing some strange routing inside the data center where outgoing IPs are different from incoming ones and likely shared. Thus it seemed a bit risky to open my VPS to that likely shared IP.
However, does Nextcloud support the basic password feature of Redis?
I do believe it does, yes.
I see, that sounds good. I was screening the respective configuration documentation in these two places:
And I am wondering about two things:
- What exactly does the “timeout” parameter do? Am I too optimistic in assuming it would allow the Redis cache to fail gracefully after the specified time?
- How does “memcache.distributed” fit into the picture? Currently I have ‘memcache.local’ => ‘\OC\Memcache\ArrayCache’, configured on my local shared OVH host (which probably does little), so would it make sense (especially in the scenario of Redis running on a separate server) to add Redis as a distributed cache?
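For what it’s worth, the settings from those two docs would combine roughly like this (a sketch; host, password, and timeout values are placeholders, and as far as I can tell “timeout” is the connection timeout in seconds, not a graceful-failure switch):

```php
// config/config.php (sketch) — Redis on the remote VPS, placeholders throughout
'memcache.local' => '\OC\Memcache\ArrayCache',   // per-request cache on the shared host
'memcache.distributed' => '\OC\Memcache\Redis',  // cache shared across requests/users
'memcache.locking' => '\OC\Memcache\Redis',      // transactional file locking
'redis' => [
    'host' => 'vps.example.internal',            // placeholder hostname
    'port' => 6379,
    'password' => 'use-a-long-random-password-here',
    'timeout' => 1.5,                            // connect timeout in seconds
],
```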
I’ll ping @MorrisJobke for the finer details, perhaps @rullzer too.
When I ran distributed Redis, I used HAProxy, I believe.
Nextcloud will not fail gracefully. If you tell it to use Redis, it will fail when Redis is not available, especially since you should use it only for the distributed and locking caches. If it were to fall back to some other caching mechanism, you might end up in an inconsistent state.
However, since you mention you have a small setup: having a local APCu cache and just using the database for locking should be enough.
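That suggestion would translate to something like this minimal sketch (APCu for the local cache; if no ‘memcache.locking’ entry is set, Nextcloud falls back to database-based file locking):

```php
// config/config.php (sketch) — minimal single-host setup
'memcache.local' => '\OC\Memcache\APCu',
// no 'memcache.distributed' or 'memcache.locking' entries:
// Nextcloud then uses the database for file locking
```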