Looking at the benchmarks, which show impressive improvements, I'm wondering: is it possible to replace Redis with it? If yes, how do I do it the right way?
Never heard of this before, but I thought, why not test it on one of my test instances.
I only did a simple setup and didn’t do extensive testing because I don’t plan to use this on my production instance just yet. However, it seems to work as a drop-in replacement. Or, at least, Nextcloud did not return an internal server error, and notify_push, which depends on the Redis service, seems to work as well.
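In case it helps, the "simple setup" was just a matter of pointing Nextcloud's existing Redis settings at Pogocache's Unix socket. Roughly what my test config.php looked like (the socket path is from my setup; adjust it to wherever your Pogocache socket lives):

// config.php excerpt: reuse Nextcloud's Redis backend, pointed at the Pogocache socket.
// The socket path is an assumption from my test setup, not official guidance.
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => '/var/run/pogo/pogo-server.sock',
    'port' => 0,
],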
Of course, you should probably dig deeper into how to run Pogocache securely (for example, with a Unix socket connection and/or password protection) before putting it into production. It's also a good idea to do some more extensive testing first, to avoid any unforeseen side effects with a 1.0 product for which there's practically no real-world experience yet of how it behaves together with Nextcloud in a production environment.
Hi! Many thanks for the prompt reply and for mentioning the security issues. I will test that as well, though I generally prefer caching via Unix sockets rather than over the network.
Obviously you shouldn’t chmod 777 it on a production server, but for the sake of a quick test, I didn’t want to mess around with users/groups and permissions.
Oh, nice. Meanwhile, have you found out how to restrict it to a specific user with a password? As far as I understand, that should be the www-data user, and the password would be stored in this config file?
But obviously this still isn’t ideal, because every time the container restarts you have to reapply the permissions to /var/run/pogo/pogo-server.sock.
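For reference, instead of chmod 777, reapplying something like this after each restart is already a bit safer (www-data as the web server user/group is an assumption based on a standard Debian/Ubuntu setup):

# Hand the socket to the web server user instead of opening it up to everyone
chown www-data:www-data /var/run/pogo/pogo-server.sock
chmod 660 /var/run/pogo/pogo-server.sock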
That said, I should mention that I don’t run Nextcloud via Docker, and I’m not a Docker expert either. So I’m not sure what other (and better) ways there might be to expose the socket to a non-root user on the host, or in other containers, and whether that’s even possible without building a custom image. But this is probably something you could ask the Pogocache developer directly… https://github.com/tidwall/pogocache/issues
Many thanks for the nice description. I've tested it myself; however, due to the lack of monitoring tools in Pogocache, I cannot really compare it with Redis. As far as I understand, the project is still in an active phase of development and not quite ready for production.
During the test I found that it is possible to use a password to protect the caching server. That is not covered well in the Nextcloud manuals. So, a question for you: do you protect your Redis installations? If yes, how?
You can set a password in the redis.conf file. On Debian/Ubuntu based systems it’s located in /etc/redis/:
Change the line…
# requirepass foobared
to
requirepass yoursupersecretpassword
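After restarting Redis (systemctl restart redis-server on Debian/Ubuntu), Nextcloud needs the same password in config.php. A minimal sketch, assuming Redis listens on localhost with the default port; adjust host/port (or switch to a Unix socket path) to match your setup:

// config.php excerpt: the password must match the requirepass line in redis.conf.
// Host and port here are the Redis defaults and may differ on your system.
'redis' => [
    'host' => 'localhost',
    'port' => 6379,
    'password' => 'yoursupersecretpassword',
],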
Since Redis 6 there is also an ACL system where you can set up different users with different permissions, but that would be overkill for this use case, where Redis is running on the same host as Nextcloud, imho.
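For completeness, such an ACL entry in redis.conf would look roughly like this (the user name, key pattern, and command categories are purely illustrative):

# Illustrative ACL user: authenticated, full key access, everything except admin commands
user nextcloud on >yoursupersecretpassword ~* &* +@all -@admin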
Pogocache implements basic Redis commands like GET, SET, and DEL, and it works for simple caching. However, it does not support advanced Redis features that Nextcloud relies on, such as Lua scripting (EVAL/EVALSHA), file locking, and certain atomic operations (e.g., UNLINK, DECRBY).
As a result, Pogocache cannot fully replace a production Redis server for Nextcloud, because critical features like file locking and distributed mutexes will fail. It can still be useful as a supplementary cache layer (for previews, sessions, or temporary data), but the main Redis instance must remain for core Nextcloud functionality.
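If you want to verify this yourself, you can probe the server with redis-cli. A quick sketch; host and port are assumptions, and the exact set of unsupported commands may well change as Pogocache develops:

# Probe a Redis-compatible server for features Nextcloud relies on.
# Host and port are assumptions; point redis-cli at your Pogocache instance.
redis-cli -h localhost -p 6379 SET probe 1        # basic caching, should return OK
redis-cli -h localhost -p 6379 EVAL "return 1" 0  # Lua scripting, errors if unsupported
redis-cli -h localhost -p 6379 UNLINK probe       # async delete, errors if unsupported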
My conclusion from this: forgetting about it for now, and perhaps looking at it again at some point in the future.
Yeah, I’m not going down that rabbit hole, and I’ll just keep using APCu/Redis as recommended in the documentation. Still, looks like an interesting project, and who knows what the future holds.
Also, Valkey (and probably Redict as well) can be used as a drop-in replacement. Personally, I never switched and just kept using the Redis packages from the Debian 12 and Ubuntu 24.04 repos, which, as far as I know, still ship the versions from before they pulled that stunt.
Yes, they were probably afraid that others would eat their lunch, just as MariaDB did with MySQL.