Memcache.locking with APCu

'filelocking.enabled' => true,
'memcache.locking' => '\OC\Memcache\APCu',

Does it make sense to use APCu for file locking? The documentation only mentions Redis for that purpose.

Use both:
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'filelocking.enabled' => true,
'redis' => array(
  'host' => '/var/run/redis/redis.sock',
  'port' => 0,
  'timeout' => 0.0,
),
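
If you want to sanity-check that both backends are reachable before enabling them, a minimal PHP sketch like the one below can help. This is an assumption on my side: it relies on the apcu and phpredis extensions being installed and on the socket path shown above.

<?php
// Rough sanity check for the two cache backends (not part of Nextcloud itself).
if (!extension_loaded('apcu')) {
    echo "APCu extension is not loaded\n";
}

$redis = new Redis();
// phpredis treats a host starting with '/' as a unix socket.
if (!$redis->connect('/var/run/redis/redis.sock')) {
    echo "Could not reach Redis via the unix socket\n";
} else {
    $redis->ping();
    echo "Redis answered PING via the unix socket\n";
}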

Yes, that is mentioned in the documentation. But it also says:

Additional notes for Redis vs. APCu on memory caching

APCu is faster at local caching than Redis. If you have enough memory, use APCu for Memory Caching and Redis for File Locking. If you are low on memory, use Redis for both.

Why not use APCu for both? Is it slower than Redis for locking?

To be honest, if your Nextcloud server is a SOHO setup or serves fewer than 10 users at the same time, the cache option is not very important. We're talking milliseconds.

Also, it depends on your hardware: the more RAM you have, the better Redis will serve you.

Also, people think Redis is just a cache, but it is much more!

Redis is an open-source in-memory data structure store used as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, and sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries. (What a nice cut and paste, yeah!)
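
As a rough illustration of those data types from PHP, here is a sketch assuming the phpredis extension and a Redis server on localhost; the key names are made up:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->set('greeting', 'hello');               // string
$redis->hSet('user:1', 'name', 'alice');        // hash field
$redis->lPush('tasks', 'backup', 'scan');       // list
$redis->sAdd('tags', 'nextcloud', 'cache');     // set
$redis->zAdd('scores', 10, 'alice');            // sorted set (member with score)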

The advantages of Redis are mainly:

  • Storing keys and values as large as 512 MB each. Since both the key and the value of an entry can be up to 512 MB, Redis supports up to 1 GB of data for a single entry. (You need RAM of at least twice the size of what you are caching.)
  • Redis offers data replication. The slave nodes always listen to the master node, which means that when the master node is updated, slaves will automatically be updated, as well. Redis can also update slaves asynchronously.
  • Redis allows inserting huge amounts of data into its cache very easily. Sometimes it is necessary to load millions of pieces of data into the cache within a short period of time. This can be done easily using mass insertion, a feature supported by Redis (see the sketch after this list).
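
For that last point, bulk loading from PHP is usually done by pipelining, so thousands of commands travel in one batch instead of one round trip each. Below is a minimal phpredis sketch; the key names and count are only for illustration, and Redis also ships redis-cli --pipe for true mass insertion.

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Queue many SET commands locally, then send them as a single batch.
$pipe = $redis->multi(Redis::PIPELINE);
for ($i = 0; $i < 100000; $i++) {
    $pipe->set("item:$i", "value-$i");
}
$pipe->exec();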

Thanks for the explanation. I will stick with Redis.