I’ve had quite a few issues with locking for some unknown reason.
I have a hosted nextcloud server. I constantly have to get my provider to remove locks on files. Many times thousands of files were locked.
Now, my provider has disabled locks.
Is this a good idea?
I have a few users working on same documents, but not that many.
What are these locks? https://docs.nextcloud.com/server/12/admin_manual/configuration_files/files_locking_transactional.html
Do the locks stay in place forever until a server admin intervenes?
It’s all a bit confusing to me.
Transactional file locking is enabled by default for good reason: otherwise two people could “save” the same file at the same time, causing file corruption.
I quite often read about problems with locks staying in place when the traditional method via the database is used. No idea why; maybe database overload or something. By the way, the locks table should be cleaned every 30 minutes by a cron job (if system cron is enabled server-side), so no manual intervention by an admin should be necessary.
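For context, “system cron” here means the server admin has a crontab entry that runs Nextcloud’s cron.php regularly. It looks roughly like this (the install path and web server user are assumptions; adjust for your setup):

```
# /etc/crontab entry, run as the web server user (www-data here):
# executes Nextcloud background jobs, including lock cleanup, every 5 minutes
*/5 * * * * www-data php -f /var/www/nextcloud/cron.php
```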
If you read further in the article you linked, using redis-server (memcache) for file locking is the preferred, more performant method. You could ask your admin to set up Redis and make it usable by your Nextcloud. If a Unix socket is used on a single-server system, redis.conf needs to be adjusted a bit to allow access by the web server user; just read one of the guides around about this.
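For reference, the config.php fragment for Redis-based file locking over a Unix socket looks roughly like this (the socket path is an assumption and varies by distribution):

```php
// config.php: use Redis as the distributed lock backend instead of the database
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    // Unix socket path is distro-dependent; port 0 tells Nextcloud to use the socket
    'host' => '/var/run/redis/redis-server.sock',
    'port' => 0,
],
```

On the Redis side, redis.conf needs `unixsocket` set to the same path and `unixsocketperm 770` (or similar), and the web server user typically has to be added to the redis group so it can open the socket.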
I don’t understand this logic or these locks. Cleaned every 30 minutes by a cron job? That means a file lock could be removed 10 seconds after it was created, since the cron job has no knowledge of when the lock was set.
Also, why is it not done like this: you get a lock for 10 minutes, and as long as you are online and working on the file, it stays locked. Once you disappear and the server no longer sees you, the lock is released after 10 minutes. On the client side, you would be told that the lock can no longer be held without reconnecting.
As far as I understood, these locks should only be active for as long as the “saving” process of the file takes, so just something on the order of seconds. Because it is done via the database by default, it is not too efficient, and the unlocking can fail due to database overload and the like. Excuse me if the technical details are explained roughly/wrongly.
The cron job’s task is only to release those files that were not successfully unlocked. In older versions of Nextcloud this was not done, and you can find several topics here about permanently locked files, with guides on how to unlock them manually via the database. Now the cron job resolves that regularly.
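The manual unlock those older guides describe usually boils down to clearing the locks table while the server is in maintenance mode, along these lines (the occ path, database name, and the default `oc_` table prefix are assumptions; back up first):

```shell
# Put Nextcloud into maintenance mode so no new locks are taken
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# Clear all transactional file locks ("oc_" is the default table prefix)
mysql -u nextcloud -p nextcloud -e "DELETE FROM oc_file_locks WHERE 1;"

# Bring the server back online
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```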
Using Redis for file locking via memory caching enhances the performance of this task significantly and takes the load off the database. Thus these missed releases should not occur anymore, and the cron job, which just scans the database, becomes obsolete.
Hope I explained everything well enough.
A late reply for those interested. Your provider might use object storage through a REST API. This can cause file locking conflicts. You should let the object store handle the file locks.
So you mean that, if we are using object storage as our primary storage, we should disable file locking entirely? Are there any docs mentioning this?
```php
'filelocking.enabled' => false,
```
I’m not sure about documentation, since I’m not a user of object storage myself, but from what I’ve researched, my understanding is that since object storage doesn’t use paths (it uses a database to keep track of its “objects”), it conflicts with file locking in Nextcloud, since NC does allow directory trees. I’ve also read that it’s better for performance to let locking be handled on the storage side where possible, which should be the case with object storage. Your storage is more aware of which files need access at what time than NC, since NC is only one application.
I found this an interesting read:
This person states that yes, you should set it to false!