File is locked - how to unlock

Thank you for this useful post!

It's now nearly 6 years later, yet here we are: the bug still persists, which is ridiculous… Just give us a GUI option for admins to unlock the files.


If it helps, I "solved" this problem as described here: I can't rename files... (Não consigo renomear arquivos) Help me please!

To the point!
Thank you.

This advice is very outdated and should probably just be deleted. I caused this issue by exiting a file scan with ^C, and was then able to fix it quite simply by running occ maintenance:repair.

A post was split to a new topic: File is locked error in hosted environments

I have had this file lock issue intermittently for some 6 years now. I have just been logging in as root and removing the folder or file from the Nextcloud data directory; it then disappears from the web interface/database. I'm not sure whether this harms the database?

I started out testing Nextcloud on a LAMP stack in 2017, version 11 or 12?

It happens a lot less now, but it happened a lot on the older hardware I was using.

Nextcloud 12
Pentium 4 2.80 GHz, released in 2002
512 MB ram
160 GB IDE; my friend still has the HDD as a backup now, because I did a rescue and upgraded to NC 24 after the power supply failed last year.

Pentium 4 3.0 GHz, released in 2003
1024 MB ram
1 TB SATA

Raspberry Pi 3B
Running on 2 x 4 TB SATA drives with an external powered USB-to-SATA dock. I tested daily cron backups by simply copying the data folder to another drive.

Currently using a few i5-4670 CPU @ 3.40GHz, released in 2013
I recycle old computers, and the rubbish tip has had good finds lately.

The locked-files problem has several tracked issues (both open ones and ones closed for inactivity) and should definitely be fixed soon. It occurs on my Nextcloud 24.0.9, especially on image files (.jpg), but only on about 10 files out of 50k. I have got them unlocked again now and will keep an eye on it during the next sync of a larger number of files.
It may be related to the following issue, if uploads take longer than an hour (3600 seconds): OCA\DAV\Connector\Sabre\Exception\FileLocked - Redis race condition · Issue #9001 · nextcloud/server · GitHub


I thought it would be easier to use the docker compose script on the Nextcloud Docker Hub page (using the base version, apache).
However, that landed me right in this pitfall.

I put in 12,000 files, about 22 GB; now lots of files return 423 Locked, and the Windows client has been retrying the sync in a loop for two days without result.

Now it seems I would have to go into the Docker image, edit config files, edit the database, etc., which is rather complicated for a fresh start.

If this error is very likely to happen, maybe the Docker Hub page should mention it. For example: if you are going to sync a large number of files, consider using memcache/Redis for transactional file locking. Or maybe the base Docker image should provide memcache or Redis, either within the Nextcloud image itself as an option for the user, or in the compose script as a separate container.
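As a sketch of that suggestion: the Compose file could run Redis as a separate container next to Nextcloud. This relies on the REDIS_HOST environment variable documented for the official nextcloud image; the service names here are assumptions, so adjust them to your setup:

```yaml
services:
  app:
    image: nextcloud:apache
    environment:
      # The official image configures Redis caching/locking when this is set
      - REDIS_HOST=redis
    depends_on:
      - redis

  redis:
    image: redis:alpine
```

With this in place, the image wires up Redis on first start, so transactional file locking no longer falls back to the database.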

I have zero idea how this happened, but this comment solved it. It seems like it should be pinned, and amended slightly for Docker users:

docker exec -ti -u www-data <containername> php /var/www/html/occ maintenance:repair

Apparently the occ tool is invaluable for CLI work, and has many functions I did not know about. (Note that occ must be run as the user that owns config/config.php, hence the -u www-data above.)

EDIT: Also, if you still need to delete from the oc_file_locks table, a few things to note:

  • One, DELETE FROM <dbname>.<dbtableprefix>file_locks WHERE 1; is the correct format. The one listed in the top comment is outright wrong for default configurations and for modified configurations that do not match that user's configuration. It's also missing the semicolon, but most users will catch that.
  • dbname and dbtableprefix come from config.php; on my system dbtableprefix defaults to oc_ and dbname was ncloud. So, for me, DELETE FROM ncloud.oc_file_locks WHERE 1; was the command to use.
  • This is also relevant when using phpMyAdmin etc.; the default naming does not always match these claims.

Docker users: MySQL CLI

docker exec -ti <dbcontainername> mysql -u <dbuser> -p<dbpassword>
  • dbcontainername is your database container's name; dbuser and dbpassword come from config.php. Note there is no space after -p, otherwise mysql treats the next argument as a database name and prompts for the password.

Docker users: MySQL command to delete non-interactively

docker exec -ti <dbcontainername> mysql -u <dbuser> -p<dbpassword> -e "DELETE FROM <dbname>.<dbtableprefix>file_locks WHERE 1;"

What (seems to have) worked for me:

  • Disable file locking as mentioned above
  • sync files from your client
  • Re-enable file locking

I don't know what caused the problem in the first place, but my problems are gone for now.

Installed Redis, problem persists. Unable to delete the Google Drive folder, even with the Migration addon disabled. Nextcloud hasn't fixed basic syncing since 2016.


Wow… this post popped up in unread, and I'm currently battling file lock errors! I've got better things to be doing on a Sunday than reading lots of posts about what should be a pretty basic function.

I get these errors every now and then. What are the potential consequences of disabling file locking in the config?

I've found this documentation about Redis (Configuring memory caching — Nextcloud 13 Administration Manual), but it's not clear how I can add the PHP module when I use a Docker image (packaged application on TrueNAS).
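For what it's worth, once a Redis server and the PHP Redis extension are available inside the image, pointing Nextcloud at it for transactional file locking is a config.php change along these lines (the hostname is an assumption; adjust it to your setup):

```php
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis', // or a UNIX socket path such as /var/run/redis/redis.sock
  'port' => 6379,    // use 0 when 'host' is a socket path
],
```

After editing config.php, restart Nextcloud so the new locking backend takes effect.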

EDIT: FWIW, if you have trouble logging in to the database (like I did), the following worked for me instead:
in config.php, add a line below 'maintenance' => false, containing the following:

'filelocking.enabled' => false,

Then restart Nextcloud, trigger a sync from the client. Then it works.
Then, like the coward I am, I reverted it by removing the option again and restarting Nextcloud :slight_smile: Probably safer.

@fa2k I have done as you say, and it solved the problem with Joplin.

But when I remove 'filelocking.enabled' => false, the problem comes back…

What can I do? The problem has existed for several years now…

@neccloud You can make sure to sync all clients while locking is disabled, but other than that, I don't know. Maybe move it all out of Nextcloud (sync) and back in. It seems you may still get locking errors on new files created by Joplin, and then you just have to hope it will be fixed.

Hey @jonathanmmm
Thanks for the answer. It really worked! Just want to add some points.

Go to the Nextcloud folder (e.g. /var/www/nextcloud) where **occ** exists.
1. sudo -u www-data php occ maintenance:mode --on
2. sudo -u www-data redis-cli --askpass -s /var/run/redis/redis.sock flushall
The password can be found in nextcloud_folder/config/config.php
3. sudo -u www-data php occ maintenance:mode --off
4. sudo -u www-data php occ files:scan --all
This scans the files of all users
5. sudo -u www-data php occ files:scan-app-data
This scans the appdata directory (previews, avatars, and other app data)

Thanks for clarifying that. That was helpful.

You have given me the way to fix it. But on Nextcloud 27 the database table names have changed.

On Nextcloud 27:

  1. DELETE all the rows from oc_files_lock

or directly from the SSH console:
DELETE FROM oc_files_lock;

will solve the problem. I have also flushed Redis with the commands below.
Note: I am using a UNIX socket, so be careful with the syntax of how you connect to your Redis.

ssh commands
# connect to redis
redis-cli -s /var/run/redis/redis-server.sock
# flush redis
flushall

For the beginners, which I was once:

  1. open SSH and connect to your Nextcloud server
  2. sudo bash

Empty the table oc_files_lock:

  1. mariadb -u username_of_database -p Database_name
  2. enter database password
  3. DELETE FROM oc_files_lock;
  4. ctrl+c or type quit to exit

Because in my scenario we are using Redis WITH a UNIX SOCKET:

  1. redis-cli -s /var/run/redis/redis-server.sock
  2. flushall
  3. ctrl+c or type quit to exit

Finished
