Numerous Problems with External Storage

External storage has never been especially reliable for me (particularly Google Drive, which is incredibly slow). I’ve always had timeout issues, followed by problems where NC thinks a file exists when it doesn’t. However, I’ve persevered with it and have always managed to accomplish what I want to do.

However, ever since the 11.0.3 update (I think, but I’m now on 12), external storage support has become unusable for me. Almost every time I move files around on external storage I get an error about the file being locked. Sometimes this error goes away if I retry; other times the ‘lock’ stays there permanently. What’s strange is that this is predominantly happening with an SFTP storage location on the same machine (connected via the loopback address), so it’s not even going over the internet.

This has become extremely problematic, as I use NC+SFTP to sync a rather large RPM repository between devices and my web server. So far the only way to free these ‘locks’ is to delete the entries in Nextcloud’s oc_file_locks table in the database.
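For anyone else hitting this, the manual cleanup I’ve been doing looks roughly like the following. This is a sketch of my workaround, not an official procedure; the occ path, the ‘apache’ web-server user, and the database name/credentials are assumptions you’d adjust for your own install:

```shell
# Stop new locks being taken while the table is cleared
# (occ path and 'apache' user are assumptions for a Fedora install).
sudo -u apache php /var/www/nextcloud/occ maintenance:mode --on

# Clear every entry from the file-locks table
# (MariaDB/MySQL; database name and user are placeholders).
mysql -u nextcloud -p nextcloud -e 'DELETE FROM oc_file_locks;'

# Bring Nextcloud back online.
sudo -u apache php /var/www/nextcloud/occ maintenance:mode --off
```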

Obviously the file-locking issue was introduced somewhere in the update process, but on a wider level there must be something configuration-wise that I’ve missed to be getting such poor performance out of external storage, given how Nextcloud has scaled to massive installations for other organisations.

Could anyone suggest some things to check?

My Setup:
- Nextcloud 12
- Email + Collabora apps only
- PHP 7.1
- Redis installed
- OPcache enabled
- Fedora 25

Have you also logged this as an issue on GitHub?

/sub as I’m planning a migration of 6TB into external storage…

I have now:

So, no response on GitHub.

This problem is getting worse, to the point where I’m actually losing data. I hit a constraint violation on the oc_file_locks table in the form of duplicate entries; this led to the sync client deleting the data from my local machine, and in turn Nextcloud removed it from the external mount.
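If anyone wants to check whether their own oc_file_locks table has ended up in this state, here is a quick diagnostic query (database name and user are placeholders for my setup):

```shell
# List lock keys that appear more than once; this should return
# nothing if the lock_key_index unique index is intact.
mysql -u nextcloud -p nextcloud -e \
  'SELECT `key`, COUNT(*) AS n FROM oc_file_locks GROUP BY `key` HAVING n > 1;'
```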

I don’t know if this has something to do with the length of the filenames I’m dealing with, for example 1eb8c0b1371439b5faef2fd5a64558f23f3cb4ddcf93cc3018d2af1cbdbf6278-other.xml.gz; it’s just a thought.

Something that is bothering me, though, is that I configured Redis, so I don’t understand why these file locks are being stored in the central database. How do I actually check that Redis is working in conjunction with NC?
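In case it helps anyone answer: my understanding is that Nextcloud only uses Redis for file locking when the 'memcache.locking' key is set in config.php, and falls back to the oc_file_locks table otherwise. A rough way to check (install path is an assumption for a standard setup):

```shell
# If this key is missing, Nextcloud falls back to database locking
# via oc_file_locks (install path is an assumption).
grep -n 'memcache' /var/www/nextcloud/config/config.php
# Redis locking needs a line like:
#   'memcache.locking' => '\OC\Memcache\Redis',

# Watch Redis live while moving files in the web UI; lock activity
# should show up here if Redis locking is actually in use.
redis-cli MONITOR
```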

@nickvergessen @MorrisJobke @LukasReschke @tflidd

Can anyone help, please? The GitHub issue is above.

Further to my previous comment - this is what I’m talking about:
Fatal webdav Doctrine\DBAL\Exception\UniqueConstraintViolationException: An exception occurred while executing 'INSERT INTO oc_file_locks (key, lock, ttl) SELECT ?, ?, ? FROM oc_file_locks WHERE key = ? HAVING COUNT(*) = 0' with params ["files/2d6462b23d84ead50303eb5e314ae4bd", -1, 1496360718, "files/2d6462b23d84ead50303eb5e314ae4bd"]: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'files/2d6462b23d84ead50303eb5e314ae4bd' for key 'lock_key_index' (9 hours ago)
Fatal webdav OCA\DAV\Connector\Sabre\Exception\FileLocked: HTTP/1.1 423 "RPM Repository/fc25/SRPMS/repodata" is locked (9 hours ago)

Solved by ?

Actually, no. That may have solved the problems around duplicates in the database, but the file locking still persists.

Also, no one has yet answered my question about Redis, i.e. why is the oc_file_locks table being used for locking when Redis is enabled?

Keep pinging on GH as well; this needs a dev response, I think.

Should I just create another GitHub issue, since the last one was closed without resolution?

I reopened it, carry on with the updates :slight_smile: