OCC files:cleanup - Does it delete the db table entries of the missing files?

Hello

Recently we ran into a problem: after deleting some files from the server, they still appeared in the web interface and also blocked the desktop client's sync process, making it largely unusable. We found out that the files were actually deleted from the server, but the corresponding entries in the oc_filecache table were still there. We solved the problem by simply deleting those entries from the table.
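For illustration, the manual fix amounted to something like this (a standalone Python/SQLite sketch with made-up paths and a heavily simplified one-column schema; our real instance of course uses the actual ownCloud database, not SQLite):

```python
import os
import sqlite3
import tempfile

# Minimal stand-in for the real oc_filecache table (heavily simplified schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE oc_filecache (fileid INTEGER PRIMARY KEY, path TEXT)")

data_dir = tempfile.mkdtemp()
# One file that still exists on disk, one that was deleted outside of ownCloud.
open(os.path.join(data_dir, "report.pdf"), "w").close()
db.executemany(
    "INSERT INTO oc_filecache (path) VALUES (?)",
    [("report.pdf",), ("deleted.pdf",)],
)

# What we did by hand: delete cache rows whose file no longer exists on disk.
stale = [
    fileid
    for fileid, path in db.execute("SELECT fileid, path FROM oc_filecache")
    if not os.path.exists(os.path.join(data_dir, path))
]
db.executemany("DELETE FROM oc_filecache WHERE fileid = ?", [(i,) for i in stale])

remaining = [row[0] for row in db.execute("SELECT path FROM oc_filecache")]
print(remaining)  # only the file that is still on disk
```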

My question is whether there is a tool/command to delete the oc_filecache entries that have no corresponding files on the server.
I read about the occ files:cleanup command, but it looks like it does the opposite [1]:

files:cleanup tidies up the server’s file cache by deleting all file entries that have no matching entries in the storage table.

Any ideas?

Regards
Shak
[1] https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html#file-operations

It does exactly what you need, removing the file entries in oc_filecache if the actual file is not there.

On top of that, this command should be executed regularly by cron.php. How long after the files were removed did you still see them in the Files app?


In this case, the explanation in the docs is confusing, especially the “in the storage table” part!

We noticed the problem immediately, but it stayed there for a while (certainly longer than the cron job’s 15-minute interval) before we solved it. However, we may have had a problem with our cron setup itself, so that is still a possible cause.
About files:cleanup, we will try it if the problem comes back. Thanks!

Yeah, I also remember stumbling over “table” there. Maybe because in the end everything is a table :smile:.

About cron: As far as I know, some tasks of cron.php are not run on every cron execution. A cleanup, and therefore a full filecache scan, every 15 minutes would cost more in performance than it gains, from my point of view.
In the web UI admin panel you could/should check whether cron runs regularly.


Sorry to have to say this, but the accepted answer is simply wrong.
The command does exactly what it says: It checks the oc_storage table and removes all entries from oc_filecache that have no valid storage assigned.
Checking with the most recent version (18.0.3) of Nextcloud, we had plenty of entries for non-existent local storages that were still marked active. (We migrated our instance from an external web server to a Synology NAS, then later to the nextcloud-fpm docker environment.)
Only after removing those did the cleanup command start doing anything. It appears that the file system is not checked at all when running the cleanup command. We still have tons of outdated items in the oc_filecache table; this seems to be accepted by the devs.
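As I understand it, the command's logic is roughly the following (a standalone Python/SQLite sketch; table and column names are simplified stand-ins for the real schema):

```python
import sqlite3

# Simplified stand-ins for the oc_storages and oc_filecache tables.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE oc_storages (numeric_id INTEGER PRIMARY KEY, id TEXT)")
db.execute(
    "CREATE TABLE oc_filecache (fileid INTEGER PRIMARY KEY, storage INTEGER, path TEXT)"
)

db.executemany("INSERT INTO oc_storages VALUES (?, ?)", [(1, "home::alice")])
db.executemany("INSERT INTO oc_filecache VALUES (?, ?, ?)", [
    (10, 1, "files/a.txt"),       # storage 1 exists        -> row is kept
    (11, 99, "files/ghost.txt"),  # storage 99 has no entry -> row is removed
])

# files:cleanup, as I understand it: remove cache rows whose storage id has
# no matching row in the storages table. The filesystem is never consulted.
db.execute("""DELETE FROM oc_filecache
              WHERE storage NOT IN (SELECT numeric_id FROM oc_storages)""")

kept = [p for (p,) in db.execute("SELECT path FROM oc_filecache")]
print(kept)  # only rows with a valid storage survive
```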


Many thanks for the correction. Yes, this indeed makes sense and explains the “in the storage table” part that we didn’t understand correctly.

Indeed it is not really a helpful cleanup, since I observe obsolete oc_storage entries being left behind even after, e.g., the related external storages have been removed. It is especially an issue when admins move the data directory and do not manually replace/remove the old local data directory entry. Basically it would be great to have:

  1. Regularly sync all oc_storage entries with the actually present users, external storages, and the config.php data directory entry, and/or fix the root causes of obsolete oc_storage entries in the first place.
  2. Have an additional occ command to do a real filesystem sync for all oc_filecache entries, in case certain files have been added/removed/changed manually, were lost due to corruption, or for other reasons. I guess it is too heavy for a regular cron job, but having this option for manual cleanup would be great.
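To make point 2 concrete, such a filesystem sync would have to cover both directions. A toy sketch (standalone Python/SQLite, flat directory, single-column schema; nothing like a real implementation would be):

```python
import os
import sqlite3
import tempfile

# Toy single-column stand-in for oc_filecache.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE oc_filecache (fileid INTEGER PRIMARY KEY, path TEXT)")
db.execute("INSERT INTO oc_filecache (path) VALUES ('old.txt')")  # file is gone

data_dir = tempfile.mkdtemp()
open(os.path.join(data_dir, "new.txt"), "w").close()  # added manually on disk

cached = {p for (p,) in db.execute("SELECT path FROM oc_filecache")}
on_disk = set(os.listdir(data_dir))

# The two directions such a sync command would need to cover:
for path in cached - on_disk:    # cache entry exists, file is gone -> drop row
    db.execute("DELETE FROM oc_filecache WHERE path = ?", (path,))
for path in on_disk - cached:    # file exists, no cache row -> add it
    db.execute("INSERT INTO oc_filecache (path) VALUES (?)", (path,))

synced = sorted(p for (p,) in db.execute("SELECT path FROM oc_filecache"))
print(synced)
```

(As far as I know, `occ files:scan` already walks the filesystem per user and updates oc_filecache, which covers at least part of this wish for manual runs.)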

I absolutely agree with you, except for one thing:
Automatic cleanup, especially for storage entries, must be limited to local::* and home::* storages only. Any mounted external storage might encounter temporary unavailability (network issues etc.), which might destroy valid entries.
However, for all home:: storages and external storage filesystems, I would expect the storage filesystem implementation to take care of removed mounts. Sadly, the current approach (at least for all local mounts I have seen) for the storage table seems to be “insert only, never delete, keep the history complete”, which doesn’t really make sense for classical relational database systems. ^^
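To make the restriction concrete, a toy sketch of the guard I have in mind (standalone Python/SQLite; the `stale` flag and the simplified schema are invented for illustration, while the `home::`/`local::` id prefixes follow the naming convention seen in real storage entries):

```python
import sqlite3

# Toy stand-in for the storages table, with an invented "stale" marker.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE oc_storages (numeric_id INTEGER PRIMARY KEY, id TEXT, stale INTEGER)"
)
db.executemany("INSERT INTO oc_storages VALUES (?, ?, ?)", [
    (1, "home::alice", 1),        # stale home storage -> safe to auto-remove
    (2, "local::/old/data/", 1),  # stale local mount  -> safe to auto-remove
    (3, "smb::share", 1),         # external storage: may just be a network
                                  # hiccup -> must never be auto-removed
])

# Only storages whose backend is fully under the server's control are eligible
# for automatic cleanup.
db.execute("""DELETE FROM oc_storages
              WHERE stale = 1
                AND (id LIKE 'home::%' OR id LIKE 'local::%')""")

survivors = [i for (i,) in db.execute("SELECT id FROM oc_storages")]
print(survivors)  # the external storage is left untouched
```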

I don’t mean removing external storages from the database when they are temporarily unavailable, but when they have been removed by the user, or when the related storage app has been removed. I remember finding two external storage entries in my oc_storage even though I had definitely removed them the regular way and disabled + removed the related app. In such a case, going by your info, all oc_filecache entries for those storages survive as well.

I remember one issue on GitHub where, in an office, a single local storage/volume was added as external storage for all colleagues. The database then has one related oc_storage entry per user, and every file has a dedicated oc_filecache entry for every user as well. The result was an oc_filecache table tens of GiB in size and server/database overload, leading to database and finally file system corruption. Now I can only imagine what happens if the central storage/volume is changed, again doubling all database entries since the old ones are not removed… an unacceptable situation IMO.
