Nextcloud version: 25.0.6
Operating system and version: Snap on Ubuntu 20.04
I recently messed up some files in my Nextcloud folder and decided to try rolling it back to a previous snapshot of that folder (underlying ZFS filesystem).
I shut down the Nextcloud server, rolled back the snapshot, started the server again, then ran nextcloud.occ files:scan USER. This pulled in all the files (and their timestamps) correctly, but wiped out all the directory timestamps. It seems files:scan sets each directory's Modified time to the scan time, which is incorrect. Instead it should use whatever time the filesystem reports.
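For reference, the rollback-and-rescan procedure above looks roughly like this as a script. This is only a sketch; the dataset name, snapshot name, and username are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch of the rollback + rescan procedure described above.
# "tank/nextcloud-data", "known-good", and "alice" are hypothetical placeholders.
rollback_and_rescan() {
    snap stop nextcloud                          # stop the whole snap first
    zfs rollback tank/nextcloud-data@known-good  # revert the data folder
    snap start nextcloud
    nextcloud.occ files:scan alice               # re-index that user's files
}

# Only run on a machine that actually has the snap installed.
if command -v nextcloud.occ >/dev/null 2>&1; then
    rollback_and_rescan
fi
```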
I’ve even tried mounting that data directory read-only. Nextcloud will use its own false “Modified” time that does not reflect any of the true create/change/modify/access times reported by stat.
I’ll probably try to submit this as a bug, but in the short term does anyone have suggestions for working around it and somehow tricking or forcing Nextcloud into using the correct directory modified timestamps? I don’t know how to access the MySQL database since I’m running the containerized Snap.
The oc_filecache table (which I think is Nextcloud's primary file index?) has two columns for Modified time: mtime (displayed to the user) and storage_mtime (matching the real files). When Nextcloud scans new folders, it correctly scrapes the storage_mtime but then incorrectly uses the scan time as the mtime. So I ran the following SQL to correct all of Nextcloud's timestamps; as far as I can tell, this worked fine and didn't cause any secondary issues:

UPDATE oc_filecache SET mtime = storage_mtime;
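For anyone else on the Snap wondering how to reach the database at all: assuming the snap exposes a pre-authenticated nextcloud.mysql-client wrapper (an assumption; check your snap's command list), the fix can be piped in like this:

```shell
#!/bin/sh
# Apply the mtime fix through the snap's bundled MySQL client.
# Assumes a nextcloud.mysql-client wrapper exists and is already
# authenticated; "nextcloud" is assumed to be the database name.
SQL='UPDATE oc_filecache SET mtime = storage_mtime;'

if command -v nextcloud.mysql-client >/dev/null 2>&1; then
    printf '%s\n' "$SQL" | nextcloud.mysql-client nextcloud
fi
```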
(What is the purpose of the storage_mtime column in oc_filecache table?)
It’d be cool to hear from anyone who knows why Nextcloud deliberately tracks and displays a Modified time other than the true time on the file.
Also, this is making me rethink my backup strategy… I assumed Nextcloud was quite robust in keeping its database in sync with the underlying files. But in researching this problem, it seems that files:scan only adds new files, and files:cleanup only removes links between a cache table and a file list table. I believe there is no mechanism to bring the Nextcloud database 100% into sync with the underlying filesystem. Please comment below if this is wrong; I'd love to know how to do this.
(Using the occ command — Nextcloud 15 Administration Manual documentation)
Therefore, I might try taking atomic snapshots of the entire Nextcloud server so that I can roll back my files & the database to a known point, since it’s not really possible to resync Nextcloud after rolling back a subset of files alone.
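A minimal sketch of what such an atomic whole-server snapshot could look like, assuming (hypothetically) that both the data directory and the database live under one tank/nextcloud dataset:

```shell
#!/bin/sh
# Take a recursive ZFS snapshot covering the data directory and the
# database together, so both can be rolled back to one consistent point.
# "tank/nextcloud" is a hypothetical dataset name.
atomic_snapshot() {
    snap stop nextcloud    # quiesce files and MySQL together
    zfs snapshot -r "tank/nextcloud@$(date +%Y-%m-%d)"
    snap start nextcloud
}

if command -v zfs >/dev/null 2>&1 && command -v snap >/dev/null 2>&1; then
    atomic_snapshot
fi
```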
Okay… just in case someone ever stumbles on this in the future - it didn’t “just work”. After correcting the timestamps everything looked fine, but I couldn’t move/edit any of the restored files. I think the database had some leftover metadata about them?
After monkeying around with rebooting, snapshotting, rolling back, and running maintenance commands, this is now working. I don't know the right order, but it is working at least. I suspect these commands were the key:
maintenance:data-fingerprint — update the system's data-fingerprint after a backup is restored
maintenance:repair — repair this installation
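For anyone repeating this, here is the sequence I would try via the snap's occ wrapper. This is my best guess at a sensible order, not a verified recipe:

```shell
#!/bin/sh
# Post-restore maintenance sequence (a guess at the order, not an
# officially documented recipe).
post_restore() {
    nextcloud.occ maintenance:mode --on
    nextcloud.occ files:scan --all               # re-index restored files
    nextcloud.occ maintenance:data-fingerprint   # tell sync clients a restore happened
    nextcloud.occ maintenance:repair             # clean up leftover metadata
    nextcloud.occ maintenance:mode --off
}

if command -v nextcloud.occ >/dev/null 2>&1; then
    post_restore
fi
```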
The repair didn't find anything alarming, so I suspect the data-fingerprint update was the key (after manually fixing the mtime). But who knows.
All in all, this has been a bit disappointing. I accidentally deleted a folder and restored it with ZFS in about 5 minutes, then spent the whole weekend trying to coax Nextcloud into recognizing those files correctly. Even now that I've found the official backup docs, I don't think a full backup would help besides rolling back the entire server for one deleted folder.
(Backup — Nextcloud latest Administration Manual documentation)
As I’m always telling people… don’t tamper with the data folder unless you really have to. It’s asking for trouble.
You didn’t mention exactly what led to a ZFS rollback (presumably of the entire data folder?), but if possible, it would probably be much cleaner to undelete the files in Nextcloud or re-upload them, even from a backup if necessary.
Rolling back the entire data folder to a snapshot is a sledgehammer that they don’t expect to be used because it would only be an absolute last resort on a production/multi-user system. In that kind of situation, a sysadmin would most likely restore the complete system including the database to a consistent state.
Karl, thanks for the reply! This experience definitely taught me the hard way not to mess with the data folder.
I accidentally deleted a large folder within my user directory. When I tried to undelete it with Nextcloud, I got an error (which I think was just the browser timing out). But the undelete touched all the Modified times. At a glance, it looked like Nextcloud restores by copying files, whereas I'd need them moved back to preserve timestamps.
Re-uploading from backup is what I mean about needing to rethink my backup strategy. My backups are currently built around backing up and restoring files server-side; I don't have a good way of serving the backups back to clients so they can re-upload the files. Definitely something I'd like to do better in the future.
In the meantime, I guess I should at least try writing the required backup script to stop Nextcloud, dump the database, and copy the data, because the easy ZFS snapshots won’t cut it here unfortunately.
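A first sketch of that script, under some assumptions: that the snap provides nextcloud.occ and nextcloud.mysqldump wrappers, that the data lives at the snap's default path, and that /backup/nextcloud (hypothetical) is the destination:

```shell
#!/bin/sh
# Sketch of a consistent backup: freeze writes, dump the DB, copy the data.
# Assumes the snap's nextcloud.occ and nextcloud.mysqldump wrappers exist;
# DEST is a hypothetical backup location, DATA is the snap's default data path.
DEST=/backup/nextcloud
DATA=/var/snap/nextcloud/common/nextcloud/data

backup() {
    mkdir -p "$DEST"
    nextcloud.occ maintenance:mode --on           # block writes during backup
    nextcloud.mysqldump > "$DEST/nextcloud-db.sql"
    rsync -a "$DATA/" "$DEST/data/"
    nextcloud.occ maintenance:mode --off
}

if command -v nextcloud.occ >/dev/null 2>&1; then
    backup
fi
```

Maintenance mode (rather than stopping the snap outright) keeps MySQL running so the dump and the file copy can happen while writes are blocked.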