As long as you back up the database it's fine. You can also use an HA database server so changes are synced between both servers instantly.
S3 is significantly better as far as performance goes, since you're removing the filesystem from the equation. This becomes even more apparent on FUSE-based filesystems, which are the go-to for any kind of cluster storage.
Parsing the database is still not ideal. I'm going to take a look at the source code and see how Nextcloud handles removing previews when a user is deleted from the system.
Because if that code doesn't exist, then Nextcloud is not a production-ready system as far as I'm concerned.
Edit: To expand on the above, it's because a single user can use 100 GB+ in image previews depending on your server's preview settings. Some obnoxious users might even rack up 1 TB+ depending on how many images they have, if you allow a lot of storage per user. If those previews aren't deleted when a user is removed from the system… well, on a highly active Nextcloud server - say Hetzner's - you're looking at a lot of data being left behind, along with a ballooned MySQL database.
Allow me to explain the current solution to this problem. You get to go through the oc_filecache table and parse every filename, then take every preview entry and make sure you don't touch the theme images. Once you've done that, you have to feed the list to a script that goes into your S3 bucket and manually removes each object.
And since each urn: object name is unique per file ID rather than per filename, you have to parse that mapping out of the database too.
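To make that concrete, here's a rough sketch of what such a script could look like, in Python with pymysql and boto3. The appdata prefix, DB credentials, and bucket name are placeholders, and the logic for deciding which previews are actually orphaned is left out entirely; this only shows the oc_filecache → urn:oid → S3 mapping, assuming S3 primary storage where each file is stored under the key `urn:oid:<fileid>`:

```python
# Rough sketch only -- credentials, bucket name, and the appdata
# prefix are placeholders, not real values.
import pymysql
import boto3

APPDATA_PREFIX = "appdata_abc123"  # your instance ID will differ

db = pymysql.connect(host="localhost", user="nextcloud",
                     password="secret", database="nextcloud")
s3 = boto3.client("s3")

with db.cursor() as cur:
    # Preview files live under appdata_<instanceid>/preview/;
    # explicitly excluding theming/ keeps the theme images safe.
    cur.execute(
        "SELECT fileid, path FROM oc_filecache "
        "WHERE path LIKE %s AND path NOT LIKE %s",
        (f"{APPDATA_PREFIX}/preview/%", f"{APPDATA_PREFIX}/theming/%"),
    )
    rows = cur.fetchall()

for fileid, path in rows:
    # With S3 primary storage the object key is derived from the
    # fileid in oc_filecache, not from the filename.
    key = f"urn:oid:{fileid}"
    print(f"would delete {key} ({path})")
    # s3.delete_object(Bucket="nextcloud-bucket", Key=key)
    # ...and you'd also want to delete the matching oc_filecache row,
    # or the database keeps pointing at an object that no longer exists.
```

It runs as a dry run (print only) until you uncomment the delete call, which is about the minimum amount of caution I'd want before letting a script loose on a production bucket.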