AWS S3 as primary storage & redundancy

I have now configured my Nextcloud instance with AWS S3 as its primary storage.

Everything works fine, but when investigating the inner workings of this setup I discovered that the S3 bucket contains only a single top-level folder, filled with a huge number of “urn:oid:[0-9]+” files.
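To illustrate, here is a rough sketch of what listing such a bucket shows: nothing but flat “urn:oid:…” keys. This assumes boto3 and a placeholder bucket name; credentials come from the environment.

```python
# Minimal sketch: list the object keys Nextcloud writes when S3 is the
# primary storage. The bucket name is a placeholder for my own setup.
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are configured in the environment

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-nextcloud-bucket"):
    for obj in page.get("Contents", []):
        # Every key looks like "urn:oid:<numeric fileid>" -- no folders, no real names.
        print(obj["Key"], obj["Size"])
```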

My thoughts went to the “Achilles’ heel” of this way of working…

I have read recently that it is wise not only to back up the database regularly (which I already do) but also to replicate it to a secondary location, “just in case”. I think this is wise, and I’m going to set that up as well…
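For what it’s worth, this is roughly what I have in mind for the nightly dump plus off-site copy. It’s only a sketch: the database name, credentials, paths and the backup host are placeholders, and it assumes a MySQL/MariaDB database.

```python
# Minimal sketch: nightly database dump, compressed and copied to a second location.
# Database name, credentials, paths and backup host are placeholders.
import datetime
import subprocess

stamp = datetime.date.today().isoformat()
dump_file = f"/backup/nextcloud-db-{stamp}.sql.gz"

# Dump the Nextcloud database and compress it on the fly.
with open(dump_file, "wb") as out:
    dump = subprocess.Popen(
        ["mysqldump", "--single-transaction", "-u", "nextcloud", "-psecret", "nextcloud"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["gzip"], stdin=dump.stdout, stdout=out, check=True)
    dump.wait()

# Push the dump off-site, e.g. via rsync to a secondary host.
subprocess.run(["rsync", "-a", dump_file, "backup-host:/srv/nextcloud-db/"], check=True)
```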

With the S3 bucket there is no redundancy whatsoever as to who owns a particular file, what its file name is, etc…

Why!?
I mean, S3 supports folders and file names… why isn’t the folder and file name structure identical to the “default” way of working with a “disk folder” as primary storage? That would give much more redundancy “in case of disaster”!

Within the “default disk folder structure” there are folders for each user, a folder for versions, subfolders, real file names, etc.! One can simply rescan, and a “database hiccup” is fixed. With S3 storage I see a “total disaster” if the oc_filecache table somehow becomes corrupted: one ends up with a huge number of “weirdly named files”, with no idea what their names are, in which (sub)folder they should reside, or even who owns them…
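To make that dependency concrete: as far as I understand it, the only way to map such an object back to an owner and a path is through the database, roughly like the sketch below. It assumes a MySQL/MariaDB database, the default “oc_” table prefix and placeholder credentials.

```python
# Minimal sketch: map an "urn:oid:<fileid>" object back to its storage (owner)
# and path by joining oc_filecache with oc_storages.
# Credentials and the "oc_" table prefix are placeholders; adjust to your setup.
import pymysql

conn = pymysql.connect(host="localhost", user="nextcloud",
                       password="secret", database="nextcloud")

def resolve(object_key: str):
    fileid = int(object_key.rsplit(":", 1)[-1])  # "urn:oid:1234" -> 1234
    with conn.cursor() as cur:
        cur.execute(
            """SELECT s.id, f.path
                 FROM oc_filecache f
                 JOIN oc_storages s ON s.numeric_id = f.storage
                WHERE f.fileid = %s""",
            (fileid,),
        )
        # e.g. ("object::user:alice", "files/Photos/cat.jpg") -- if the table is intact.
        return cur.fetchone()

print(resolve("urn:oid:1234"))
```

If oc_filecache is gone, this lookup is gone too, which is exactly the scenario I’m worried about.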


Isn’t there an “S3 developer” here who could give some insight into this?

Sorry, I do not use S3, but I think this thread may be interesting. It is a discussion about moving from S3 to local storage and the question of the file names. Maybe it helps you. If you solve the problem described in that thread, you can also solve your problem.

And here I always thought that S3 clouds were invented so you never get the data back. :wink: Also, no backups are necessary there, right? :wink: