I’ve been mulling this over for a little while, so I figured it wouldn’t hurt to gain the opinion of the NC community before I make any drastic changes.
Prior to NC I’ve spent time with OC, Pydio, BTSync (Resilio), Syncthing, Seafile and likely others, too. None reassured me to the point where I’d consider trusting them with my actual “critical” data, and ultimately, after dabbling with a few hundred gigs generated within the solution, I’d move all the data generated or imported from other places out of the platform and back onto my flat-file ZFS array for another day.
NC, after months of testing, huge improvements to the Android app (h/t @Andy, @mario, etc., after only needing to clean up several thousand duplicated files a few times) and some experimenting with the underlying infrastructure I’m running it on, now feels solid enough that I’m gaining the confidence needed to bring in the 6TB I haven’t before.
The question is: given there’s a good deal of churn on the data, due to the several other servers I run that make changes at the file-system level in many directories, do I bring it in as external storage, or is there a reason I shouldn’t?
Your thoughts, if you please.
(All data is backed up regardless, but I don’t want to have to restore if I can help it generally)
Personally, I would prefer to mount it as local storage, but the problem is how you import it into the database. The best solution may be to mount each folder as external storage, but use the ‘local’ option when mounting it with the External Storage app. That way you get around the problem of having to import it into Nextcloud, and can easily mount/unmount it whenever you need to move it (for example).
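If you go that route, the same ‘local’ external storage mount can also be set up via occ instead of the web UI. A rough sketch, where the mount name, the path under /mnt, and the web-server user are all placeholders for whatever your setup actually uses:

```shell
# Enable the External Storage app if it isn't already, then register a
# "local" type mount pointing at an existing directory on the server.
# "/Media", the datadir path and the www-data user are placeholders.
sudo -u www-data php occ app:enable files_external
sudo -u www-data php occ files_external:create "/Media" local null::null \
    -c datadir="/mnt/tank/media"

# Confirm the mount was registered
sudo -u www-data php occ files_external:list
```

The nice part is that the directory stays a plain directory on disk, so unmounting in Nextcloud never touches the files themselves.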
Yes, exactly: no need to mount it as anything other than local external storage, because it’s all there and available. I’ve noticed some users complaining about issues with external storage, though, so I’m hesitant about going that route if moving it all into the data folder would be more reliable.
Yeah. If you don’t plan on making changes from the ‘outside’, I’d use local storage, as that is better from a performance and reliability point of view. Note that when you use external storage, NC only scans it when you visit it with the web interface. If you don’t, that can be problematic when you frequently make changes outside of Nextcloud.
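If changes do happen outside Nextcloud, one common workaround is to rescan the mount on a schedule with occ. A sketch, assuming a crontab on the Nextcloud server; the username, install path and mount path are placeholders:

```shell
# Crontab fragment: rescan one user's external mount every 15 minutes so
# file-system-level changes are picked up without a web-interface visit.
# "admin", "/var/www/nextcloud" and "Media" are placeholders.
*/15 * * * * sudo -u www-data php /var/www/nextcloud/occ files:scan --path="/admin/files/Media" --quiet
```

How often to run it is a trade-off: scanning a large mount is I/O-heavy, so match the interval to how quickly changes actually need to appear.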
That’ll require a bit of re-jigging on my system to isolate the data folder for snapshots. I’ll see what I can do.
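On the ZFS side, the isolation could be as simple as giving the data folder its own dataset. A sketch of what I have in mind; the pool/dataset names and mountpoint are placeholders for my actual layout:

```shell
# Carve the Nextcloud data directory out into its own ZFS dataset so it
# can be snapshotted (and rolled back) independently of the rest of the pool.
# "tank/nc-data" and the mountpoint are placeholders.
zfs create tank/nc-data
zfs set mountpoint=/var/nextcloud/data tank/nc-data
chown -R www-data:www-data /var/nextcloud/data

# Snapshot before importing the 6TB, in case a rollback is needed
zfs snapshot tank/nc-data@pre-import
```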
On what you said about external storage: why is the traditional data folder more reliable, and why would not accessing NC for a while be problematic if I’d only need access to it via the web when I … access it via the web?
In that case it makes NO difference at all. It’s simply that if you use the client exclusively, you risk external storage not being updated often enough to catch all changes. But if you use the web interface, all is good.
Also, this might have changed by now; @icewind can probably say whether we periodically check external storage irrespective of how it is accessed, or however else it works these days.
I really thought that was already the case for the longest time, disappointed when I found out it wasn’t.
Coming back to your point about /data/ being more stable, though: aside from some issues I’ve read about, are there any showstoppers? My data is backed up regardless, but I’m going to be a pretty grumpy Bayton if stuff goes wrong for no real reason.
That point is specifically about external storages like Google Drive and Dropbox, which rely on another server being reliable and fast. If you’re talking about a local drive, or even NFS, it makes no difference.
Happy days. I think we’re onto a plan in that case.
One more: I can mount the datasets under an auth’d user outside of www-data, so that’s fine. When I get lazy and opt to move stuff around via the web interface/WebDAV rather than the traditional command line/SFTP/NFS/Samba, is it going to be any less reliable in terms of data continuity/integrity?