I've been mulling this over for a little while, so I figured it wouldn't hurt to get the opinion of the NC community before I make any drastic changes.
Prior to NC I've spent time with OC, Pydio, BTSync (Resilio), SyncThing, Seafile and likely others, too. None have reassured me to the point I'd consider doing anything with my actual "critical" data, and ultimately, after dabbling with a few hundred gigs generated within the solution, I'd move all the data generated or imported from other places out of the platform and back onto my flat-file ZFS array for another day.
NC, after months of testing, huge improvements to the Android app (h/t @Andy, @mario, etc., after only needing to clean up several thousand duplicated files a few times) and some experimenting on the underlying infrastructure I'm running it on, now feels solid enough that I'm gaining the confidence needed to bring in the 6TB I haven't before.
The question is, given there's a good deal of churn on the data due to the several other servers I run that make changes on a file-system level to many directories, do I bring it in as external storage, or is there a reason I shouldn't?
Your thoughts, if you please.
(All data is backed up regardless, but I don't want to have to restore if I can help it, generally.)
So your question is: should the data be on the local filesystem, or should it be mounted as external storage? Where would the data be if it was mounted as external storage?
Ah no, the question is do I load it as local storage or external - the data remains in the same place on the container host, but how it's presented to NC results in a vastly different experience.
Personally, I would prefer to mount it as local storage, but the problem is how you import it into the database. The best solution may be to mount each folder as external storage, but use the "local" option when mounting it using the External Storage app. That way you get round the problem of having to import it into Nextcloud, and can easily mount/unmount it whenever you need to move it (for example).
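For what it's worth, this can also be scripted with `occ` rather than clicking through the External Storage app; a sketch along these lines (the mount name `/Media` and the path `/mnt/tank/media` are placeholders for your own):

```shell
# Create a "Local" external storage mount pointing at an existing
# directory on the host. Run as the web server user, from the
# Nextcloud installation directory.
# Arguments: mount point, storage backend, authentication backend;
# -c passes backend config (datadir = the actual path on disk).
sudo -u www-data php occ files_external:create /Media local null::null \
    -c datadir=/mnt/tank/media

# List configured mounts to confirm it was created
sudo -u www-data php occ files_external:list
```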
Yes, exactly - no need to mount it as anything other than local in external storage, because it's all there and available. I noticed some users complaining about issues with external storage, so I'm just hesitant about doing it if moving it all into the data folder would be more reliable.
Yeah. If you don't plan on making changes from the "outside", I'd use local storage, as that is better from a performance and reliability point of view. Note that when you use external storage, NC only scans it when you visit it with the web interface. If you don't, that can be problematic if you frequently make changes outside of Nextcloud.
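If you do end up making changes outside Nextcloud, a scheduled `occ files:scan` is the usual workaround so the database catches up without anyone having to open the web UI. A sketch, assuming a user `alice` and a typical install path:

```shell
# Re-scan one path so outside changes show up in Nextcloud.
# Run as the web server user; adjust the occ path to your install.
sudo -u www-data php /var/www/nextcloud/occ files:scan --path="/alice/files/Media"

# Or rescan everything (slower on large datasets):
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```

Dropping the first command into a nightly cron job would keep churn from the other servers visible without manual intervention.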
But external storage gives you the flexibility @terry_tibbles noted.
That'll require a bit of re-jigging on my system to isolate the data folder for snapshots. I'll see what I can do.
What you said about external storage - why is the traditional data folder more reliable, and why will not accessing NC for a while be problematic if I'd only need access to it via the web when I … access it via the web?
In that case it makes NO difference at all. It is simply that if you use the client exclusively, you risk the external storage not being updated often enough to catch all changes. But if you use the web interface, all is good.
Also, this might have changed by now - @icewind can probably say whether we periodically check external storage irrespective of how it is accessed, or however else it works these days.
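There is at least a per-mount knob in this area: the `filesystem_check_changes` option on an external storage mount, which tells Nextcloud to re-check the backing filesystem for outside changes on access instead of trusting its cache. A sketch, where the mount ID `2` is just an example taken from the list output:

```shell
# Find the numeric mount ID of the external storage in question
sudo -u www-data php occ files_external:list

# 1 = check for outside changes once per direct access (0 = never)
sudo -u www-data php occ files_external:option 2 filesystem_check_changes 1
```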
I really thought that had been the case for the longest time; I was disappointed when I found out it wasn't.
I'm coming back to your point about /data/ being more stable, though. Aside from some issues I've read about:
Are there any showstoppers? My data is backed up regardless, but I'm going to be a pretty grumpy Bayton if stuff goes wrong for no real reason.
The thing there is explicitly about things like Google Drive and other external storages (Dropbox…) that rely on another server being reliable and fast. If you're talking about a local drive, or even NFS, it makes no difference.
Happy days. I think weāre onto a plan in that case.
One more - I can mount the datasets under an auth'd user outside of www-data, so that's fine. When I get lazy and opt to move stuff about from the web interface/WebDAV rather than the traditional command line/SFTP/NFS/Samba, is it going to be any less reliable in terms of data continuity/integrity?
Well, for big files that's possibly risky, as there's a time-out on PHP - if an operation takes longer than that, it gets killed halfway. Not sure how problematic that is in reality, as this is done client-side in JavaScript and it might be done in small batches or something. A question for @icewind.
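The PHP-side limits are at least tunable if the timeout does bite; a hedged sketch of the knobs usually involved (all values here are examples, not recommendations):

```shell
# The PHP limits that most often kill long-running uploads live in
# php.ini (or an override file for your PHP-FPM pool), e.g.:
#   max_execution_time  = 3600
#   max_input_time      = 3600
#   upload_max_filesize = 16G
#   post_max_size       = 16G

# The web client uploads in chunks; the chunk size is configurable
# via occ (value in bytes - 20971520 is 20 MB, purely an example):
sudo -u www-data php occ config:app:set files max_chunk_size --value 20971520
```

Smaller chunks keep each individual request well under the execution timeout, at the cost of more round trips.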