I have reached the max available space of my external storage on my ncp installation.
I have two questions about it, since I have a choice of how to solve this situation:
1) What is the procedure to replace the HD with a bigger one while keeping my configuration (without reinstalling everything)?
2) How can I add a second HD to NC and have it behave as if it were a single storage device? (Basically, the idea is to gain more available storage by using a second disk, while showing only the extra space.)
If that is not possible, how do I simply add an extra HD to NC?
Is there an official procedure?
Thanks in advance for your help.
As for the second storage disk, NC doesn’t support multiple data folders to my knowledge. And as said, pooling disks without redundancy is asking for data loss; RAID is the proper method for doing this.
I seem to recall there was a possibility to add external storage like Google Drive… I need to refresh my memory, but I think the same technique makes it possible to add a second HD.
That said, it is surely necessary to back up all data to a separate device in case of data loss.
I think NC will sooner or later move to a more scalable configuration.
I can’t be the only one filling up the external storage!
Don’t you think so?
Thanks for the suggestions @KarlF12
It’s not so much a question of scalability as it is a technical issue. Any system faces the same issue when trying to add a disk to backend storage. It just has to do with how disks have to be formatted and mounted.
Even if NC added the ability to have multiple data folders and assign each user to one, you’d still have the same problem: a single failure would result in data loss and the server going down.
I think another possibility is to format the new HDD with the same file system and copy all files from the old HDD to the new one (same file structure). Then mount the new HDD in place of the old one. I think that works.
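A sketch of that approach, assuming a standard Nextcloud install under /var/www/nextcloud and made-up example mount points (adjust both to your own setup):

```shell
# Put Nextcloud in maintenance mode so files don't change mid-copy
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# Copy everything from the old disk to the new (already ext4-formatted)
# disk, preserving permissions, ownership, ACLs, xattrs and hard links.
# The trailing slashes matter: copy the contents, not the directory itself.
sudo rsync -aHAX --info=progress2 /media/USBdrive/ /media/USBdrive-new/

# Swap the mount points (or update /etc/fstab), then unlock the instance
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```

The paths /media/USBdrive and /media/USBdrive-new are illustrative, not NCP defaults.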
I understand and agree.
Yet, I hope for a future solution with easier scalability that doesn’t burden the system with these issues… or some other solution that will eventually allow a system to grow more easily.
I understand your point.
Today we have to deal with what we have.
I love NC… I will try the solution you suggested.
Hi @devnull, thanks for chiming in.
That seems to be a similar solution…
Either imaging the old disk and restoring it to a bigger one, or formatting the new disk with the same fs and copying the data over…
I will see which of the two solutions is easier.
I will report back here and hopefully mark this as solved.
There are strategies for this, for example you can use LVM which allows you to easily add in new storage and extend existing partitions across new disks. But, even with that, you still need to have a reliable storage backend. In other words, at least in a production environment, you would add new arrays of redundant disks, not individual disks.
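A minimal sketch of the LVM idea (the device names /dev/sdb and /dev/sdc are placeholders, and these commands wipe the disks they touch):

```shell
# Initial setup: one physical volume, one volume group, one logical volume
sudo pvcreate /dev/sdb
sudo vgcreate nc-vg /dev/sdb
sudo lvcreate -n nc-data -l 100%FREE nc-vg
sudo mkfs.ext4 /dev/nc-vg/nc-data

# Later, grow the same volume across a second disk:
sudo pvcreate /dev/sdc
sudo vgextend nc-vg /dev/sdc
sudo lvextend -l +100%FREE /dev/nc-vg/nc-data
sudo resize2fs /dev/nc-vg/nc-data   # ext4 can be grown while mounted
```

Note that, as said above, this pools the disks without redundancy: if either disk dies, the whole volume is gone.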
Actually, I have a side question… maybe a very important one. Q: What is the official procedure to change the external storage to a bigger one?
I just ordered a 4 TB WD Red for NAS.
Making an image of my 1 TB disk would require a spare disk to image to and then restore from.
Copying the contents from the old disk directly to the new one would spare me the extra disk for imaging (probably the preferred way).
I can’t find any documentation for an official procedure to migrate/change external storage.
Thanks guys for your help.
I had started a new post because I needed to know if there was an officially supported procedure for the storage change/switch.
Between your suggestions and the other post, I finally got the result I wanted.
Basically, a copy of the structure to the new HD is the final solution.
A few considerations are needed before doing that:
A simple clone would not work, as the new disk is bigger than 2 TB and MBR doesn’t work above that.
So you need to create a GPT partition table.
Also, copying files over takes longer and is more risky; a disk-to-disk clone is better.
In the end, Clonezilla with a small change in the advanced options is the solution.
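In practice, preparing the GPT table might look like this (the device name and label are placeholders; verify with `lsblk` first, since these commands are destructive):

```shell
# Create a GPT partition table on the new >2 TB disk
# and one big ext4 partition spanning it
sudo parted /dev/sdX --script mklabel gpt
sudo parted /dev/sdX --script mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L myCloudDrive /dev/sdX1
```

If cloning with Clonezilla instead, the advanced option in question is presumably the expert-mode `-k1` ("create partition table proportionally"), which resizes the partitions to fill the larger target disk.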
Details I have written in my other post: https://help.nextcloud.com/t/offial-procedure-to-substiture-external-storage/80339/9
I hope this will help others in the same situation.
ZFS is a rather interesting file system which allows storage to be used very flexibly and also manages the health of the data on the drives. Unlike RAID, which cannot detect problems when the controller starts throwing errors (not very likely, but not impossible), ZFS can. This link may be useful for understanding the various methods of disk and data management: https://computingforgeeks.com/raid-vs-lvm-vs-zfs-comparison/
Can someone please explain what OCC is? It appears to be some kind of command-line methodology, but as much as the manual talks about it, it never seems to say what it actually is.
ZFS is also a user-unfriendly resource hog. RAID is simple, effective, and reliable. Perhaps most importantly, a sysadmin who understands even just the basics of RAID knows exactly what to expect from it, such as the exact quantity and manner of disk failures a given array will tolerate.
A while ago, I worked with a Dell engineer who was setting up a hyper-converged Hyper-V cluster. He was telling me all about how it manages the data blocks and ensures there are distributed copies of everything. All very fancy high-tech stuff. Guess what he told me when I asked him how many disk failures it will tolerate? “It depends.” He went on to explain that it depends on the disk quorum and which disks hold the copies of a particular data block.
In other words, the exact conditions under which the system will fail and result in data loss are a variable. Can’t say I’m a fan.
OCC is a command-line interface for Nextcloud. Here is the relevant documentation chapter.
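For example (assuming the default install path /var/www/nextcloud and the web server running as www-data; adjust both for your setup):

```shell
cd /var/www/nextcloud

# List every available occ command
sudo -u www-data php occ list

# A few common ones:
sudo -u www-data php occ status                 # version and install state
sudo -u www-data php occ maintenance:mode --on  # lock the instance
sudo -u www-data php occ files:scan --all       # rescan all users' files
```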
Thanks KarlF12. I could not find the information about occ in the manual, but you pointed me to a wealth of it.
One of the best sources for drive lifetimes is Backblaze’s Hard Drive Stats, which you may find useful. Surely filesystem failure is a combination of the underlying hardware and faults and omissions in the software, so what the Dell engineer said was probably correct - it depends on how the data is handled.

Your comment about the unpleasant resource demands of ZFS made me want to dig around a little. Certainly ZFS is initially an unfamiliar beast which requires a different viewpoint, with, as far as I know, no GUI to deal with it (I would not be surprised to find I am wrong in this), but to be honest, when I have used it for NAS (network-attached storage) I have not found it to be a resource hog - rather, extremely useful and adaptable. Certainly it takes some work to figure it out.

What it is good for, in my mind, is the ease with which one can add storage to a pool of devices. Using RAID or RAID-Z pairs as elements (vdevs) in a pool allows additional storage to be added and the array of devices in the pool to be treated as a single element. Any devices that fail can be replaced and “resilvered” automatically. I would be interested in any information you may have about ZFS being a resource hog - perhaps you mean that it requires a lot of memory, which is true.

For those interested in using ZFS from a user’s perspective, I would suggest looking at “ZFS Basics - An introduction to understanding ZFS” on kbDone, and for a deeper dive, “ZFS 101 - Understanding ZFS storage and performance” on Ars Technica.
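The pool-of-vdevs idea can be sketched like this (all device names are placeholders):

```shell
# Create a pool from one mirrored pair of disks
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Grow it later by adding a second mirrored pair as a new vdev;
# the extra space simply appears in the same pool
sudo zpool add tank mirror /dev/sdd /dev/sde

# Replace a failed disk; ZFS resilvers the data onto the new one
sudo zpool replace tank /dev/sdb /dev/sdf

# Check pool health and resilver progress
zpool status tank
```

“tank” is just the conventional example pool name from the ZFS documentation.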