Adding or changing storage

Greetings,
I have reached the maximum available space of my external storage on my NCP installation.
I have two questions about it, since I have a choice of how to solve this situation:

1) What is the procedure to replace the HD with a bigger one while keeping my configuration (without reinstalling everything)?
2) How can I add a second HD to NC and have it appear as a single storage device? (Basically, the idea is to get more available storage from a second disk while showing only the extra space.)
If that is not possible, how do I simply add an extra HD to NC?
Is there an official procedure?
Kind regards,
and thanks for your help

Speaking generally, you could image the storage device onto the bigger one, and then expand your partition and filesystem.
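A rough sketch of that approach (device names are hypothetical examples: the old disk as /dev/sda, the new one as /dev/sdb, with an ext4 data partition; verify yours with `lsblk` before running anything, as `dd` to the wrong target destroys data):

```shell
# 1) Clone the old disk onto the new one (run from a live system, disks unmounted)
sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync

# 2) Grow the data partition (here partition 1) to fill the new disk
sudo growpart /dev/sdb 1        # from the cloud-guest-utils package; gparted works too

# 3) Grow the filesystem to fill the partition (assuming ext4)
sudo resize2fs /dev/sdb1
```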

This can be done, but is not recommended because you are doubling your chance of a storage failure. Loss of any disk in the span would mean loss of the entire volume.

If you want to go that route, at a minimum you should look at a RAID-5 array, which requires at least three disks but allows a failed disk to be replaced without data loss.
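For illustration, a minimal RAID-5 sketch with mdadm (device names are examples only; creating the array erases the member disks, and the mount point is hypothetical):

```shell
# Build a 3-disk RAID-5 array from /dev/sdb, /dev/sdc, /dev/sdd
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /media/ncdata

# When a member disk fails, replace it without losing the volume:
sudo mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
sudo mdadm /dev/md0 --add /dev/sde   # new disk; the array rebuilds in the background
```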

Thanks @KarlF12.
Option 1) You mean dd or Clonezilla the HD and restore it to a bigger disk?
As for the expansion, should I do that with GParted on another system, or is there a tool within NC?

Option 2) No, maybe best not to go that way.
What about just adding a second storage disk?

Sure, Clonezilla should be fine.

As for the second storage disk, NC doesn’t support multiple data folders to my knowledge. And as I said, pooling disks without redundancy is asking for data loss. RAID is the proper method for doing this.


I seem to recall there was a possibility to add external storage like Google Drive… I need to refresh my memory, but I think that with the same technique it is possible to add a second HD.
That said, it is surely necessary to back up all data on a separate device in case of data loss.

I think NC will sooner or later move to a more scalable configuration.
I am surely not the only one filling up their external storage!
Don't you think so?
Thanks for the suggestions @KarlF12

It’s not so much a question of scalability as it is a technical issue. Any system faces the same issue when trying to add a disk to backend storage. It just has to do with how disks have to be formatted and mounted.

Even if NC added the ability to have multiple data folders and assign each user to one, you’d still have the same problem: a single failure would result in data loss and a downed server.

I think another possibility is to format the new HDD with the same file system and copy all files from the old HDD to the new HDD (same file structure). Then mount the new HDD in place of the old HDD. I think it works.
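A sketch of that copy approach (all devices and paths here are hypothetical examples: /dev/sdb1 as the new disk's partition, the old data mounted at /media/old):

```shell
sudo mkfs.ext4 /dev/sdb1                      # same filesystem as the old disk
sudo mkdir -p /media/new
sudo mount /dev/sdb1 /media/new

# -a preserves permissions, ownership, and timestamps; -HAX keeps hard links,
# ACLs, and extended attributes -- all important for a Nextcloud data directory
sudo rsync -aHAX --info=progress2 /media/old/ /media/new/

# Then mount the new disk at the old mount point (e.g. via /etc/fstab) and
# leave the Nextcloud data directory setting unchanged.
```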


I understand and agree.
Yet I hope for a future solution with easier scalability that will not cause any issues… or any other sort of solution that will eventually allow a system to grow more easily.
I understand your point.
Today we have to deal with what we have.
I love NC… I will try the solution you suggested.

Hi @devnull, thanks for chiming in.
That seems to be a similar solution…
either imaging and restoring to a bigger disk, or copying the data off, formatting the new disk with the same filesystem, and copying the data back…
I'll see which is the easier of the two solutions.
I will report back here and hopefully mark this as a solved issue/situation.

There are strategies for this, for example you can use LVM which allows you to easily add in new storage and extend existing partitions across new disks. But, even with that, you still need to have a reliable storage backend. In other words, at least in a production environment, you would add new arrays of redundant disks, not individual disks.
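For example, a minimal LVM sketch (the volume group "vg0", logical volume "data", new disk /dev/sdc, and ext4 filesystem are all assumptions; adjust to your layout):

```shell
sudo pvcreate /dev/sdc                       # prepare the new disk as a physical volume
sudo vgextend vg0 /dev/sdc                   # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/vg0/data     # grow the logical volume over the new space
sudo resize2fs /dev/vg0/data                 # grow the ext4 filesystem, online
```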

OK.
I am not in a production environment. Yet even for private use I consider the data just as important, if not more so!
In the end, we rely on these tools to keep most of our documents and photos.

Actually, I have a side question… maybe a very important one.
Q:
What is the official procedure for changing the external storage to a bigger one?
I just ordered a 4TB WD Red for NAS.
Making an image of my 1TB disk would require a spare disk to image to and then restore from.
Copying the contents directly from one disk to the new one would save me that spare disk (probably the preferred way).
I can't find documentation for an official procedure to migrate/change external storage.
Any idea?

Thanks guys for your help.
I had started a new post because I needed to know whether there was an officially supported procedure for a storage change/switch.
Between your suggestions and the other post, I finally got the result I wanted.
Basically, a copy of the structure to the new HD is the final solution.
A few considerations are needed before doing that:
A simple clone would not work, as the new disk is bigger than 2TB and MBR doesn't support that, so you need to create a GPT partition table.
Also, copying files takes longer and is riskier; a disk-to-disk clone is better.
In the end, Clonezilla with a small change in the advanced options is the solution.
I have written the details in my other post:
https://help.nextcloud.com/t/offial-procedure-to-substiture-external-storage/80339/9
I hope this will help others in the same situation.
Thanks again
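For reference, one way to sketch the MBR-to-GPT preparation on the target disk (/dev/sdb is a hypothetical example; this wipes the disk's partition structures, so verify the device first):

```shell
# An MBR partition table cannot address more than 2 TiB, so the new 4 TB disk
# needs a GPT label before the restored partition can use all of it.
sudo sgdisk --zap-all /dev/sdb   # wipe any old MBR/GPT structures
sudo sgdisk -o /dev/sdb          # write a fresh, empty GPT partition table
```

In Clonezilla itself, the relevant advanced (expert-mode) option is `-k1`, which recreates the partition table on the target proportionally instead of copying it byte-for-byte.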

ZFS is a rather interesting file system which allows storage to be used very flexibly and also manages the health of the data on the drives. Unlike RAID, which cannot detect problems when the controller starts throwing errors (not very likely, but not impossible), ZFS can. This link may be useful for understanding the various methods of disk and data management: https://computingforgeeks.com/raid-vs-lvm-vs-zfs-comparison/
Can someone please explain what OCC is? It appears to be some kind of command-line tool, but as much as the manual talks about it, it never seems to explain what it actually is.

ZFS is also a user-unfriendly resource hog. RAID is simple, effective, and reliable. Perhaps most importantly, a sysadmin who understands even just the basics of RAID knows exactly what to expect from it, such as the exact quantity and manner of disk failures a given array will tolerate.

A while ago, I worked with a Dell engineer who was setting up a hyper-converged Hyper-V cluster. He was telling me all about how it manages the data blocks and ensures there are distributed copies of everything. All very fancy high-tech stuff. Guess what he told me when I asked him how many disk failures it will tolerate? “It depends.” He went on to explain that it depends on the disk quorum and which disks hold the copies of a particular data block.

In other words, the exact conditions under which the system will fail and result in data loss are a variable. Can’t say I’m a fan.

OCC is a command-line interface for Nextcloud. Here is the relevant documentation chapter.

https://docs.nextcloud.com/server/20/admin_manual/configuration_server/occ_command.html
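In practice, occ is run from the Nextcloud installation directory as the web server user (the `www-data` user and `/var/www/nextcloud` path below are common Debian/Ubuntu defaults; adjust them for your install):

```shell
cd /var/www/nextcloud
sudo -u www-data php occ status             # show version and install state
sudo -u www-data php occ list               # list all available occ commands
sudo -u www-data php occ files:scan --all   # e.g. rescan user files after a storage change
```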

Thanks KarlF12. I could not find information about occ in the manual, but you pointed me to a wealth of it.

One of the best sources for drive lifetimes is Backblaze's Hard Drive Stats, which you may find useful. Surely filesystem failure is a combination of the underlying hardware and faults and omissions in the software, so what the Dell engineer said was probably correct: it depends on how the data is handled.

Your comment about the unpleasant resource demands of ZFS made me want to dig around a little. Certainly ZFS is initially an unfamiliar beast which requires a different viewpoint, with, as far as I know, no GUI to deal with it (I would not be surprised to find I am wrong about this). But to be honest, when I have used it for NAS (network-attached storage) I have not found it to be a resource hog; rather, I have found it extremely useful and adaptable. Certainly it takes some work to figure out.

What it is good for, in my mind, is the ease with which one can add storage to a pool of devices. Using RAID or RAID-Z pairs as elements (vdevs) in a pool allows additional storage to be added, with the array of devices in the pool treated as a single element. Any devices that fail can be replaced and "resilvered" automatically.

I would be interested in any information you may have about ZFS being a resource hog. Perhaps you mean that it requires a lot of memory, which is true. For those interested in using ZFS from a user's perspective, I would suggest looking at "ZFS Basics - An introduction to understanding ZFS" (kbDone), and, for a deeper dive, "ZFS 101—Understanding ZFS storage and performance" (Ars Technica).
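The pool-growing workflow described above can be sketched roughly like this (the pool name "tank" and all device names are hypothetical examples):

```shell
sudo zpool create tank mirror /dev/sdb /dev/sdc   # create a pool from a mirrored pair
sudo zpool add tank mirror /dev/sdd /dev/sde      # later: add another mirror vdev; the pool grows
sudo zpool status tank                            # check the health of all vdevs
sudo zpool replace tank /dev/sdc /dev/sdf         # swap a failed disk; resilvering is automatic
```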

How to add a second HD??

Hello, how do I add a second HD, please? Thanks

You’ll need to start your own thread, provide details about your setup, and explain what it is you’re trying to do.

ok, thanks a lot