Advice on a new setup?

What an exciting place this is! After watching all the drama with OC and NC, I finally, slowly, started entering the fray a week or so back. After multiple install passes - from bare hardware to the initial login on the admin configuration web page - I'm confident with the procedure, and I think I'm ready to move on to the next stage. This is where I hope you all come in.

I have an interesting piece of hardware to use this on. It's a twin-Xeon machine with 32 GB of RAM. It has an 80 GB SSD and a ~34 TB RAID 6 array (composed of 19 2 TB SATA3 disks). Both the SSD and the data array are on an Adaptec SAS card and are empty other than Ubuntu and NC.

I'm interested in how best to make use of the disk space and speed. Currently I've installed Ubuntu 16.04 with Apache, MariaDB, and PHP 7, all on the 80 GB SSD, with plans to put the data store on a single large ext4 partition on the RAID 6 array.
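In case it helps the discussion, the formatting step I have in mind looks roughly like this (assuming the array shows up as /dev/sdb - the device name is a guess and will vary):

sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 -O 64bit -L ncdata /dev/sdb1

(The -O 64bit matters here because ext4 volumes larger than 16 TiB need the 64-bit feature; newer e2fsprogs versions enable it automatically.)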

BUT, is there a better way?

I can split the array into x number of mirrors, striped mirrors, or multiple RAID 6 arrays. I would assume the OS and the database should remain on the SSD?

At this point Iā€™m open to all suggestions.

The only other thing I'd like to mention is that I'd like to be able to use space on the data array as storage for disk-based backups until (if?) the time comes when NC needs the space. It would be written to by a mix of 'nix and wintel machines. So, is that separate arrays? Or just a separate partition on the one large array?
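(Whichever way the disks end up carved up, my rough plan for the mixed clients is a Samba share on top of the backup space - something like this in /etc/samba/smb.conf, with the share name and path just made-up examples:

[backups]
   path = /mnt/array/backups
   read only = no
   valid users = backupuser

The 'nix boxes could use the same share, or plain rsync over SSH.)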

What say you?

I'm not hugely experienced with RAID; however, the way I would do it would be, yes, to have the OS on the SSD (as it's more frequently accessed, and more likely to be deliberately wiped and replaced than your data) and your data kept secure on your RAID array.

I'm not really sure about the whole using-the-array-for-disk-based-backups thing, to be honest. I'll be curious to see how your setup goes. That's a seriously nice piece of hardware, btw!

Why ext4? Try btrfs instead :slight_smile:

I considered it, but I kept running into statements like "Officially, the next-generation file system is still classified as unstable" and "There is still a good amount of work left for btrfs, as not all features are yet implemented and performance is a little sluggish when compared to ext4". Besides, its primary benefits(?) of a continuous file system across multiple hard drives and data mirroring don't appeal to me, because when you span data across individual drives without parity you multiply the chance of a total failure resulting in data loss (vs. a single drive) by the number of drives spanned. As for its mirroring ability, OS or software mirroring doesn't yet come close to dedicated hardware (especially in the case of my 71605Q). So I don't really see where the trade-off of a not-yet-finalized file system is offset by its benefits, at least in my case.

BUT, it's very possible I could be missing a key point. That is why I'm here.

Ok, another question. Any thoughts on the best method to configure the mount point?

I was considering something like this (assuming I stay with one large RAID 6 array):
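An /etc/fstab entry along these lines - the UUID and mount point here are placeholders, not my real values:

UUID=<array-uuid>  /mnt/ncdata  ext4  defaults,noatime,errors=remount-ro  0  2

then mkdir -p /mnt/ncdata and a sudo mount -a to check it before pointing NC's data directory at it.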

FUD :slight_smile:

BTRFS is shipped by default in the openSUSE and Fedora distros (it isn't always enabled by default, but it's supported out of the box).

And BTRFS is not only for data mirroring; it offers compression, and it is much faster than ext4 :slight_smile:
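Transparent compression, for example, is just a mount option (lzo shown here; zlib is the other choice - the device and mount point are examples):

sudo mount -o compress=lzo /dev/dm-5 /mnt/data

or put the same compress=lzo in the fstab options.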

Nextcloud published some talks from the last NC conference on their YouTube channel:

There is also a talk about scalability where some large setups are presented and a Q&A session which could be interesting for you.

Interesting! Since it's currently empty, I figured I'd benchmark it with ext4 and then with BTRFS.

For anyone following this thread and wanting to play along:
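If you want numbers of your own, a quick sequential write/read pass along these lines will do (the path and sizes are just examples; drop the page cache before the read so it isn't served from RAM):

sudo dd if=/dev/zero of=/mnt/data/bench.tmp bs=1M count=10240 conv=fdatasync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
sudo dd if=/mnt/data/bench.tmp of=/dev/null bs=1M
sudo rm /mnt/data/bench.tmp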

When attempting to format after the ext4 benchmark, I received:

Error creating file system: Cannot run mkfs: cannot spawn 'mkfs.btrfs -L
"WD30" /dev/dm-5': Failed to execute child process "mkfs.btrfs" (No such
file or directory). This is a known bug (Bug 1090460).

To install:
sudo apt-get update
sudo apt-get install btrfs-tools
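With that in place, the format from the error message goes through:

sudo mkfs.btrfs -L "WD30" /dev/dm-5

(add -f if it complains about the existing ext4 filesystem from the previous benchmark).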

Now onto the benchmarks!

Array hardware configuration again: 19 × 2 TB 3G SATA drives on an Adaptec 71605Q in a RAID 6 configuration.

ext4: [benchmark screenshot]

BTRFS: [benchmark screenshot]

So what am I missing here?

This controller supports a feature similar to L2ARC called MaxCache. It uses the SSD as additional cache for the most commonly accessed data.

As for JBOD, the controller supports it, but I'm confused as to how and why the OS would come close to the performance or safety of hardware support.
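For concreteness, I take it the suggestion is to set the controller to JBOD and build the array in software with mdadm, something like (device names assumed):

sudo mdadm --create /dev/md0 --level=6 --raid-devices=19 /dev/sd[b-t]

and then let md compute the parity on the CPU?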

I would also like clarification on having the OS and data on the same physical disk. For the sake of cleanliness in the event of a recovery, it seems like it would be much easier if the data array weren't intertwined with the OS. But again, that is how it is in the Windows world. So, since I don't know what I don't know :wink: can you clarify or provide a link?

This is wonderful stuff. You've established redundancy. I'll cede the point on performance for the sake of furthering the discussion. What happens in the event of a corrupted OS? I'm assuming you don't do full-disk backups, so how do you separate the OS from the data in backup and recovery scenarios?
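The closest thing I can picture is archiving the root filesystem while refusing to cross mount points (paths are examples):

sudo tar --one-file-system -czpf /mnt/array/backups/os-$(date +%F).tar.gz /

so the data array's mount point never ends up inside the OS backup. Is that roughly the idea?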

I assume that RAIDZ2/3 is susceptible to URE recovery issues just as hardware RAID is, since it's a mathematical property of the disks themselves?
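(Back-of-the-envelope, assuming the oft-quoted consumer-drive URE rate of 1 in 10^14 bits:

bits read per rebuild ≈ 18 surviving disks × 2 TB × 8 bits/byte ≈ 2.9 × 10^14 bits
expected UREs per rebuild ≈ 2.9 × 10^14 ÷ 10^14 ≈ 3

which is why a single-parity rebuild at this size worries me, and why the second parity of RAID 6/RAIDZ2 matters while a rebuild is running.)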

Ok. I'm in - at least for a test. I had decided to go with an 18-disk RAID 50 with 1 hot spare for the storage partition, but I'm a sucker for new experiences.

Any idea on source documentation to get Ubuntu/'nix installed with a ZFS pool and boot environment? I have ZERO experience with FreeBSD.
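(From what I've read, Ubuntu 16.04 ships ZFS in the main archive, so a data pool at least shouldn't need FreeBSD - something like this, with device names assumed:

sudo apt-get install zfsutils-linux
sudo zpool create -o ashift=12 tank raidz2 /dev/sd[b-s] spare /dev/sdt

Booting *from* ZFS looks like the part that still needs a manual HOWTO on Ubuntu.)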

I have to head to a meeting, but I'll be able to respond in an hour or two, unless it's a question I can answer quickly on my phone (hardware config, etc.).