What an exciting place this is! After watching all the drama with OC and NC, I finally, slowly, started entering the fray a week or so back. After multiple install passes - from bare hardware to the initial login on the admin configuration web page - I'm confident with the procedure and I think I'm ready to move on to the next stage. This is where I hope you all come in.
I have an interesting piece of hardware to use this on. It's a twin-Xeon machine with 32 GB of RAM. It has an 80 GB SSD and a ~34 TB RAID 6 array (composed of 19 2 TB SATA3 disks). Both the SSD and the data array are on an Adaptec SAS card and are empty other than Ubuntu and NC.
I'm interested in how best to make use of the disk space and speed. Currently I've installed Ubuntu 16.04 with Apache, MariaDB and PHP 7, all on the 80 GB SSD, with plans to put the data store on a single large ext4 partition on the RAID 6 array.
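For anyone curious what that plan looks like in practice, here's a minimal sketch of the "single large ext4 partition handed to NC" setup. The device name and mountpoint are assumptions - substitute your own:

```shell
# Assumes the array appears as /dev/sdb; check with lsblk first.
sudo mkfs.ext4 -L ncdata -m 0 /dev/sdb        # -m 0: no root-reserved blocks on a pure data disk
sudo mkdir -p /mnt/ncdata

# Mount by label so the entry survives device reordering
echo 'LABEL=ncdata /mnt/ncdata ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount /mnt/ncdata

# Nextcloud's web server user needs ownership of the data directory
sudo chown www-data:www-data /mnt/ncdata
# ...then point 'datadirectory' in Nextcloud's config.php at /mnt/ncdata
```

The `noatime` option is just a common tweak to skip access-time writes on a storage volume; leave it out if you rely on atime.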
BUT, is there a better way?
I can split the array into some number of mirrors, striped mirrors, or multiple RAID 6 arrays. I would assume the OS and the database should remain on the SSD?
At this point I'm open to all suggestions.
The only other thing I'd like to mention is that I'd like to be able to use space on the data array as storage for disk-based backups until (if?) the time comes when NC needs the space. It would be written to by a mix of 'nix and wintel machines. So, does that mean separate arrays? Or just a separate partition on the one large array?
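If it ends up as just a directory on the one large array, one common approach is to export the same path two ways - SMB for the wintel machines, NFS for the 'nix ones. A hypothetical sketch; every path, share name, and subnet below is an assumption:

```shell
# One backup directory on the big array, shared two ways
sudo mkdir -p /mnt/ncdata/backups

# SMB for the wintel machines -- append to /etc/samba/smb.conf:
#   [backups]
#      path = /mnt/ncdata/backups
#      read only = no
#      valid users = @backup

# NFS for the 'nix machines -- append to /etc/exports:
#   /mnt/ncdata/backups 192.168.1.0/24(rw,sync,no_subtree_check)

sudo systemctl restart smbd nfs-kernel-server
```

That keeps the backups out of Nextcloud's data directory proper while still borrowing the array's free space.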
I'm not hugely experienced with RAID, but yes, the way I would do it would be to have the OS on the SSD (as it's more frequently accessed, and more likely to be deliberately wiped and replaced than your data) and your data kept secure on your RAID array.
I'm not really sure about the whole using-the-array-for-disk-based-backups thing, to be honest. I'll be curious to see how your setup goes. That's a seriously nice piece of hardware, btw!
I considered it. But I kept seeing statements like "Officially, the next-generation file system is still classified as unstable" and "There is still a good amount of work left for btrfs, as not all features are yet implemented and performance is a little sluggish when compared to ext4". Besides, its primary benefits(?) of "a continuous file system across multiple hard drives" and "data mirroring" don't appeal to me, because when you span data across individual drives without parity you multiply the chances of a total failure resulting in data loss (vs a single drive) by the number of drives spanned. As for its mirroring ability, OS or software mirroring doesn't yet come close to dedicated hardware (especially in the case of my 71605Q). So I don't really see how the trade-off of a not-yet-finalized file system is offset by its benefits, at least in my case.
BUT, it's very possible I could be missing a key point. That is why I'm here.
Interesting! Since it's currently empty, I figured I'd benchmark it with ext4 and then with BTRFS.
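For anyone wanting to run the same comparison, here's a quick-and-dirty sequential throughput check to run once on ext4 and again after reformatting as btrfs. It's a rough sketch, not a rigorous benchmark (a real tool like fio would be better); the target directory is an assumption - point it at the mounted array:

```shell
# Rough sequential write/read check; set TESTDIR to the array's mountpoint
TESTDIR=${TESTDIR:-.}

# Sequential write; conv=fdatasync forces data to disk before dd reports throughput
dd if=/dev/zero of="$TESTDIR/bench.img" bs=1M count=256 conv=fdatasync 2>&1 | tail -n1

# Sequential read back (drop caches first as root for an honest number)
sync
dd if="$TESTDIR/bench.img" of=/dev/null bs=1M 2>&1 | tail -n1

rm -f "$TESTDIR/bench.img"
```

Run it a few times and ignore the first pass; caching makes single runs misleading.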
For anyone following this thread and wanting to play along:
When attempting to format after the ext4 benchmark, I received:
Error creating file system: Cannot run mkfs: cannot spawn 'mkfs.btrfs -L
"WD30" /dev/dm-5': Failed to execute child process "mkfs.btrfs" (No such
file or directory). This is a known bug (Bug 1090460).
To install:
sudo apt-get update
sudo apt-get install btrfs-tools
Now onto the benchmarks!
Array hardware configuration again: 19 2 TB 3 Gb/s SATA drives on an Adaptec 71605Q in a RAID 6 configuration.
This controller supports a feature similar to L2ARC called MaxCache. It uses the SSD as an additional cache for the most commonly accessed data.
As far as JBOD goes, the controller supports it, but I'm confused as to how and why the OS would come close to the performance, or safety, of hardware support.
I would also like clarification on having the OS and data on the same physical disk. For the sake of cleanliness in the event of a recovery, it seems like it would be much easier if the data array weren't intertwined with the OS. But again, that is how it is in the Windows world. So, since I don't know what I don't know, can you clarify or provide a link?
This is wonderful stuff. You've established redundancy. I'll cede the point on performance for the sake of furthering the discussion. What happens in the event of a corrupted OS? I'm assuming you don't do full-disk backups, so how do you separate the OS from the data in backup and recovery scenarios?
I assume that RAIDZ2/3 is susceptible to URE recovery issues just as hardware RAID is, since it's a mathematical property of the disks?
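To put a rough number on why UREs matter at this scale: a back-of-envelope calculation of the odds of hitting at least one URE while reading the 17 surviving data disks of a 19-disk RAID 6 that has already lost both parity disks' worth of redundancy. The 1-in-1e14-bits rate is the common consumer-drive spec sheet figure, an assumption here:

```shell
# P(at least one URE) = 1 - (1 - rate)^bits_read, with rate = 1e-14 per bit
awk 'BEGIN {
  bits = 17 * 2e12 * 8                       # 17 x 2 TB drives, in bits
  p    = 1 - exp(bits * log(1 - 1e-14))      # exact form of 1-(1-r)^n
  printf "P(>=1 URE during full-array read) ~= %.0f%%\n", p * 100
}'
# prints roughly 93%
```

Enterprise drives rated at 1e15 bring that down by roughly an order of magnitude, which is part of why double parity (RAID 6 / RAIDZ2) exists in the first place: a URE during a single-disk rebuild can still be corrected by the second parity.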
Ok, I'm in - at least for a test. I had decided to go with an 18-disk RAID 50 with one hot spare for the storage partition, but I'm a sucker for new experiences.
Any idea on source documentation for getting Ubuntu/'nix installed with a ZFS pool and boot environment? I have ZERO experience with FreeBSD.
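One note while waiting for pointers: on Ubuntu 16.04, ZFS is in the main archive (zfsutils-linux), so a data-only pool needs no FreeBSD at all - root-on-ZFS is a much more manual process, and keeping root on the SSD's ext4 is the simpler path. A hedged sketch of a data pool; the device names are placeholders, and the controller would need to present the disks individually (HBA/JBOD mode) for ZFS to manage them:

```shell
sudo apt-get update
sudo apt-get install -y zfsutils-linux

# raidz2 = two-disk parity, roughly analogous to RAID 6.
# /dev/disk/by-id names survive reboots better than /dev/sdX.
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd   # ...list all data disks
sudo zpool add tank spare /dev/sdt                          # one hot spare
sudo zfs create -o mountpoint=/mnt/ncdata tank/nextcloud

sudo zpool status tank                                      # verify the layout
```

A dataset per use (one for NC data, one for the backup share) is cheap in ZFS and keeps snapshots separable.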
I have to head to a meeting, but I'll be able to respond in an hour or two. Unless it's a question I can answer quickly on my phone (hardware config, etc.).