What an exciting place this is! After watching all the drama with OC and NC, I finally, slowly, started entering the fray a week or so back. After multiple install passes - from bare hardware to the initial login on the admin configuration web page - I'm confident with the procedure and I think I'm ready to move on to the next stage. This is where I hope you all come in.
I have an interesting piece of hardware to use this on. It's a twin-Xeon machine with 32 GB of RAM. It has an 80 GB SSD and a ~34 TB RAID 6 array (composed of 19 2 TB SATA3 disks). Both the SSD and the data array are on an Adaptec SAS card and are empty other than Ubuntu and NC.
I'm interested in how best to make use of the disk space and speed. Currently I've installed Ubuntu 16.04 with Apache, MariaDB, and PHP 7, all on the 80 GB SSD, with plans to put the data store on a single large ext4 partition on the RAID 6 array.
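For what it's worth, here's a minimal sketch of what that single-large-ext4 plan might look like in practice. The device name `/dev/sdb`, the mount point `/srv/ncdata`, and the `www-data` user are assumptions on my part - verify the actual device with `lsblk` before touching anything:

```shell
#!/bin/bash
# Sketch only -- run the function by hand as root on the real machine.
# /dev/sdb and /srv/ncdata are assumed names; check `lsblk` first.
setup_data_array() {
    local dev="${1:?usage: setup_data_array /dev/sdX}"
    mkfs.ext4 -L ncdata "$dev"               # one big ext4 filesystem on the array
    mkdir -p /srv/ncdata
    echo "LABEL=ncdata /srv/ncdata ext4 defaults,noatime 0 2" >> /etc/fstab
    mount /srv/ncdata
    chown www-data:www-data /srv/ncdata      # Apache's user on Ubuntu
    chmod 750 /srv/ncdata
}
# Example (destructive!): setup_data_array /dev/sdb
```

You'd then point Nextcloud's data directory at `/srv/ncdata`, either in the install wizard or via `'datadirectory' => '/srv/ncdata'` in `config.php`.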
BUT, is there a better way?
I can split the array into some number of mirrors, or striped mirrors, or multiple RAID 6 arrays. I would assume the OS and the database should remain on the SSD?
At this point I’m open to all suggestions.
The only other thing I'd like to mention is that I'd like to be able to use space on the data array as storage for disk-based backups until (if?) the time comes when NC needs the space. It would be written to by a mix of *nix and Wintel machines. So, is that separate arrays? Or just a separate partition on the one large array?
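If it ends up being one big array serving both roles, the separate-partition option could be as simple as two GPT partitions on it. A hedged sketch - `/dev/sdb` and the 70/30 split are assumed values, not recommendations:

```shell
#!/bin/bash
# Sketch only -- invoke by hand as root; /dev/sdb and the split point are assumptions.
split_array() {
    local dev="${1:?usage: split_array /dev/sdX}"
    parted --script "$dev" \
        mklabel gpt \
        mkpart ncdata  ext4 0%  70% \
        mkpart backups ext4 70% 100%
    mkfs.ext4 -L ncdata  "${dev}1"           # Nextcloud data store
    mkfs.ext4 -L backups "${dev}2"           # backup target for the other machines
}
# Example (destructive!): split_array /dev/sdb
```

The backup partition could then be exported over Samba for the Wintel boxes and NFS/rsync for the *nix ones. One thing to keep in mind either way: a separate partition still shares the same parity domain, so losing the array loses both the data store and the backups, whereas separate arrays would not.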
I'm not hugely experienced with RAID, but the way I would do it, yes, is to have the OS on the SSD (as it's more frequently accessed, and more likely to be deliberately wiped and replaced than your data) and keep your data secure on your RAID array.
I’m not really sure about the whole using the array for disk based backups thing, to be honest. I’ll be curious to see how your setup goes. That’s a seriously nice piece of hardware, btw!
I considered it. But I kept seeing statements like "Officially, the next-generation file system is still classified as unstable" and "There is still a good amount of work left for btrfs, as not all features are yet implemented and performance is a little sluggish when compared to ext4". Besides, its primary benefits(?) of a "continuous file system across multiple hard drives" and "data mirroring" don't appeal to me, because when you span data across individual drives without parity, you multiply the chances of a total failure resulting in data loss (vs. a single drive) by the number of drives spanned. As far as its mirroring ability goes, OS or software mirroring doesn't yet come close to dedicated hardware (especially in the case of my 71605Q). So I don't really see where the trade-off of a not-yet-finalized file system is offset by its benefits, at least in my case.
BUT, it’s very possible I could be missing a key point. That is why I’m here.
This controller supports a feature similar to L2ARC called MaxCache. It uses the SSD as an additional cache for the most commonly accessed data.
As far as JBOD goes, the controller supports it, but I'm confused as to how and why the OS would come close to the performance, or safety, of hardware support.
I would also like clarification on having the OS and data on the same physical disk. For the sake of cleanliness in the event of a recovery, it seems like it would be much easier if the data array weren't intertwined with the OS. But then again, that's how it is in the Windows world. So, since I don't know what I don't know, can you clarify or provide a link?
This is wonderful stuff. You've established redundancy. I'll cede the point on performance for the sake of furthering the discussion. What happens in the event of a corrupted OS? I'm assuming you don't do full-disk backups, so how do you separate the OS from the data in backup and recovery scenarios?
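On the OS-vs-data separation question: with the data already on its own array, one common pattern is to treat the OS as rebuildable and back up only the small mutable pieces - the database and Nextcloud's config. A hedged sketch, assuming a default `/var/www/nextcloud` install and a database named `nextcloud` (the `occ maintenance:mode` command is Nextcloud's own tool for pausing clients):

```shell
#!/bin/bash
# Sketch only -- paths (/var/www/nextcloud) and the DB name are assumptions.
backup_nc_state() {
    local dest="${1:?usage: backup_nc_state /path/to/backup/dir}"
    local occ="sudo -u www-data php /var/www/nextcloud/occ"
    $occ maintenance:mode --on                                    # pause clients during the dump
    mysqldump --single-transaction nextcloud > "$dest/nextcloud-db.sql"
    rsync -a /var/www/nextcloud/config/ "$dest/config/"           # config.php etc.
    $occ maintenance:mode --off
}
# Example: backup_nc_state /srv/backups/nc-$(date +%F)
```

Recovery from a corrupted OS then becomes: reinstall Ubuntu + LAMP + Nextcloud, restore the config and database dump, and remount the untouched data array.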
I assume that RAID-Z2/3 is susceptible to URE recovery issues just as hardware RAID is, since it's a disk-level mathematical issue?