Have you tried working with Proxmox VE (plus ZFS) or some other alternative to FreeBSD's Jails?
Running a Docker container for each process/app you want to isolate can feel like taking a sledgehammer to crack a nut.
Why do this instead of OMV?
Sorry, I need >1 TB for pictures, but that can still be connected via USB, can't it?
x86 means you have a wide choice of images, rather than the vendor-specific U-Boot custom images that are often required elsewhere.
There is a big picture of what I have x4 of???
Yeah, for media, i.e. big, relatively static files of the media-server ilk, USB plus SnapRAID is a decent solution.
USB RAID is just seriously bad and not a good idea, so much so that OMV deliberately doesn't support it, to save on forum posts.
A JMicron 5-port SATA controller should work on any device that has PCIe x4, like the RK3399 or likely the H2, but I have never tested it.
Considering that the Raspberry Pi 4 recently gained support in Nextcloud Pi, I say that the Raspberry Pi will be a pretty tough act to follow for a cheap, decently reliable SBC for running Nextcloud on. Especially when you attach an SSD in an enclosure to the USB 3, and run the SQL DB there, as well as store the data dir there (and the ncp utilities make it easy to move these).
Yes, this is true, I strongly agree here. RAID was never, ever meant to be used when a lackadaisical USB bus is in the position of middleman, which can possibly take its sweet time responding to the OS, should, say, a USB mouse (or some other USB peripheral like that) get plugged in suddenly.
RAID should only be used when proper SATA (which is designed for DATA only, not mice, printers, and suchlike, and is high speed, low latency, and highly reliable) is what connects the disks to the PCI bus. Or something roughly equivalent to SATA and PCI, which are highly mature in the Linux kernel.
It’s advisable that home hobbyists leave RAID alone altogether, as it can be more complicated than is really practical. If you’re a home user, it’s best to start simply, with one large, single disk, and maybe work your way up after gaining experience just administering that (and if you want a second disk, add it for having a backup of the first disk).
RAID on an SBC is also asking for trouble, I say. RAID on a plain-Jane amd64 PC (with a good quality mobo, like an ASUS, and not a cheap, flaky mobo, which might freeze under heavy load) has much better odds of having no filesystem stability issues over the long term.
If you don’t want to believe me here, please go scare some sense into yourself by reading the Armbian forums, which have plenty of filesystem instability horror stories when kernel support just wasn’t mature enough for some SBC, or some cheap controller chips on all-too-cheap SBCs flaked out, even when kernel support was stable enough.
I’ve learned the hard way not to go too cheap when choosing an SBC. The Raspberry Pi is a safe bet for a stable SBC, and that’s the only SBC maker I personally trust 100% and would therefore recommend at this time. Spending just that little bit more to get something you know you can trust is well worth it. Don’t take risks buying hardware for a server where deep down you know it’s a gamble.
If you want something cheap, rack mountable (which will be noisy, due to power supply fans), and used, Noah Chelliah of the Ask Noah show recently recommended the Dell R710 as an easy-to-find, go-to server, great for tinkering with, especially if you want to try ZFS RAID or something. Listen to his recent podcast here for more info on that:
I took a deep breath and dived deeper into that topic (but I still don’t understand what I mentioned in my previous post; maybe @Stuart_Naylor could explain it).
I have learned a lot more about the architecture and its impact on how easily, reliably, and flawlessly I can deploy and enjoy the hardware.
I list my comments/findings below:
but it saved me once, as one disk in my RAID 1 failed recently.
Comparing the RAID configurations, I have to say RAID 5 is enough for me (I’m aware of the performance and risks), as the risk of two disks failing at exactly the same time is really low in a home-user environment. Second, important data is backed up.
THE FOLLOWING PARAGRAPH NEEDS ADVICE!
So the only scenario I can think of is HDD RAID with an SSD cache. In the case of the H2, you may want to put either the SSD or the HDDs on PCIe; otherwise you might be bottlenecked by the chipset doing all the read and write operations (see the corresponding block diagram).
Maybe start with eMMC, as it is cheap, and extend with a budget SSD as suggested further above.
For those who face the same challenge of coping with all these options but get lost in all the interface standards and their multiple versions, as I did at the beginning of my dive, I list the sources that helped me in this regard below:
It is the standard for HDDs and first-generation SSDs.
SATA is the current computer bus interface for connecting a hard drive, SSD, or optical drive to the rest of the computer. SATA replaced the older PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Since its introduction there have been three main revisions, each doubling the bandwidth of the previous one and allowing for extra advanced features while maintaining the same physical connector.
SATA 2 connectors on a computer motherboard, all but two with cables plugged in. Note that there is no visible difference, other than the labeling, between SATA 1, SATA 2, and SATA 3 cables and connectors.
Standardized in 2004, eSATA ( e standing for external) provides a variant of SATA meant for external connectivity. It uses a more robust connector, longer shielded cables, and stricter (but backward-compatible) electrical standards. The protocol and logical signaling (link/transport layers and above) are identical to internal SATA.
The physical dimensions of the mSATA connector are identical to those of the PCI Express Mini Card interface, but the interfaces are electrically not compatible; the data signals (TX±/RX± SATA, PETn0 PETp0 PERn0 PERp0 PCI Express) need a connection to the SATA host controller instead of the PCI Express host controller.
An mSATA SSD on top of a 2.5-inch SATA drive
PCI expansion cards are any kind of hardware extending the functionality of a PC; they are plugged into a conventional PCI slot.
In computing, the expansion card, expansion board, adapter card or accessory card is a printed circuit board that can be inserted into an electrical connector, or expansion slot, on a computer motherboard, backplane or riser card to add functionality to a computer system via the expansion bus.
Conventional PCI, often shortened to PCI, is a local computer bus for attaching hardware devices in a computer. PCI is an abbreviation for Peripheral Component Interconnect and is part of the PCI Local Bus standard.
Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993. The PCI bus was introduced in 1991 as a replacement for ISA. The standard (now at version 3.0) is found on PC motherboards to this day. The PCI standard supports bus bridging: as many as ten daisy chained PCI buses have been tested. Cardbus, using the PCMCIA connector, is a PCI format that attaches peripherals to the Host PCI Bus via PCI to PCI Bridge. Cardbus is being supplanted by ExpressCard format.
PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers’ graphics cards, hard drives, SSDs, Wi-Fi and Ethernet hardware connections.
Peripheral Component Interconnect Express, or PCIe, is a physical interconnect for motherboard expansion. Normally this is the connector slot you plug your graphics card, network card, sound card, or for storage purposes, a RAID card into. PCIe was designed to replace the older PCI, PCI-X, and AGP bus standards and to allow for more flexibility for expansion.
Improvements include higher maximum bandwidth, lower I/O pin count and smaller physical footprint, better performance-scaling, more detailed error detection and reporting, and hot-plugging.
The physical connector on the motherboard typically allows for up to 16 lanes for data transfer. A PCIe device that is an x4 device can fit into a PCIe x4 slot up to an x16 slot and still function. PCIe 1.0 allowed for 250MB/s per lane, PCIe 2.0 allows for 500MB/s per lane and the newest PCIe 3.0 allows for 1GB/s per lane.
However, in real world throughput PCIe 2.0 allows for around 400MB/s due to its 8b/10b encoding scheme, while PCIe 3.0 allows for 985MB/s due to its improved 128b/130b encoding scheme.
With that, multiplying the lane speed by the number of lanes gives us a theoretical maximum speed for that slot. Cards are generally backward compatible and PCIe is full-duplex (data goes both ways at one time, unlike SATA).
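To make the arithmetic concrete, here is a minimal Python sketch (purely illustrative; the per-lane MB/s figures are the approximate real-world numbers quoted above, not exact spec values) that multiplies the per-lane speed by the lane count:

```python
# Approximate real-world per-lane throughput in MB/s, as quoted above.
PER_LANE_MB_S = {
    "PCIe 1.0": 250,
    "PCIe 2.0": 400,   # ~500 MB/s nominal, ~400 MB/s real world due to 8b/10b encoding
    "PCIe 3.0": 985,   # 128b/130b encoding
}

def slot_bandwidth_mb_s(generation: str, lanes: int) -> int:
    """Theoretical maximum for a slot, per direction (PCIe is full-duplex)."""
    return PER_LANE_MB_S[generation] * lanes

for gen in PER_LANE_MB_S:
    for lanes in (1, 2, 4, 16):
        print(f"{gen} x{lanes}: ~{slot_bandwidth_mb_s(gen, lanes)} MB/s")
```

So, for example, a PCIe 3.0 x4 slot tops out at roughly 4 GB/s per direction, which is why x4 is the sweet spot for NVMe SSDs.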
M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for internally mounted computer expansion cards and associated connectors. It replaces the mSATA standard, which uses the PCI Express Mini Card physical card layout and connectors.
M.2 can run an SSD over SATA (different than mSATA) or PCIe. The difference between M.2 SATA and M.2 PCIe can be discerned by their key notches…
M.2’s more flexible physical specification allows different module widths and lengths, and, paired with the availability of more advanced interfacing features, makes the M.2 more suitable than mSATA for solid-state storage applications in general and particularly for the use in small devices such as ultrabooks or tablets.
The M.2 specification provides up to four PCI Express lanes and one logical SATA 3.0 (6 Gbit/s) port, and exposes them through the same connector so both PCI Express and SATA storage devices may exist in the form of M.2 modules. Exposed PCI Express lanes provide a pure PCI Express connection between the host and storage device, with no additional layers of bus abstraction. The PCI-SIG M.2 specification, in its revision 1.0 as of December 2013, provides detailed M.2 specifications.
The M.2 standard is an improved revision of the mSATA connector design. It allows for more flexibility in the manufacturing of not only SSDs but also Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, Wireless Gigabit Alliance (WiGig), and wireless WAN (WWAN) cards.
On the consumer end, SSDs especially benefit due to the ability to have double the storage capacity of an equivalent mSATA device. Furthermore, with a smaller and more flexible physical specification, together with more advanced features, M.2 is more suitable for solid-state storage applications in general. The form factor supports one SATA port at up to 6 Gb/s or 4 PCIe 3.0 lanes at 4 GB/s.
However, not everything is quite so rosy with M.2. Unlike SATA drives - where every drive is the same physical size and uses the same cables - M.2 allows for a variety of physical dimensions, connectors, and even multiple logical interfaces. To help our customers understand the nuances of M.2 drives we decided to publish this overview of the current M.2 specifications.
Physical size and connectors
Unlike SATA drives, M.2 allows for a variety of physical sizes. Right now, all M.2 drives that are intended for use in PCs are 22mm wide, but they come in a variety of lengths. To make it easier to tell which drives can be mounted on a motherboard or PCI-E card, the width and length of both the drive and slot is usually expressed as a single number that combines the two dimensions. For example, a drive that is 22mm wide and 80mm long would be listed as a 2280 (22mm x 80mm). Common lengths for M.2 drives and mounting positions right now are 30mm (2230), 42mm (2242), 60mm (2260), 80mm (2280), and 110mm (22110).
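As a small illustration of the naming scheme, here is a hypothetical helper that splits a size code like 2280 into its width and length (the list of common sizes is just the one given above):

```python
# Common M.2 size codes mentioned above.
COMMON_M2_SIZES = ("2230", "2242", "2260", "2280", "22110")

def decode_m2_size(code: str) -> tuple[int, int]:
    """Split an M.2 size code into (width_mm, length_mm).
    PC drives are currently 22 mm wide, so the first two digits are the width
    and the remaining digits are the length (e.g. '22110' -> 22 mm x 110 mm)."""
    return int(code[:2]), int(code[2:])

for code in COMMON_M2_SIZES:
    width, length = decode_m2_size(code)
    print(f"{code}: {width} mm x {length} mm")
```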
In addition, there are two types of sockets for M.2: one with a “B key” and one with an “M key”.
M.2 socket types
A B+M keyed drive (left) and an M keyed drive (right)
M.2 to PCI-E x4 adapter that can take multiple lengths of drives
The different keys indicate the maximum number of PCI-E lanes the socket can use and physically limit which drives can be installed into the socket… A “B key” can utilize up to two PCI-E lanes while an “M key” can use up to four PCI-E lanes.
Right now, however, the majority of M.2 sockets use an “M key” even if the socket only uses two PCI-E lanes. As for the drives, most PCI-E x2 drives are keyed for B+M (so they can work with any socket) and PCI-E x4 drives are keyed just for M.
This is confusing at first since it is much more complicated than SATA, but all of this information should be listed in the specs of both the drive and motherboard/PCI-E card. For example, the ASUS Z97-A lists the M.2 slot as “M.2 Socket 3, with M Key, type 2260/2280” so it supports drives that are 22mm wide and either 60mm or 80mm long with an M key.
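Put as a tiny sketch (an intentionally simplified model of just the keying rules above, not part of any spec):

```python
# Maximum PCIe lanes implied by the socket key, per the rules above.
KEY_MAX_LANES = {"B": 2, "M": 4}

def drive_fits_socket(drive_keys: set, socket_key: str) -> bool:
    """A drive physically fits if it is notched for the socket's key:
    B+M keyed drives fit both socket types; M-only drives fit only M sockets."""
    return socket_key in drive_keys

print(drive_fits_socket({"B", "M"}, "B"))  # True  (typical PCI-E x2 drive, B-keyed socket)
print(drive_fits_socket({"B", "M"}, "M"))  # True
print(drive_fits_socket({"M"}, "B"))       # False (x4 drive does not fit a B-keyed socket)
```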
M.2 logical interfaces
In addition to the different physical sizes, M.2 drives are further complicated by the fact that different M.2 drives connect to the system through three different kinds of logical interfaces.
Currently, M.2 drives can connect through either the SATA controller or through the PCI-E bus in either x2 or x4 mode. The nice thing is that all M.2 drives (at least at the time of this article) are backwards compatible with SATA, so any M.2 drive should work in an M.2 socket that uses SATA, although they will be limited to SATA speeds.
At the same time, not all M.2 sockets will be compatible with both SATA and PCI-E - some are either one or the other. So if you try to use a PCI-E drive in a SATA-only M.2 slot (or vice-versa) it will not function correctly.
M.2 PCI-E drives should really be used in a socket that supports the same number of PCI-E lanes as the drive for maximum performance, although any M.2 PCI-E drive will work in either a PCI-E x2 or PCI-E x4 socket provided they have the same key.
However, if you install a PCI-E x4 drive into a PCI-E x2 socket it will be limited to PCI-E x2 speeds. At the same time, installing a PCI-E x2 drive into a PCI-E x4 socket will not give you any better performance than installing it into a PCI-E x2 socket.
Basically, what it comes down to is that even if an M.2 drive physically fits into an M.2 socket, you also need to make sure that the socket supports the type of M.2 drive you have. In truth, the only time an M.2 drive shouldn’t work at all even though the keying matches is if you try to use an M.2 SATA drive in an M.2 PCI-E-only socket.
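To tie the logical-interface rules together, here is a rough illustrative check (the names and simplifications are mine, based only on the rules described above):

```python
def m2_compatibility(drive_interface: str, drive_lanes: int,
                     socket_interfaces: set, socket_lanes: int) -> str:
    """Very rough compatibility check based on the rules described above.

    drive_interface:   "SATA" or "PCIe"
    drive_lanes:       PCIe lanes the drive uses (0 for SATA drives)
    socket_interfaces: interfaces the socket supports, e.g. {"SATA", "PCIe"}
    socket_lanes:      PCIe lanes wired to the socket (0 for SATA-only sockets)
    """
    if drive_interface not in socket_interfaces:
        return "will not work (interface mismatch)"
    if drive_interface == "SATA":
        return "works, at SATA speeds"
    # A PCIe drive in a PCIe-capable socket is limited by the slower side.
    return f"works, limited to PCIe x{min(drive_lanes, socket_lanes)}"

# Examples matching the text above:
print(m2_compatibility("PCIe", 4, {"PCIe"}, 2))          # x4 drive in x2 socket -> runs at x2
print(m2_compatibility("SATA", 0, {"PCIe"}, 4))          # SATA drive in PCIe-only socket -> no
print(m2_compatibility("SATA", 0, {"SATA", "PCIe"}, 2))  # works at SATA speeds
```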
Almost every computer has a USB port, making USB the ideal interface for drives that are
used on more than just your own computer. For single HDDs, even the first generation of USB 3.1 (USB 3.0) is fast enough and will not limit your transfer rate.
For SSDs, it’s best to use the second generation of USB 3.1 at 10Gbps but for multiple drives, the transfer rate will be limited to around 700-800 MB/s and that’s with the faster USB 3.1 Gen 2 interface.
With Thunderbolt 3, currently the latest generation of the Thunderbolt interface, you get plenty of bandwidth even for multiple drives and when daisy chaining additional Thunderbolt drives. The bottleneck of the Thunderbolt 3 interface is at around 2750 MB/s but for now, only certain NVMe SSDs can reach these kind of speeds, so in most cases, the transfer rate will not be limited.
With Thunderbolt 2, the bottleneck is at around 1375 MB/s. This kind of bandwidth is ideal for up to 4 SATA-III drives but it’s not fast enough for an NVMe based SSD and even four SATA-III SSDs can be limited by this interface.
The first generation of Thunderbolt is similar to the USB 3.1 Gen 2 interface. The transfer rate will be limited to around 700-800 MB/s, which is ideal for multiple HDDs or 1-2 SSDs but not for more than 2 drives.
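As a rough back-of-the-envelope illustration of those bottlenecks, the sketch below divides the interface limits quoted above across several drives (the interface figures are the approximate ones from this post; the ~550 MB/s per-SSD figure is just an assumed typical SATA SSD speed):

```python
# Approximate interface bottlenecks quoted above, in MB/s.
INTERFACE_LIMIT = {
    "USB 3.1 Gen 2 / Thunderbolt 1": 750,
    "Thunderbolt 2": 1375,
    "Thunderbolt 3": 2750,
}

def per_drive_throughput(interface: str, drives: int, drive_speed: int) -> float:
    """Each drive gets at most its own speed, capped by its share of the interface."""
    return min(drive_speed, INTERFACE_LIMIT[interface] / drives)

# Example: four SATA SSDs (assumed ~550 MB/s each) behind each interface.
for name in INTERFACE_LIMIT:
    print(f"{name}: ~{per_drive_throughput(name, 4, 550):.0f} MB/s per SSD")
```

Only Thunderbolt 3 leaves all four SSDs running at full speed in this simplified model; the other interfaces cap each drive well below its native rate.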
The Serial ATA (SATA) interface was designed primarily for interfacing with hard disk drives (HDDs), doubling its native speed with each major revision: maximum SATA transfer speeds went from 1.5 Gbit/s in SATA 1.0 (standardized in 2003), through 3 Gbit/s in SATA 2.0 (standardized in 2004), to 6 Gbit/s as provided by SATA 3.0 (standardized in 2009).
SATA Express interface supports both PCI Express and SATA storage devices by exposing two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time).
The choice of PCI Express also enables scaling up the performance of SATA Express interface by using multiple lanes and different versions of PCI Express.
In more detail, using two PCI Express 2.0 lanes provides a total bandwidth of 1 GB/s (2 × 5 GT/s raw data rate and 8b/10b encoding, equating to effective 1000 MB/s), while using two PCI Express 3.0 lanes provides close to 2 GB/s (2 × 8 GT/s raw data rate and 128b/130b encoding, equating to effective 1969 MB/s).
In comparison, the 6 Gbit/s raw bandwidth of SATA 3.0 equates effectively to 0.6 GB/s due to the overhead introduced by 8b/10b encoding.
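Those effective figures can be re-derived from the raw line rates and the encoding overhead alone; a minimal sketch, using only the numbers already quoted in this paragraph:

```python
def effective_mb_s(raw_gt_s: float, lanes: int, payload_bits: int, line_bits: int) -> float:
    """Effective throughput in MB/s:
    raw transfer rate (GT/s -> Mb/s) x lanes x encoding efficiency / 8 bits per byte."""
    return raw_gt_s * 1000 * lanes * payload_bits / line_bits / 8

print(effective_mb_s(5, 2, 8, 10))     # 2x PCIe 2.0 lanes, 8b/10b     -> 1000.0 MB/s
print(effective_mb_s(8, 2, 128, 130))  # 2x PCIe 3.0 lanes, 128b/130b  -> ~1969.2 MB/s
print(effective_mb_s(6, 1, 8, 10))     # SATA 3.0 (6 Gbit/s), 8b/10b   -> 600.0 MB/s
```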
There are three options available for the logical device interfaces and command sets used for interfacing with storage devices connected to a SATA Express controller:
Legacy SATA
Used for backward compatibility with legacy SATA devices, and interfaced through the AHCI driver and legacy SATA 3.0 (6 Gbit/s) ports provided by a SATA Express controller.
PCI Express using AHCI
Used for PCI Express SSDs and interfaced through the AHCI driver and provided PCI Express lanes, providing backward compatibility with widespread SATA support in operating systems at the cost of not delivering optimal performance by using AHCI for accessing PCI Express SSDs.
AHCI was developed back at the time when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media; as a result, AHCI has some inherent inefficiencies when applied to SSD devices, which behave much more like DRAM than like spinning media.
PCI Express using NVMe
Used for PCI Express SSDs and interfaced through the NVMe driver and provided PCI Express lanes, as a high-performance and scalable host controller interface designed and optimized especially for interfacing with PCI Express SSDs.
NVMe has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications.
At a high level, primary advantages of NVMe over AHCI relate to NVMe’s ability to exploit parallelism in host hardware and software, based on its design advantages that include data transfers with fewer stages, greater depth of command queues, and more efficient interrupt processing.
A high-level overview of the SATA Express software architecture, which supports both legacy SATA and PCI Express storage devices, with AHCI and NVMe as the logical device interfaces
SATA Express, initially standardized in the SATA 3.2 specification, is a newer computer bus interface that supports either SATA or PCIe storage devices. The host connector is backward compatible with the standard SATA data connector, while also providing multiple PCI Express lanes as a pure PCI Express connection to the storage device.
The physical connector will allow up to two legacy SATA devices to be connected if a SATA Express device is not used. The industry is moving forward with SATA Express now rather than SATA 12Gb/s. SATA Express was born because it was concluded that SATA 12Gb/s would require too many changes, be more costly and have higher power consumption than desirable.
For example, 2 lanes of PCIe 3.0 offer 3.3x the performance of SATA 6Gb/s with only a 4% increase in power (2 PCIe 3.0 lanes with 128b/130b encoding result in 1969 MB/s of bandwidth). 2 lanes of PCIe 3.0 would also give 1.6x higher performance and consume less power than a hypothetical SATA 12Gb/s.
SATA Express is not widely implemented at this time, so I am not going to go into much more detail about it. However, keep in mind that as of now SATA Express SSDs will normally be limited by chipset and implementation constraints in terms of speed when compared to the potential of true PCIe SSDs.
NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus. The acronym NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCI Express (PCIe) add-in cards, M.2 cards, and other forms.
NVM Express, as a logical device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.
By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues, and reduced latency.
(The previous interface protocols were developed for use with far slower hard disk drives (HDD) where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.)
NVMe, or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a new and backward-compatible interface specification for solid-state drives. It is like the SATA modes IDE, AHCI, and RAID, but specifically for PCIe SSDs. It supports either SATA (I believe specifically SATA Express) or PCI Express storage devices.
As you know, most SSDs we use connect via SATA, but that interface was made for mechanical hard drives and lags behind, since an SSD's design is more DRAM-like. AHCI has the benefit of compatibility with legacy software. NVMe is much more efficient than AHCI and cuts out a lot of overhead because of that. NVMe is able to take more advantage of the lower latency and of the parallelism of CPUs, platforms, and applications to improve performance.
Flash at PCIe Speeds
One great feature of PCIe is its direct connection to the CPU. This streamlines the storage device stack, completely eliminating much of the complexity and layers present in SATA protocol stacks. As a result, NVMe delivers 2X the performance of SAS 12 Gb/s, and 4-6X of SATA 6 Gb/s in random workloads. For sequential workloads, NVMe delivers 2X the performance of SAS 12 Gb/s, and 4X of SATA 6 Gb/s.
(Source: “All About M.2 SSDs,” Storage Networking Industry Association [SNIA]. 2014.)
By taking advantage of PCIe, NVMe reduces latency, enables faster access, and delivers higher Input/Output per Second (IOPS) compared with other interfaces designed for mechanical storage devices. NVMe also offers performance across multiple cores for quick access to critical data, scalability for current and future performance and support for standard security protocols.
Continuous operation and RAID configuration are what makes NAS HDDs stand out from desktop HDDs. A NAS HDD is designed to run for weeks on end, while a desktop HDD can only read and write data for hours at a time. A NAS HDD is also built specifically for RAID setup. By combining multiple drives into one single logical unit, RAID configurations provide data redundancy, thus protecting data against drive failures.
They come mainly in the 3.5" form factor; 2.5" stops at 1 TB. See the main vendors:
eMMC storage is mostly found in phones, as well as compact, budget laptops or tablets. The “embedded” part of the name comes from the fact that the storage is usually soldered directly onto the device’s motherboard. eMMC storage consists of NAND flash memory — the same stuff you’ll find in USB thumb drives, SD cards, and solid-state drives (SSD) — which doesn’t require power to retain data.
Despite both containing a type of NAND memory, SSDs and eMMC storage are quite different.
I am considering buying an ARM board to run NextcloudPi. For two years I have run Nextcloud on my website hosting service. Nextcloud works great and is getting more and more powerful. I want to host my Nextcloud at home to get more space and be able to use advanced functions.
So I read the docs, tested it on my PC with Docker, searched for boards, checked board benchmarks, and read forum posts and GitHub issues.
My critical question is:
Raspberry Pi 4 (RPi4) VS other (Odroid XU4 for example)
First thought: don’t take the Raspberry Pi 3B+. This is based on the article “Should I use a Raspberry Pi”. I then understood that the main drawbacks of the Raspberry Pi were no USB 3 support, performance compared to other SBCs, and overheating.
Then I learned about the specifications of the new Raspberry Pi 4. They make it roughly equivalent to the other SBCs proposed. I have seen that it is compatible (article here). And @esbeeb said earlier in the conversation that it seems to be a decent option.
In the end I see only small differences between the Raspberry Pi 4 and the Odroid XU4 or Rock64. For me, the Raspberry Pi’s bigger community could be an advantage: more tutorials/help found on the web, longer-term “support”, and more possible other uses for the board.
So my question:
What are the reasons for non-technical people not to choose the Raspberry Pi 4?
Reasons: super low power consumption with no heat problems even under sustained load, ARMv8 crypto extensions, an eMMC drive, fast RAM, and a sufficient and flexible power input to run multiple external USB drives.
@just: I have joined the Telegram channel, in case I can help.
@Krischan thanks for sharing info on this new board. If I have understood well, it is better but not a game changer compared to the Pi 4 (though it is a huge change compared to the Pi 3B+).
I am thinking of buying a Raspberry Pi 4 because it is one of the good boards to use, and I could more easily repurpose it if I stop this Nextcloud project for any reason. I may be wrong, but I think the Raspberry Pi is easier to put to other uses (as a beginner) or could be sold more easily to other people who may have a use for it.
Actually, in this case the Odroid wins. Don’t get me wrong, the Pi 4 is an excellent device, but… NextcloudPi and tools like Docker are actually directly integrated into the setup menu of all Odroid devices thanks to Armbian! Super awesome and could not be easier to get started.
The C4 is sweet for multiple reasons:
Much better power consumption at only 2A!
Barrel jack or USB power.
4 USB 3.0 ports vs. the Pi 4’s 2.
Same price as Pi4.
No Wi-Fi or Bluetooth on board.
eMMC slot + microSD
This is a beast! Also, the older Odroid HC1 (2.5") or HC2 (3.5") will directly connect SATA drives so you’ll be moving your data on big drives at full gigabit ethernet speeds!
I am currently running my NC on a QNAP NAS with Docker. But since this system consumes a lot of power, I was thinking about changing to a setup that needs less power.
I am using my NC with 5 users, mainly for uploading and sharing pictures through the clients (Windows and Android). I am planning to install further apps like Collabora or OnlyOffice, but there will not be a lot of simultaneous work by different users in these apps.
As a new setup I was thinking about buying an Odroid-C4, since it does not use a lot of energy and still seems to have enough computing power. In addition I would buy a WD Red SA500 2.5" SSD. Is it possible to power this drive only through the standard Odroid power source (12 V, 2 A power supply; with a USB-SATA adapter)? This drive is also available as M.2; would that make more sense, since it is smaller? What would be the problem there? The Odroid-C4 only has USB connectors, so I will need an adapter cable. Is there something special I need to consider if I buy one?
Do you think this is a reasonable setup? Or do you think I need to reconsider something? Or would you recommend some other products that would speed up my setup even more? In my new setup I will use my QNAP NAS for backups and therefore only start it at certain times.
Thank you for your answer. I again had a look into it.
Indeed, the Odroid H2+ looks interesting, but it is more expensive. Do I understand correctly that I also need to buy the RAM separately for the H2+? The second drawback is that it needs more power, but as far as I understand it will be easily possible to speed it up (add further RAM) at a later point.
Do I have other limitations with the Odroid-C4, because it does not have an x86 CPU?
Another option would be a Raspberry Pi 4… Hard decision to make.
Edit: Just saw that the Raspberry Pi is also based on an ARM CPU. Therefore, I don’t see an advantage in the Raspberry Pi and only need to decide between the Odroid-C4 and the Odroid H2+.
Yes, RAM is a separate purchase for the H2+ (regular laptop DDR4 RAM, I think), which brings the price close to some other NUC-like PCs, to be honest.
ARM CPUs are mostly well supported for server software (less so for desktop software and games), but you might have to compile some software yourself. However, since the Raspberry Pi is so popular, Armbian and other ARM-focused GNU/Linux distributions have a wide range of software precompiled for ARM.
For a home server kind of setup, IMHO there is really no reason to choose the Raspberry Pi 4 over the Odroid-C4. But if for some reason you end up using it for other purposes (like a media center) then the RPi 4 has a bit of an advantage, as software support in general is a bit better.