Best cheap hardware to run Nextcloud on?

sorry for being silent for so long, but I had to get some things clarified regarding Home Assistant's hardware requirements.

it worked on a Rock64

have you tried working with Proxmox VE (plus ZFS) or some other alternative to FreeBSD's jails?
Running a Docker container for each process/app you want to isolate seems like taking a sledgehammer to crack a nut.

why do this instead of OMV?

sorry, I need >1 TB for pictures, but that can still be connected via USB, can't it?

and there is quite a variety of RK3399 devices.
Besides these, just out of curiosity, which SBCs play in the same league as the RockPro64?

The H2 looks quite good, a bit more expensive but with the advantage of x86; the XU4/Tinker are not really in the same league.

There is a range of RK3399 boards at $50-$80 that, with PCIe 2.0 x4 and their SoC performance, are hard to beat at that price.

I have 4x of these; really cheap, with 12 V input, so you can drive 3.5" drives off a splitter from the 12 V PSU of the Rock Pi 4 / RockPro64:
https://www.ebay.co.uk/itm/UASP-USB-3-0-TO-SATA-Converter-Adapter-Cable-2-5-3-5-HDD-SSD-SPEEDY-20-Faster/264223704252

There are others too, but USB is a stinker for concurrency and port sharing, and completely bad for RAID.
But with OMV and SnapRAID, as it's called, it is really good for large media storage setups.

For media, probably USB & SnapRAID, but with USB it's a firm no to RAID 5/6/10.
There is a 5-port SATA device by JMicron which might be really interesting for RAID 6, as the RAID 6 mdadm drivers have NEON optimisation.

so you have 4x ODROID-XU4?

why should I run a RockPro64 behind an ODROID-XU4? :thinking:
A Nano could make sense for always-on tasks, couldn't it?

:question::question::question:

you probably mean JMicron's bridge controller series, but which devices use them?

it would give me more RAM (which I have to pay for myself); I'm not sure what an x86 would do better than the H2 or Pro64, maybe graphics, but it is a server.

x86 means you have free choice of images, rather than the often board-specific custom images supplied with U-Boot.

There is a big picture of what I have 4x of in the eBay link above.

Yeah, for media, i.e. big, relatively static files of the media-server ilk, USB & SnapRAID is a decent solution.
USB RAID is just seriously bad and not a good idea; so much so that OMV deliberately doesn't support RAID on USB drives, to save on forum posts.

The JMicron 5-port works with any device that has PCIe x4, like the RK3399, or likely the H2, but I have never tested that.

https://www.sybausa.com/index.php?route=product/product&product_id=1028

I'm really sorry, but I'm getting lost:

  1. What do you mean by "4x" and "picture"? I cannot find any.

  2. What do you mean by "media prob USB"?

  3. And by "Media IE big"?

  4. What do you mean by

Considering that the Raspberry Pi 4 recently gained support in NextCloudPi, I'd say the Raspberry Pi will be a pretty tough act to follow as a cheap, decently reliable SBC for running Nextcloud. Especially when you attach an SSD in an enclosure to the USB 3 port, run the SQL DB there, and store the data dir there as well (the NCP utilities make it easy to move both).

“NextCloudPi gets RPi4 support, a backup UI, moves to NC16.0.3, Buster, PHP7.3 and more”:
https://ownyourbits.com/2019/08/05/nextcloudpi-gets-a-backup-ui-moves-to-nc16-0-3-buster-php7-3-and-more/

PS: I recommend buying a 2.5" SATA-to-USB 3 enclosure which has either a JMicron JMS578 or JMS567 controller chip, if you can find it. Those are known to work really well in Linux.

Yes, this is true; I strongly agree here. RAID was never, ever meant to be used with a lackadaisical USB bus in the position of middleman, which can take its sweet time responding to the OS should, say, a USB mouse (or some other USB peripheral like that) suddenly get plugged in.

RAID should only be used when proper SATA (which is designed for data only, not mice, printers, and suchlike, and is high-speed, low-latency, and highly reliable) is what connects the disks to the PCI bus; or something roughly equivalent to SATA and PCI, both highly mature in the Linux kernel.

It's advisable that home hobbyists leave RAID alone altogether, as it can be more complicated than is really practical. If you're a home user, it's best to start simply, with one large single disk, and maybe work your way up after gaining experience just administering that (and if you want a second disk, add it as a backup of the first).

RAID on an SBC is also asking for trouble, I say. RAID on a plain-Jane AMD64 PC (with a good-quality mobo, like an ASUS, and not a cheap, flaky mobo which might freeze under heavy load) has much better odds of having no filesystem stability issues over the long term.

If you don't want to believe me here, please go scare some sense into yourself by reading the Armbian forums, which have plenty of filesystem-instability horror stories from when kernel support just wasn't mature enough for some SBC, or when cheap controller chips on all-too-cheap SBCs flaked out even where kernel support was stable enough.

I've learned the hard way not to go too cheap when choosing an SBC. The Raspberry Pi is a safe bet for a stable SBC, and that's the only SBC maker I personally trust 100% and would therefore recommend at this time. Spending just that little bit more to get something you know you can trust is well worth it. Don't take risks buying hardware for a server when deep down you know it's a gamble.

If you want something cheap, rack mountable (which will be noisy, due to power supply fans), and used, Noah Chelliah of the Ask Noah show recently recommended the Dell R710 as an easy-to-find, go-to server, great for tinkering with, especially if you want to try ZFS RAID or something. Listen to his recent podcast here for more info on that:

Hi there,
I took a deep breath and dived deeper into the topic (but I still don't understand what I mentioned in my previous post; maybe @Stuart_Naylor could explain it).
I have learned many more things about the architecture and its impact on how easily/reliably/flawlessly I can deploy and enjoy the hardware.
I list my comments/findings below:

  1. PLATFORM

    thanks for mentioning U-Boot | the Universal Boot Loader. It looks like it enjoys solid support, but how much depends on the developers and how well they keep their kernel aligned with mainline.
    There are a lot of efforts in this area, e.g. the efforts around KODI (more details).
    If you want to use an attractive alternative OS, such as HypriotOS or Armbian, but your chip isn't supported, you may have to build your own, as explained e.g. in How To compile a custom Linux kernel for your ARM device, but you should take into account

    This is backed by another comment in an ODROID-H2 review.
    Using x86 allows the initial deployment of the OS and applications without hassle, but you have to pay for it; so it's :timer_clock: vs. :money_with_wings: :slight_smile:.

  2. RAID 5, 6 or 10
    Although it is said that

    but it saved me once, as one disk in a RAID 1 failed recently.
    Comparing the RAID configurations, I have to say RAID 5 is enough for me (I'm aware of the performance and the risks), as the risk of two disks failing at exactly the same time is really low in a home-user environment (see the rebuild-risk sketch after this list). Secondly, important data is backed up.

  3. STORAGE OPTIONS
    Before getting lost in superior transfer rates, you have to take your network configuration into account. Most common is Gigabit Ethernet, which theoretically allows 1 Gb/s = 125 MB/s, but the reality is different (see the throughput sketch after this list), as well illustrated in First Test: How Fast Is Gigabit Supposed To Be, Anyway? - Gigabit Ethernet: Dude, Where's My Bandwidth? and, much more technically, in What is the actual maximum throughput on Gigabit Ethernet? - Gigabit Wireless.

    So what options do we have nowadays? There are plenty of options available, but probably not all of them are supported by your device.
    A good start is to read Hard Drive Device & Connector Speeds | PC Bottlenecks, which gives you a clue about what is possible per the specification of an interface or device.
    Unfortunately, it misses embedded memory such as eMMC and UFS. In the case of eMMC, it is said to be around the same speed as a SATA HDD or USB SSD. Some reference figures for eMMC v5.1 are given in eMMC 5.1 supported - which cards - ODROID. Moreover, earlier this year 5.1A was announced.
    There are some figures about UFS and eMMC on Samsung's eUFS 3.0 storage is twice as fast and will scale up to 1TB - TechSpot Forums that put the theoretical speeds of embedded and wired storage in relation to each other.

    Before you pin yourself down on one or more storage options, you need to think about their intended scope, as there is not really a standard home-user use case for those super-fast standards; read
    NVMe NAS Cache: Higher Speed or More Capacity? and Seagate Ironwolf NAS SSD vs Samsung EVO and Pro - NAS Compares for more information.
    Reading

    would encourage using eMMC, as it outperforms USB based on the figures published in
    Hard Drive Device & Connector Speeds | PC Bottlenecks and eMMC 5.1 supported - which cards - ODROID, although these give no information about the achievable IOPS - Wikipedia, where a USB SSD is still expected to be superior.

    THE FOLLOWING PARAGRAPH NEEDS ADVICE!
    So the only scenario I can think of is an HDD RAID with an SSD cache. In the case of the H2, you may put either the SSD or the HDDs on the PCIe lanes; otherwise you might be bottlenecked by the chipset doing all the read and write operations, see the corresponding block diagram.
    Maybe start with eMMC, as it is cheap, and extend with some budget SSD as suggested further above.

  4. OS

    another summary supporting your statement: Group test: NAS distros :slightly_smiling_face:

  5. SBCs
    Probably anyone familiar with SBCs would agree that

    it is, if not the safest choice, but it lacks performance compared to other SBCs, as mentioned earlier

    ODROID-H2: x86-Bastelcomputer mit 2×Gigabit-Ethernet und HDMI 2.0 | heise online (German to English by Google) adds another two brands with dual Ethernet.
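To put some numbers on the RAID 5 point in item 2, here is a minimal back-of-the-envelope sketch of the rebuild risk. The disk size, the three surviving disks, and the 1e-14 unrecoverable-read-error (URE) rate are my own assumptions (1e-14 per bit is a typical desktop-HDD datasheet value), not figures from this thread:

```python
# Back-of-the-envelope risk of a RAID 5 rebuild failing, to put "two
# disks failing at the same time" into perspective: the second "failure"
# is usually a URE hit while rebuilding after the first disk died.

DISK_SIZE_TB = 4        # assumed capacity of each surviving disk
SURVIVORS = 3           # disks that must be read in full to rebuild
URE_PER_BIT = 1e-14     # assumed datasheet unrecoverable read error rate

bits_to_read = SURVIVORS * DISK_SIZE_TB * 1e12 * 8
p_clean = (1 - URE_PER_BIT) ** bits_to_read

print(f"bits read during rebuild: {bits_to_read:.2e}")
print(f"chance the rebuild finishes without a URE: {p_clean:.0%}")
```

With those (pessimistic, desktop-drive) assumptions, the rebuild only finishes cleanly about 38% of the time; with NAS drives rated at 1e-15 it is about 91%. Either way it supports the point above that a backup of important data matters more than the extra parity.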
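And to make the "125 MB/s in theory" from item 3 concrete, a quick sketch of where Gigabit Ethernet bandwidth goes before any disk is involved. It assumes plain TCP/IPv4 over standard 1500-byte frames; SMB/NFS overhead would cost a few more percent on top:

```python
# Rough estimate of the real-world payload rate of Gigabit Ethernet,
# illustrating why you never see the theoretical 125 MB/s.

LINK_RATE_BPS = 1_000_000_000    # 1 Gb/s line rate

MTU = 1500                       # standard Ethernet payload per frame
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP, no options

wire_bytes_per_frame = MTU + ETH_OVERHEAD
payload_per_frame = MTU - IP_TCP_HEADERS

efficiency = payload_per_frame / wire_bytes_per_frame
payload_mb_s = LINK_RATE_BPS / 8 * efficiency / 1e6

print(f"protocol efficiency:   {efficiency:.1%}")
print(f"best-case TCP payload: {payload_mb_s:.0f} MB/s")
```

That lands just under 119 MB/s best case, matching the ballpark in the linked articles, so even a single modern HDD can already saturate the link.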

For those who face the same challenge of coping with all these options but get lost in all the interface standards and their multiple versions, as I did at the beginning of my dive, I list below the sources that helped me in this regard:

  1. CONNECTORS
    1. SATA
      It is the standard for HDDs and first-generation SSDs

      from SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community

      SATA is the current computer bus interface for connecting a hard drive or SSD, or optical drive to the rest of the computer. SATA replaced the older PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Since its introduction there have been three main revisions doubling bandwidth from the previous and allowing for extra advanced features while maintaining the same physical connector.

      • DATA CONNECTORS

      from Serial ATA - Wikipedia | SATA revision 2.0

      SATA 2 connectors on a computer motherboard, all but two with cables plugged in. Note that there is no visible difference, other than the labeling, between SATA 1, SATA 2, and SATA 3 cables and connectors.

      from Serial ATA - Wikipedia | Data Connector

      A seven-pin SATA data cable (left-angled version of the connector)
      SATA connector on a 3.5-inch hard drive, with data pins on the left and power pins on the right.

      • POWER CONNECTORS

      from Serial ATA - Wikipedia | Standard connector

      A fifteen-pin SATA power connector (this particular connector is missing the orange 3.3 V wire)

      from Serial ATA - Wikipedia | Slimline connector

      The slimline signal connector is identical and compatible with the standard version, while the power connector is reduced to six pins so it supplies only +5 V, and not +12 V or +3.3 V.

      from Serial ATA - Wikipedia | Micro connector

      The micro SATA connector (sometimes called uSATA or µSATA) originated with SATA 2.6, and is intended for 1.8-inch (46 mm) hard disk drives.

    2. eSATA

      from Serial ATA - Wikipedia | eSata

      Standardized in 2004, eSATA ( e standing for external) provides a variant of SATA meant for external connectivity. It uses a more robust connector, longer shielded cables, and stricter (but backward-compatible) electrical standards. The protocol and logical signaling (link/transport layers and above) are identical to internal SATA.

      SATA (left) and eSATA (right) connectors

    3. Mini-SATA (mSATA)

      from Serial ATA - Wikipedia | mSATA

      The physical dimensions of the mSATA connector are identical to those of the PCI Express Mini Card interface, but the interfaces are electrically not compatible; the data signals (TX±/RX± SATA, PETn0 PETp0 PERn0 PERp0 PCI Express) need a connection to the SATA host controller instead of the PCI Express host controller.

      An mSATA SSD on top of a 2.5-inch SATA drive

    4. PCI SLOTS
      PCI cards are any kind of hardware extending the functionality of a PC; they are plugged into a conventional PCI slot.

      from Expansion card - Wikipedia

      In computing, the expansion card, expansion board, adapter card or accessory card is a printed circuit board that can be inserted into an electrical connector, or expansion slot, on a computer motherboard, backplane or riser card to add functionality to a computer system via the expansion bus.

      from Conventional PCI - Wikipedia

      Conventional PCI, often shortened to PCI, is a local computer bus for attaching hardware devices in a computer. PCI is an abbreviation for Peripheral Component Interconnect and is part of the PCI Local Bus standard.

      from Expansion card - Wikipedia | IBM PC and descendants

      Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993. The PCI bus was introduced in 1991 as a replacement for ISA. The standard (now at version 3.0) is found on PC motherboards to this day. The PCI standard supports bus bridging: as many as ten daisy chained PCI buses have been tested. Cardbus, using the PCMCIA connector, is a PCI format that attaches peripherals to the Host PCI Bus via PCI to PCI Bridge. Cardbus is being supplanted by ExpressCard format.

      from Conventional PCI - Wikipedia

      Three 5-volt 32-bit PCI expansion slots on a motherboard (PC bracket on left side)

    5. PCI Express (PCIe) SLOTS

      from PCI Express - Wikipedia

      PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers’ graphics cards, hard drives, SSDs, Wi-Fi and Ethernet hardware connections.

      from SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community

      Peripheral Component Interconnect Express, or PCIe, is a physical interconnect for motherboard expansion. Normally this is the connector slot you plug your graphics card, network card, sound card, or for storage purposes, a RAID card into. PCIe was designed to replace the older PCI, PCI-X, and AGP bus standards and to allow for more flexibility for expansion.
      Improvements include higher maximum bandwidth, lower I/O pin count and smaller physical footprint, better performance-scaling, more detailed error detection and reporting, and hot-plugging.
      The physical connector on the motherboard typically allows for up to 16 lanes for data transfer. A PCIe device that is an x4 device can fit into a PCIe x4 slot up to an x16 slot and still function. PCIe 1.0 allowed for 250MB/s per lane, PCIe 2.0 allows for 500MB/s per lane and the newest PCIe 3.0 allows for 1GB/s per lane.

      However, in real world throughput PCIe 2.0 allows for around 400MB/s due to its 8b/10b encoding scheme, while PCIe 3.0 allows for 985MB/s due to its improved 128b/130b encoding scheme.
      With that, multiplying lane speed by the number of lanes gives us a theoretical maximum speed for that slot. Cards are generally backward compatible, and PCIe is full-duplex (data goes both ways at one time, unlike SATA).

      A more detailed table of bandwidths is given in PCI Express - Wikipedia | History and revisions.

      Losses of bandwidth due to the line code are detailed on PCIe - PCI Express (1.1 / 2.0 / 3.0 / 4.0 / 5.0) (German to English by Google); see the encoding-overhead sketch at the end of this section.

      from Conventional PCI - Wikipedia | History

      A motherboard with two 32-bit PCI slots and two sizes of PCI Express slots

    6. M.2
      M.2 is just a form factor; it doesn't by itself tell you which interface is used, although it is designed for the newest interfaces (see the size-code and keying sketch at the end of this section).

      from M.2 - Wikipedia

      M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for internally mounted computer expansion cards and associated connectors. It replaces the mSATA standard, which uses the PCI Express Mini Card physical card layout and connectors.
      M.2 can run an SSD over SATA (different than mSATA) or PCIe. The difference between M.2 SATA and M.2 PCIe can be discerned by their key notches…
      M.2’s more flexible physical specification allows different module widths and lengths, and, paired with the availability of more advanced interfacing features, makes the M.2 more suitable than mSATA for solid-state storage applications in general and particularly for the use in small devices such as ultrabooks or tablets.

      from M.2 - Wikipedia | Features

      Buses exposed through the M.2 connector are PCI Express 3.0, Serial ATA (SATA) 3.0 and USB 3.0, which is backward compatible with USB 2.0. As a result, M.2 modules can integrate multiple functions, including the following device classes: Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, WiGig, wireless WAN (WWAN), and solid-state drives (SSDs).[6]The SATA revision 3.2 specification, in its gold revision as of August 2013, standardizes the M.2 as a new format for storage devices and specifies its hardware layout.[1]:12[7]

      The M.2 specification provides up to four PCI Express lanes and one logical SATA 3.0 (6 Gbit/s) port, and exposes them through the same connector so both PCI Express and SATA storage devices may exist in the form of M.2 modules. Exposed PCI Express lanes provide a pure PCI Express connection between the host and storage device, with no additional layers of bus abstraction.[8] PCI-SIG M.2 specification, in its revision 1.0 as of December 2013, provides detailed M.2 specifications

      from SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community

      The M.2 standard is an improved revision of the mSATA connector design. It allows for more flexibility in the manufacturing of not only SSDs, Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, Wireless Gigabit Alliance (WiGig), and wireless WAN (WWAN).
      On the consumer end, SSDs especially benefit due to the ability to have double the storage capacity than that of an equivalent mSATA device. Furthermore, having a smaller and more flexible physical specification, together with more advanced features, the M.2 is more suitable for solid-state storage applications in general. The form factor supports one SATA port at up to 6Gb/s or 4 PCIe 3.0 lanes at 4GB/s.

      from Overview of M.2 SSDs | Introduction

      However, not everything is quite so rosy with M.2. Unlike SATA drives - where every drive is the same physical size and uses the same cables - M.2 allows for a variety of physical dimensions, connectors, and even multiple logical interfaces. To help our customers understand the nuances of M.2 drives we decided to publish this overview of the current M.2 specifications.

      Physical size and connectors
      Unlike SATA drives, M.2 allows for a variety of physical sizes. Right now, all M.2 drives that are intended for use in PCs are 22mm wide, but they come in a variety of lengths. To make it easier to tell which drives can be mounted on a motherboard or PCI-E card, the width and height of both the drive and slot is usually expressed in a single number that combines the two dimensions. For example, a drive that is 22mm wide and 80mm long would be listed as being a 2280 (22mm x 80mm). Common lengths for M.2 drives and mounting right now are 30mm (2230), 42mm (2242), 60mm (2260), 80mm (2280), and 110mm (22110).

      In addition, there are two types of sockets for M.2: one with a “B key” and one with a “M key”.
      M.2 socket types
      B+M keyed drive (left) and a M keyed drive (right)

      M.2 to PCI-E x4 adapter that can take multiple lengths of drives
      The different keys are what indicate the maximum number of PCI-E lanes the socket can use and physically limit what drives can be installed into the socket… A “B key” can utilize up to two PCI-E lanes while a “M key” can use up to four PCI-E lanes.
      Right now, however, the majority of M.2 sockets use a “M key” even if the socket only uses two PCI-E lanes. As for the drives, most PCI-E x2 drives are keyed for B+M (so they can work with any socket) and PCI-E x4 drives are keyed just for M.

      This is confusing at first since it is much more complicated than SATA, but all of this information should be listed in the specs of both the drive and motherboard/PCI-E card. For example, the ASUS Z97-A lists the M.2 slot as “M.2 Socket 3, with M Key, type 2260/2280” so it supports drives that are 22mm wide and either 60mm or 80mm long with a M key.

      M.2 logical interfaces
      In addition to the different physical sizes, M.2 drives are further complicated by the fact that different M.2 drives connect to the system through three different kinds of logical interfaces.
      Currently, M.2 drives can connect through either the SATA controller or through the PCI-E bus in either x2 or x4 mode. The nice thing is that all M.2 drives (at least at the time of this article) are all backwards compatible with SATA so any M.2 drive should work in a M.2 socket that uses SATA - although they will be limited to SATA speeds.
      At the same time, not all M.2 sockets will be compatible with both SATA and PCI-E - some are either one or the other. So if you try to use a PCI-E drive in a SATA-only M.2 slot (or vice-versa) it will not function correctly.

      M.2 PCI-E drives should really be used in a socket that supports the same number of PCI-E lanes as the drive for maximum performance, although any M.2 PCI-E drive will work in either a PCI-E x2 or PCI-E x4 socket provided they have the same key.
      However, if you install a PCI-E x4 drive into a PCI-E x2 socket it will be limited to PCI-E x2 speeds. At the same time, installing a PCI-E x2 drive into a PCI-E x4 socket will not give you any better performance than installing it into a PCI-E x2 socket.

      Basically, what it comes down to is that even if an M.2 drive physically fits into an M.2 socket, you also need to make sure that the M.2 socket supports the type of M.2 drive you have. In truth, the only time an M.2 drive shouldn't work at all even though the keying matches is if you try to use an M.2 SATA drive in an M.2 PCI-E only socket.

    7. USB

      from How to achieve the best transfer speeds with external drives| AKiTiO

      Almost every computer has a USB port, making USB the ideal interface for drives that are used on more than just your own computer. For single HDDs, even the first generation of USB 3.1 (USB 3.0) is fast enough and will not limit your transfer rate.
      For SSDs, it's best to use the second generation of USB 3.1 at 10 Gbps, but for multiple drives the transfer rate will be limited to around 700-800 MB/s, and that's with the faster USB 3.1 Gen 2 interface.

    8. THUNDERBOLT

      from How to achieve the best transfer speeds with external drives| AKiTiO

      With Thunderbolt 3, currently the latest generation of the Thunderbolt interface, you get plenty of bandwidth even for multiple drives and when daisy chaining additional Thunderbolt drives. The bottleneck of the Thunderbolt 3 interface is at around 2750 MB/s but for now, only certain NVMe SSDs can reach these kind of speeds, so in most cases, the transfer rate will not be limited.

      With Thunderbolt 2, the bottleneck is at around 1375 MB/s. This kind of bandwidth is ideal for up to 4 SATA-III drives but it’s not fast enough for an NVMe based SSD and even four SATA-III SSDs can be limited by this interface.

      The first generation of Thunderbolt is similar to the USB 3.1 Gen 2 interface. The transfer rate will be limited to around 700-800 MB/s, which is ideal for multiple HDDs or 1-2 SSDs but not for more than 2 drives.

  1. INTERFACE (PROTOCOLS)
    1. SATA

      SATA Express - Wikipedia | History:

      The Serial ATA (SATA) interface was designed primarily for interfacing with hard disk drives (HDDs), doubling its native speed with each major revision: maximum SATA transfer speeds went from 1.5 Gbit/s in SATA 1.0 (standardized in 2003), through 3 Gbit/s in SATA 2.0 (standardized in 2004), to 6 Gbit/s as provided by SATA 3.0 (standardized in 2009).

    2. SATA Express (SATAe)

      SATA Express - Wikipedia | Features:

      SATA Express interface supports both PCI Express and SATA storage devices by exposing two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time).
      …
      The choice of PCI Express also enables scaling up the performance of SATA Express interface by using multiple lanes and different versions of PCI Express.
      In more detail, using two PCI Express 2.0 lanes provides a total bandwidth of 1 GB/s (2 × 5 GT/s raw data rate and 8b/10b encoding, equating to effective 1000 MB/s), while using two PCI Express 3.0 lanes provides close to 2 GB/s (2 × 8 GT/s raw data rate and 128b/130b encoding, equating to effective 1969 MB/s).[3][7]
      In comparison, the 6 Gbit/s raw bandwidth of SATA 3.0 equates effectively to 0.6 GB/s due to the overhead introduced by 8b/10b encoding.

      There are three options available for the logical device interfaces and command sets used for interfacing with storage devices connected to a SATA Express controller:[6][8]

      • Legacy SATA
        Used for backward compatibility with legacy SATA devices, and interfaced through the AHCI driver and legacy SATA 3.0 (6 Gbit/s) ports provided by a SATA Express controller.

      • PCI Express using AHCI
        Used for PCI Express SSDs and interfaced through the AHCI driver and provided PCI Express lanes, providing backward compatibility with widespread SATA support in operating systems at the cost of not delivering optimal performance by using AHCI for accessing PCI Express SSDs.
        AHCI was developed back at the time when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media; as a result, AHCI has some inherent inefficiencies when applied to SSD devices, which behave much more like DRAM than like spinning media.

      • PCI Express using NVMe
        Used for PCI Express SSDs and interfaced through the NVMe driver and provided PCI Express lanes, as a high-performance and scalable host controller interface designed and optimized especially for interfacing with PCI Express SSDs.
        NVMe has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications.
        At a high level, primary advantages of NVMe over AHCI relate to NVMe’s ability to exploit parallelism in host hardware and software, based on its design advantages that include data transfers with fewer stages, greater depth of command queues, and more efficient interrupt processing.

      A high-level overview of the SATA Express software architecture, which supports both legacy SATA and PCI Express storage devices, with AHCI and NVMe as the logical device interfaces

      SATA Express host-side connector, formally known as the “host plug”, accepts both SATA Express and legacy standard SATA data cables

      from SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community

      SATA Express, initially standardized in the SATA 3.2 specification, is a newer computer bus interface that supports either SATA or PCIe storage devices. The host connector is backward compatible with the standard 3.5-inch SATA data connector, while also providing multiple PCI Express lanes as a pure PCI Express connection to the storage device.
      The physical connector will allow up to two legacy SATA devices to be connected if a SATA Express device is not used. The industry is moving forward with SATA Express now rather than SATA 12Gb/s. SATA Express was born because it was concluded that SATA 12Gb/s would require too many changes, be more costly and have higher power consumption than desirable.
      For example, 2 lanes of PCIe 3.0 offers 3.3x the performance of SATA 6Gb/s with only 4% increase in power. (2 × PCIe 3.0 lanes with 128b/130b encoding, results in 1969 MB/s bandwidth) 2 lanes of PCIe 3.0 would be 1.6x higher performance and would consume less power than a hypothetical SATA 12Gb/s.

      SATA Express is not widely implemented at this time so I am not going to go into much more detail about it. However, keep in mind as of now SATA express SSDs will normally be limited to the chipset and implementation limitations in terms of speed when compared to the potential of true PCIe SSDs.

    3. NVMe

      NVM Express (NVMe) Wikipedia:

      NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus. The acronym NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCI Express (PCIe) add-in cards, M.2 cards, and other forms.
      NVM Express, as a logical device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.[1]

      By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues, and reduced latency.
      (The previous interface protocols were developed for use with far slower hard disk drives (HDD) where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.)

      from SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community

      NVMe or Non-Volatile Memory Host Controller Interface Specification (NVMHCI) is a new and backward-compatible interface specification for solid state drives. It is like the SATA modes IDE, AHCI, and RAID, but specifically for PCIe SSDs. It supports either SATA (I believe specifically SATA Express) or PCI Express storage devices.
      As you know, most SSDs we use connect via SATA, but that interface was made for mechanical hard drives and lags behind, due to SSDs' design being more DRAM-like. AHCI has the benefit of compatibility with legacy software. NVMe is much more efficient than AHCI and cuts out a lot of overhead because of it. NVMe is better able to take advantage of the low latency and parallelism of CPUs, platforms and applications to improve performance.

  1. DRIVES
    1. HDD
      Well known; for a refresher, read the somewhat older blog entry SSD Vs HDD - Performance comparison of Solid State Drives and Hard Drives and SSD vs. HDD Speed | Enterprise Storage Forum.

    2. SSD
      Well known; for a refresher see above, but keep in mind that there are different quality levels of SSDs, see What disk types are available in Azure? | Microsoft Docs.

    3. NVMe-SSD

      from NVMe vs. SATA: It’s Time for NAND Flash in the Fast Lane

      Flash at PCIe Speeds
      One great feature of PCIe is its direct connection to the CPU. This streamlines the storage device stack, completely eliminating much of the complexity and layers present in SATA protocol stacks. As a result, NVMe delivers 2X the performance of SAS 12 Gb/s, and 4-6X of SATA 6 Gb/s in random workloads. For sequential workloads, NVMe delivers 2X the performance of SAS 12 Gb/s, and 4X of SATA 6 Gb/s.
      (Source: “All About M.2 SSDs,” Storage Networking Industry Association [SNIA]. 2014.)

      By taking advantage of PCIe, NVMe reduces latency, enables faster access, and delivers higher Input/Output per Second (IOPS) compared with other interfaces designed for mechanical storage devices. NVMe also offers performance across multiple cores for quick access to critical data, scalability for current and future performance and support for standard security protocols.

    4. NAS DRIVES

      from Why choose NAS drives over desktop drives for your NAS? | synology blog

      Continuous operation and RAID configuration are what makes NAS HDDs stand out from desktop HDDs. A NAS HDD is designed to run for weeks on end, while a desktop HDD can only read and write data for hours at a time. A NAS HDD is also built specifically for RAID setup. By combining multiple drives into one single logical unit, RAID configurations provide data redundancy, thus protecting data against drive failures.

      They come mainly in the 3.5" form factor; 2.5" stops at 1 TB, see the main vendors

      Although Seagate introduced a NAS SSD earlier this year, they aren't cheap:
      IronWolf 110 Network Attached Storage (NAS) SSD | Seagate

      The figures on the datasheets are quite reliable. I found a source with some reasonably detailed results: Vergleich: Die beste Festplatte fürs NAS von 4 bis 12 TByte | TechStage (German to English by Google).

    5. FLASH MEMORY, eMMC and UFS

      from eMMC vs. SSD storage: What’s the difference? | Windows Central

      eMMC storage is mostly found in phones, as well as compact, budget laptops or tablets. The “embedded” part of the name comes from the fact that the storage is usually soldered directly onto the device’s motherboard. eMMC storage consists of NAND flash memory — the same stuff you’ll find in USB thumb drives, SD cards, and solid-state drives (SSD) — which doesn’t require power to retain data.

      Despite both containing a type of NAND memory, SSDs and eMMC storage are quite different.

      There are vendors like Hardkernel offering swappable flash memory cards as pluggable modules, which can be read by an SD card reader.

    6. COMPARE SPEEDS
      First of all, you need to think about what you actually need; consider

      and read

      A good start is to read Hard Drive Device & Connector Speeds | PC Bottlenecks, which gives you a clue about what is possible per the common specification of an interface or device.
      In addition, compare these results with Odroid H2 benchmarks.

      Although the post SSD Interface Comparison: PCI Express vs SATA - Overclock.net - An Overclocking Community is not the youngest, the use-case descriptions are still applicable.

      If you are interested in eMMC, you may like to read eMMC vs. SSD storage: What’s the difference? | Windows Central

      Last but not least, if you are interested in theoretical figures, read the ATP blog

      BUT keep in mind that when IOPS - Wikipedia are mentioned, they are not free of critiques, see
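To close this section out, a minimal sketch of how IOPS relate to MB/s, since the two metrics measure different things: throughput = IOPS x block size. The drive figures in the comments are rough ballpark assumptions of mine, not measurements; they show why a USB SSD can beat eMMC on random I/O even when the sequential MB/s look similar:

```python
# IOPS vs. throughput: the same drive can look great in one metric
# and poor in the other, depending on the access pattern.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_size_kb / 1024

# A spinning HDD doing 4 KiB random reads at an assumed ~100 IOPS:
print(f"HDD, 4K random:     {throughput_mb_s(100, 4):6.1f} MB/s")
# A SATA SSD doing 4 KiB random reads at an assumed ~50,000 IOPS:
print(f"SSD, 4K random:     {throughput_mb_s(50_000, 4):6.1f} MB/s")
# The same HDD streaming 1 MiB sequential blocks at ~150 IOPS:
print(f"HDD, 1M sequential: {throughput_mb_s(150, 1024):6.1f} MB/s")
```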


Looks like the perfect full NAS solution:


Hello,

I am considering buying an ARM board to run NextcloudPi. For two years I have run Nextcloud on my website hosting service. Nextcloud works great and is getting more and more powerful. I want to host my Nextcloud at home to get more space and be able to use advanced functions.
So I read the docs, tested it on my PC with Docker, searched for boards, checked board benchmarks, and read forum posts and GitHub issues.

My critical question is:
Raspberry Pi 4 (RPi4) vs. others (ODROID-XU4, for example)

  • First thought: don't take the Raspberry Pi 3B+. This is based on the article Should I use a Raspberry Pi?. From it I understood the main drawbacks of the Raspberry Pi were no USB 3 support, performance compared to other SBCs, and overheating.
  • Then I learned about the specifications of the new Raspberry Pi 4, which make it roughly equivalent to the other SBCs proposed. I have seen that it is compatible (article here). And @esbeeb said earlier in the conversation that it seems to be a decent option.

In the end I see only small differences between the Raspberry Pi 4 and the ODROID-XU4 or Rock64. For me, the Raspberry Pi's bigger community could be an advantage: more tutorials/help to be found on the web, longer-term "support", and other possible uses of the board.
So my question :

  • What are the reasons for non-technical people not to choose the Raspberry Pi 4?

I would then be happy to help update the doc Should I use a Raspberry Pi?.

Hi, I wrote that article; you are right to say it needs an update. Glad to see it has helped someone. Work on this is handled through Telegram.

With all the extras required (especially cooling), the RPi4 actually ends up quite expensive.

If I were to set up a simple home Nextcloud on an SBC right now, I would probably use the just-released ODROID-C4:
https://www.hardkernel.com/shop/odroid-c4/

Reasons: super low power consumption with no heat problems even under sustained load, ARMv8 crypto extensions, eMMC storage, fast RAM, and a sufficient and flexible power input to run multiple external USB drives.


@just: I have joined the Telegram channel, in case I can help.

@Krischan thanks for sharing info on this new board. If I have understood well, it is better but not a game changer compared to the Pi 4 (though it is a huge change compared to the Pi 3B+).

I am thinking of buying a Raspberry Pi 4 because it is one of the good boards to use, and I could repurpose it more easily if I stop this Nextcloud project for any reason. I may be wrong, but I think the Raspberry Pi lends itself more easily to other uses (for a beginner), or could be sold more easily to other people who may have a use for it.

Actually, in this case the Odroid wins. Don’t get me wrong, the Pi 4 is an excellent device, but… NextcloudPi and tools like Docker are actually directly integrated into the setup menu of all Odroid devices thanks to Armbian! Super awesome and could not be easier to get started.

The C4 is sweet for multiple reasons:

Much better power consumption at only 2A!

Barrel jack or USB power.

Four USB 3.0 ports vs. the Pi's two.

Same price as Pi4.

No Wi-Fi or Bluetooth on board.
eMMC slot + microSD.

This is a beast! Also, the older ODROID HC1 (2.5") or HC2 (3.5") will directly connect SATA drives, so you'll be moving your data on big drives at full Gigabit Ethernet speed!

Hi guys,
I am currently running my NC on a QNAP NAS with Docker. But since this system consumes a lot of power, I was thinking about changing to a less power-hungry setup.
I am using my NC with 5 users, mainly for uploading and sharing pictures through the clients (Windows and Android). I am planning to install further apps like Collabora or OnlyOffice, but there will not be a lot of simultaneous work by different users in these apps.

As a new setup I was thinking about buying an ODROID-C4, since it does not use a lot of energy and still seems to have enough computing power. In addition I would buy a WD Red SA500 2.5" SSD. Is it possible to power this drive solely through the standard Odroid power source (12 V, 2 A power supply; with a USB-SATA adapter)? This drive is also available as M.2; does that make more sense, since it is smaller, or what is the catch there? The ODROID-C4 only has USB connectors, so I will need an adapter cable. Is there something special I need to consider when buying one?
Do you think this is a reasonable setup? Or do you think I need to reconsider something? Or would you recommend some other products which would speed up my setup even more? In my new setup I will use my QNAP NAS for the backup and therefore only start it at certain times.

Thanks for your comments / thoughts.

Yes, that sounds more or less feasible (see the rough power-budget sketch below). Buy the USB-to-SATA adapter directly from Hardkernel (the Odroid makers) to be sure it works with the ODROID-C4.

The only problem is that OnlyOffice only works on x86 CPUs (for now), and Collabora might be a bit slow on an ODROID-C4 (not sure, you would need to test that).
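On the power question, here is a rough back-of-the-envelope sketch. The per-component wattages are my assumptions (check the Hardkernel wiki and the SA500 datasheet for real figures), but they suggest a single 2.5" SSD is comfortably within the 12 V / 2 A budget:

```python
# Rough power budget for an ODROID-C4 + 2.5" SSD over USB.
# Board and SSD draws below are assumed ballpark values, not specs
# from this thread; verify against the actual datasheets.

PSU_W = 12 * 2.0      # 12 V x 2 A supply = 24 W total
BOARD_MAX_W = 6.0     # assumed C4 peak draw under load
SSD_MAX_W = 4.0       # assumed 2.5" SATA SSD peak draw (via USB 5 V)

headroom = PSU_W - BOARD_MAX_W - SSD_MAX_W
print(f"supply: {PSU_W:.0f} W, worst-case load: {BOARD_MAX_W + SSD_MAX_W:.0f} W")
print(f"headroom: {headroom:.0f} W")
# Caveat: each USB port also has its own 5 V current limit, so a
# drive that fits the total budget can still brown out a single port.
```

That per-port 5 V limit is why spinning 3.5" drives need their own power supply, while a 2.5" SSD is usually fine.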

Edit: You could also get the new ODROID-H2+; more expensive, but x86, with 2.5 Gbit Ethernet and an M.2 NVMe connector for an SSD:
https://www.hardkernel.com/shop/odroid-h2plus/

Thank you for your answer. I had another look into it.
Indeed, the ODROID-H2+ looks interesting, but it is more expensive. Do I understand correctly that I also need to buy the RAM separately for the H2+? The second drawback is that it needs more power, but as far as I understand it will be easy to upgrade (add further RAM) at a later point.
Do I have other limitations with the ODROID-C4 because it does not have an x86 CPU?
Another option would be a Raspberry Pi 4… Hard decision to make :wink:

Edit: Just saw that the Raspberry Pi is also based on an ARM CPU. Therefore, I don't see an advantage in the Raspberry Pi and only need to decide between the ODROID-C4 and the ODROID-H2+.

Yes, RAM is a separate purchase for the H2+ (regular laptop DDR4 RAM, I think), which brings the price close to some other NUC-like PCs, to be honest.

ARM CPUs are mostly well supported for server software (less so for desktop software and games), but you might have to compile some software yourself. However, since the Raspberry Pi is so popular, Armbian and other ARM-focused GNU/Linux distributions have a wide range of software precompiled for ARM.

For a home-server kind of setup, IMHO there is really no reason to choose the Raspberry Pi 4 over the ODROID-C4. But if for some reason you end up using it for other purposes (like a media center), then the RPi4 has a bit of an advantage, as software support in general is a bit better.