Not-that-fast HDDs and a 250 Mbps internet connection - I/O waits

Hello.

I just set up my first home server. Before this I had set up only one small business server, so I'm new to server things and Linux in general. I have set up Apache2 for home pages and Nextcloud for cloud storage, and I'm planning a mail server too. I installed Nextcloud manually; so far so good, the service works. As the server will be used by around 10 people with various service combinations, I bumped the internet connection up to 250 Mbps (I had "only" 100 Mbps before) so friends/clients can have a good experience.

The server hardware is my old computer from 2010: a Phenom II X2 555 unlocked to a B55 with 4 cores and overclocked to 3.6 GHz. The motherboard is a GA-MA770T-UD3P with 8 GB of 1600 MHz RAM. The hard drives I have are: a TOSHIBA MQ01ABD075 (laptop HDD, 750 GB, 8 MB cache, 5400 rpm), plus an ST500DM002-1BD142 (500 GB, 16 MB cache, 7200 rpm) and a WD5000AZRX-00A3KB0 (500 GB, 64 MB cache, IntelliPower 5400 rpm) in RAID1. The motherboard has only 3 Gb/s SATA ports, so it's no surprise that I have I/O wait problems from time to time. When I stress test the server with large files in Nextcloud or via FTP, I get I/O waits around 40% and a warning in glances. Speeds are around 28 MB/s download and 25 MB/s upload; the maximum should be 31 MB/s. That speed I do achieve from other servers.
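For anyone sanity-checking those numbers: the 31 MB/s ceiling is just the 250 Mbps line rate divided by 8 bits per byte, before any protocol overhead. A quick one-liner to do the conversion:

```shell
# Line rate in Mbit/s -> theoretical maximum in MB/s (ignores TCP/protocol overhead).
awk 'BEGIN { printf "%.2f MB/s\n", 250 / 8 }'
```

So 28 MB/s observed against a 31.25 MB/s theoretical maximum is already close to line rate; the remaining gap is overhead plus disk wait.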

My question is: what can I upgrade? Logic tells me the HDDs, but is there a point if the motherboard has only 3 Gb/s SATA? SSDs would help a lot with IOPS, but they are expensive. I'm planning to upgrade to 2x 4 TB NAS drives, and that much storage would be very expensive in SSDs. Would replacing the system drive with an SSD help this problem a bit? That much I can afford. Maybe I can use an SSD as a cache disk only?

RAM is probably the most important. Perhaps you haven't optimized its usage yet, so there could be some potential without new investments. The database in particular can be optimized a lot, and the I/O operations can drop. To reduce the load on the database, use Redis as the file-locking cache; it takes a lot of pressure off the database.
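For reference, the Redis file-locking setup is just a few lines in Nextcloud's config/config.php. A minimal sketch, assuming Redis is running locally on its default port 6379 (adjust host/port to your installation):

```php
// config/config.php -- local memcache plus Redis for transactional file locking.
'memcache.local'   => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'localhost',
    'port' => 6379,
],
```

APCu here is one common choice for the local cache; the key part for database load is the 'memcache.locking' line.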

With iotop you can check which process is doing all the read/write operations. It's often the database.
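A typical invocation for this kind of check (needs root; flag meanings: -o shows only processes currently doing I/O, -P aggregates threads into processes, -a shows accumulated totals instead of instantaneous bandwidth):

```shell
# Watch which processes are generating disk I/O.
sudo iotop -oPa

# Non-interactive alternative from the sysstat package: one sample per second.
pidstat -d 1
```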

I think that already looks quite good. With iperf and other tools you can check the real connection speed. You can then test with sftp or similar, where you can see the impact of writing to disk; Nextcloud will still be a bit slower on top of that, since it does a lot of database operations as well.
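A rough sketch of that comparison, assuming iperf3 is installed on both ends (the hostname is a placeholder for your server):

```shell
# On the server: start iperf3 in listen mode.
iperf3 -s

# On a client: measure raw TCP throughput for 30 seconds (no disk involved).
iperf3 -c server.example.com -t 30

# Then transfer a large file over SFTP to see how much disk writes cost you.
sftp user@server.example.com
```

If iperf3 reaches close to 250 Mbps but SFTP does not, the gap is the disk (or the CPU cost of encryption), not the network.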

I have set up the caches (Redis as the file-locking cache, for example) as I read the documentation, and I applied its recommendations. speedtest-cli shows the full 250 Mbps, so this is a hardware or OS limitation. It's a shame that around 10 Mbps goes to waste.

Thanks for your answer.

Make sure that the database is on a separate physical disk. When SQL is the bottleneck, everything else will be slow too.

Try adjusting the InnoDB settings to tune it. Use the mysqltuner script.
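mysqltuner is a Perl script that reads your server's status variables and suggests settings; it doesn't change anything itself. A typical run looks roughly like this (the download URL is the project's GitHub raw path at the time of writing; check the project page for the current one):

```shell
# Fetch and run MySQLTuner against a local MySQL/MariaDB instance.
wget -O mysqltuner.pl https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl --user root --password 'yourpassword'
```

Let the server run under normal load for a day or two before acting on its recommendations, since they are based on accumulated statistics.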

This is a sample config I'm trying on an old 2-core 1.6 GHz AMD E-350.

Warning: do not use O_DSYNC and doublewrite=0 unless you are using a journaling filesystem like btrfs…

[mysqld]

innodb_buffer_pool_size         = 2G
innodb_buffer_pool_instances    = 2
innodb_file_per_table           = 1
innodb_compression_algorithm    = lz4
innodb_compression_default      = 0
innodb_strict_mode              = 1
innodb_doublewrite              = 0
innodb_flush_method             = O_DSYNC

query_cache_type        = 1
query_cache_size        = 256M
query_cache_limit       = 16M
query_cache_strip_comments  = 1

join_buffer_size        = 16M

skip-name-resolve       = 1

My DB is on a separate drive. I have a system HDD, plus two data HDDs in RAID1.

I'll check those configs out, thanks.