Mariadb process killed by OOM killer

Hi,

I'm nearly done (I think) installing my Nextcloud. The system is up and running, except that mariadb doesn't seem to run for more than about 15 minutes without running out of memory.

  • nextcloud 27.0.2
  • mariadb 10.11.3-1
  • apache 2.4.57-2
  • Debian Bookworm (12).

When I start a sync, it runs fine for a few minutes, then stops with server errors ("No connection … Server replied "500 Internal Server Error""). Tracking them down, I find the culprit: mysql has stopped:

Aug 19 04:15:58 oleracea systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Aug 19 04:15:58 oleracea systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
Aug 19 04:15:58 oleracea systemd[1]: mariadb.service: Failed with result 'oom-kill'.
Aug 19 04:15:58 oleracea systemd[1]: mariadb.service: Consumed 17.205s CPU time.
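
(For reference, entries like these can be pulled straight from the systemd journal; the exact filter below is just an example:)

journalctl -u mariadb.service --since today | grep -iE 'oom|killed'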

I checked the manual, and added the following to my php config:

root@oleracea:/etc/php/8.2/mods-available# cat mysqli.ini

; configuration for php mysql module
; priority=20
extension=mysqli.so

[mysql]
mysql.allow_local_infile=On
mysql.allow_persistent=On
mysql.cache_size=2000
mysql.max_persistent=-1
mysql.max_links=-1
mysql.default_port=
mysql.default_socket=/var/lib/mysql/mysql.sock # Debian squeeze: /var/run/mysqld/mysqld.sock
mysql.default_host=
mysql.default_user=
mysql.default_password=
mysql.connect_timeout=60
mysql.trace_mode=Off

And my sql config:

/etc/mysql# cat my.cnf

…

[client-server]
#Port or socket location where to connect
#port = 3306
socket = /run/mysqld/mysqld.sock

#Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/

##added from nextcloud documentation:

[server]
skip_name_resolve = 1
innodb_buffer_pool_size = 128M
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_max_dirty_pages_pct = 90
query_cache_type = 1
query_cache_limit = 2M
query_cache_min_res_unit = 2k
query_cache_size = 64M
tmp_table_size= 64M
max_heap_table_size= 64M
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1

[client]
default-character-set = utf8mb4

[mysqld]
character_set_server = utf8mb4
collation_server = utf8mb4_general_ci
transaction_isolation = READ-COMMITTED
binlog_format = ROW
innodb_file_per_table=1
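
(For what it's worth, you can ask the server what the big global buffers in a config like this add up to. This is only a rough floor, since it ignores per-connection buffers such as sort and join buffers, and the query is just a sketch:)

mysql -e "SELECT (@@innodb_buffer_pool_size + @@innodb_log_buffer_size
  + @@query_cache_size + @@key_buffer_size) / 1024 / 1024 AS global_buffers_mb;"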

After restarting mariadb with these config details, the problem persists. What else can I do to keep mysql from running out of memory?

Thanks!

Tyler

Update: after hours reading forums, the nextcloud docs, and the mariadb docs, I've been unable to solve the problem. I tried reducing the php memory limits to 512MB (I have 1GB available on the server), and made some other tweaks in the mysql config, as suggested by the mariadb docs.

When all else failed, I set up ps to track memory usage. When I start mariadb, it goes to about 220MB RAM, with Apache taking about the same, spread over several processes. When mariadb crashes, the Apache2 memory usage jumps to about 400MB. No errors in the apache logs.
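
(The tracking was essentially just a loop around ps along these lines; the log path and interval here are illustrative, not exactly what I ran:)

while true; do
    date >> /tmp/mem-track.log
    ps -eo rss,pmem,comm --sort=-rss | head -n 15 >> /tmp/mem-track.log
    sleep 60
done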

I've increased the RAM to 2GB, and now the system seems to work ok. This doubles my hosting costs though, so if anyone can suggest how to continue my troubleshooting, please let me know!

One other thought: I don't have swap set up. I understand swap isn't good for mysql, but maybe it would give Apache2 something to use when total memory goes too high?

PS: another thought, from the nextcloud manual:

Nextcloud needs a minimum of 128MB RAM per process, and we recommend a minimum of 512MB RAM per process.

I didn't notice this was 512 MB per process. How many processes does NC use? Can I limit it to a single process? I'm the only user of this system; would one process be sufficient?

Keep in mind that the memory requirements are for the Nextcloud app running within your app/web server. If you're running the database on the same host, then you also need to meet the memory requirements for whatever database server you've chosen to run as well. Same goes for Redis (if using) and your reverse proxy (if using).

Yes, at least set up swap. Swap is never good performance-wise for mysql (or anything, really), but it's better than getting an OOM error and crashing hard. It also gives you the opportunity to at least run things and start collecting data, to see what you need RAM-wise to optimize things and/or what parameters to maybe adjust.
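
If you go that route, a basic swapfile on Debian only takes a few commands (1G here is just an example size):

fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# to keep it across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0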

It varies depending on a lot of variables, unfortunately. Fewer processes means fewer simultaneous transactions can be handled (resulting in either performance issues for clients or service unavailability). I can't imagine it working well with one process, but running with 5-10 (on average) is probably fairly common for small deployments.

If you're using mod_php (I believe you are), you can see what your apache2 processes are consuming. They're typically ~90M each from what I'm seeing today, and you might have 5-10 of them in a typical default Apache installation when idle (RSS column):

www-data@134e78f9c952:~/html$ ps auxww
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.1 405688 46596 ?        Ss   Aug18   0:18 apache2 -DFOREGROUND
www-data      46  0.0  0.2 413052 83368 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      47  0.0  0.2 413340 84240 ?        S    Aug18   0:03 apache2 -DFOREGROUND
www-data      48  0.0  0.2 411152 88540 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      49  0.0  0.2 487144 84584 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      50  0.0  0.2 413584 88384 ?        S    Aug18   0:03 apache2 -DFOREGROUND
www-data      51  0.0  0.2 408876 78372 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      52  0.0  0.2 413036 81100 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      53  0.0  0.2 484848 82308 ?        S    Aug18   0:02 apache2 -DFOREGROUND
www-data      54  0.0  0.3 497268 102460 ?       S    Aug18   0:12 apache2 -DFOREGROUND
www-data      56  0.0  0.2 489028 88260 ?        S    Aug18   0:03 apache2 -DFOREGROUND
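
(If you want the total rather than the per-process numbers, something like this sums the RSS column:)

ps -C apache2 -o rss= | awk '{sum+=$1} END {printf "%.0f MB\n", sum/1024}'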

Here's the matching idle mariadb memory usage from the same test environment:

www-data@437978e46447:/$ ps auxww 
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mysql          1  0.1  0.5 2982548 176212 ?      Ssl  Aug18   7:37 mariadbd --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW --socket=/usr/local/run/mysql.sock --skip-networking=1

Most of your memory consumption is going to come from:

  • Your DB server
  • Your specific DB server configuration
  • Your web/app server software
  • Your specific web/app server configuration[1]

[1] This is a typical default Apache configuration (i.e. what Debian does by default in /etc/apache2/mods-enabled/mpm_prefork.conf):

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxRequestWorkers: maximum number of server processes allowed to start
# MaxConnectionsPerChild: maximum number of requests a server process serves

StartServers            5
MinSpareServers         5
MaxSpareServers         10
MaxRequestWorkers       150
MaxConnectionsPerChild  0
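
For a single-user box you can turn those numbers way down, which caps the worst-case PHP memory. The values below are only an illustration, not an official recommendation:

# /etc/apache2/mods-enabled/mpm_prefork.conf, trimmed for a low-memory host
StartServers            2
MinSpareServers         2
MaxSpareServers         4
MaxRequestWorkers       10
MaxConnectionsPerChild  500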

You might be able to reduce your memory requirements materially by using sqlite instead of mariadb/mysql (see the manual; there are tradeoffs, but plenty of people do use it, or at least start out with it). I haven't benchmarked memory usage with sqlite lately, so I'm not sure exactly what it uses.