Nextcloud docker changing mysql variables

I’ve installed Nextcloud as a Docker image on my Raspberry Pi 3B+ with an SSD.
It is behind Traefik (reverse proxy; it is a local storage server for now).
docker-compose files

I’ve got the following problem:
Uploading many small files is very slow, and based on my research that is because MySQL flushes the log to disk on every transaction. What I have to do is change (along with probably some other variables, too)

innodb_flush_log_at_trx_commit = 2

which should be inside /etc/mysql/my.cnf .

So I tried changing it by:

mysql -u root -p

then

MariaDB [(none)]> show variables like 'innodb_flush_log%';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| innodb_flush_log_at_timeout    | 1     |
| innodb_flush_log_at_trx_commit | 1     |
+--------------------------------+-------+
2 rows in set (0.00 sec)

changing the value:

MariaDB [(none)]> set global innodb_flush_log_at_trx_commit = 2;
Query OK, 0 rows affected (0.00 sec)

then checking:

MariaDB [(none)]> show variables like 'innodb_flush_log%';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| innodb_flush_log_at_timeout    | 1     |
| innodb_flush_log_at_trx_commit | 2     |
+--------------------------------+-------+
2 rows in set (0.00 sec)

BUT after restarting the db container I get:

MariaDB [(none)]> show variables like 'innodb_flush_log%';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| innodb_flush_log_at_timeout    | 1     |
| innodb_flush_log_at_trx_commit | 1     |
+--------------------------------+-------+
2 rows in set (0.00 sec)

So how do I change those variables inside the container?
I’ve also found the Tuning Primer script that some on the forum refer to (Example), but how should I execute it inside the container? How would I even get it in there?
Also, should I set up some sort of Redis cache? It’s just one user :wink: .


Side question:
When

innodb_flush_log_at_trx_commit = 2

and an outage:

Because the flush to disk operation only occurs approximately once per second, you can lose up to a second of transactions in an operating system crash or a power outage.
[…]

occurs, where exactly does the problem lie? Is there a database entry for a file but no file, or is there a file but no database entry? Couldn’t something like that be fixed by some sort of rescan, like occ maintenance:repair or similar, with the sync rescheduled?

nope. you could change it in the container but it’s not persistent.

so you create your own ncp-mysql.cnf file containing innodb_flush_log_at_trx_commit = 2 and add here

a line

    -  /path/to/ncp-mysql.cnf:/etc/mysql/conf.d/ncp-mysql.cnf:ro

(and make sure there is a !includedir /etc/mysql/conf.d/ statement in the my.cnf file.)
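as a sketch, the custom fragment could look like this (the path and filename are just examples; any .cnf file in that folder gets picked up):

```ini
# /path/to/ncp-mysql.cnf : example fragment, mounted read-only into the container
[mysqld]
innodb_flush_log_at_trx_commit = 2
```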

you can also copy and modify the /etc/mysql/my.cnf file and add

    -  /path/to/my.cnf:/etc/mysql/my.cnf:ro
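to get a copy of the file out of the running container in the first place, `docker cp` works from the host without needing any extra tools inside the container (container name `db` assumed):

```shell
# copy the current my.cnf from the container to the host for editing
docker cp db:/etc/mysql/my.cnf ./my.cnf
```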

in the folder roles/container you’ll find a complete setup of nextcloud.


but maybe for one user on raspi it’s too much. :wink:

docker exec -it db sh -c "wget -O - https://raw.githubusercontent.com/BMDan/tuning-primer.sh/master/tuning-primer.sh | bash"

if you have no bash and/or wget in the container you have to install them first

docker exec -it --user root db sh -c "apt update && apt install -y bash wget"

(didn’t test the syntax. but that’s the way it works.)

occ files:scan --all

if you want to use the host’s cron:
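a sketch of how the occ call and a host-side cron entry could look (the container name `app`, the `www-data` user, and the cron.php path are assumptions about your setup):

```shell
# run occ inside the app container as the web server user
docker exec -u www-data app php occ files:scan --all

# hypothetical host crontab entry for Nextcloud background jobs, every 5 minutes:
# */5 * * * * docker exec -u www-data app php -f /var/www/html/cron.php
```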


wow @Reiner_Nippes
Thanks.

1.

and what does the :ro mean?
I’ll probably use it with

this config from the documentation


[server]
skip-name-resolve
innodb_buffer_pool_size = 128M
innodb_buffer_pool_instances = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_max_dirty_pages_pct = 90
query_cache_type = 1
query_cache_limit = 2M
query_cache_min_res_unit = 2k
query_cache_size = 64M
tmp_table_size= 64M
max_heap_table_size= 64M
slow-query-log = 1
slow-query-log-file = /var/log/mysql/slow.log
long_query_time = 1

[client-server]
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/

[client]
default-character-set = utf8mb4

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
transaction_isolation = READ-COMMITTED
binlog_format = ROW
innodb_large_prefix=on
innodb_file_format=barracuda
innodb_file_per_table=1

So I’ll be setting this up.

2.
If I rebuild the db container, how would I migrate the current state of Nextcloud into it?

Or do I have to look up how to do a database dump and restore?
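a dump/restore round trip could look roughly like this (container name, database name, and the password variable are assumptions about your compose setup):

```shell
# dump the nextcloud database to a file on the host
docker exec db sh -c 'exec mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" nextcloud' > nextcloud.sql

# restore it into a (new) db container later
docker exec -i db sh -c 'exec mysql -u root -p"$MYSQL_ROOT_PASSWORD" nextcloud' < nextcloud.sql
```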

EDIT:

I reckon I should copy the my.cnf anyway, because of this. But using an extra file is cleaner, so I would only add the include part.

EDIT2:
… ^^
inside the my.cnf in the container there is already a line:

!includedir /etc/mysql/conf.d/

do I need to add the * or is the / enough to go through the folder?

read only. this file can’t be changed from within the container.
security feature. if an attacker gets into your container, he can’t change it.

beware NOT to put this into a .cnf file in the /etc/mysql/conf.d/ folder. that would include the file infinitely (an include loop).
and yes, all files in the conf.d folder are included.

the container is stateless. that’s the trick with containers.
well. that is to say you have to set it up in a way that all YOUR data is in a volume, and that when you restart the container (or pull a new image and start that) you use the same volume.

it’s just wording. but you rebuild docker images. in fact when you pull an image from docker hub it’s prebuilt. you run a container based on that image.
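with docker-compose the usual update cycle looks like this (service name `db` assumed); the named volume survives the recreation:

```shell
docker-compose pull db      # fetch the new (prebuilt) image from the registry
docker-compose up -d db     # recreate the container from it; volumes are reattached
```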

I noticed, but you were already replying…

exactly what data? do you mean the database-part of the container?
Or better: what directory?
EDIT:
from

Database:

  • /var/lib/mysql MySQL / MariaDB Data
  • /var/lib/postgresql/data PostgreSQL Data

$ docker run -d \
    -v db:/var/lib/mysql \
    mariadb

which I already got as a volume, just gotta make it external before composing again
EDIT END
Currently everything from the db container is within the corresponding db volume

external declaration of the volumes (I’d better do that for the user files of Nextcloud)

I meant just doing

sudo docker-compose up -d 

again.

if you want to put everything in different volumes:

$ docker run -d \
    -v nextcloud:/var/www/html \
    -v apps:/var/www/html/custom_apps \
    -v config:/var/www/html/config \
    -v data:/var/www/html/data \
    -v theme:/var/www/html/themes/<your_custom_theme> \
    nextcloud

if you want to put the “data directory” outside the web root.

$ docker run -d \
    -e NEXTCLOUD_DATA_DIR=/var/nc-data \
    -v nextcloud:/var/www/html \
    -v apps:/var/www/html/custom_apps \
    -v config:/var/www/html/config \
    -v data:/var/nc-data \
    -v theme:/var/www/html/themes/<your_custom_theme> \
    nextcloud

docker inspect nextcloud will give you the path to the “named volume”. it’s a directory somewhere in the host’s file system.
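if you only want the host path of a named volume, `docker volume inspect` can print it directly (volume name `nextcloud` assumed):

```shell
docker volume inspect nextcloud --format '{{ .Mountpoint }}'
# typically a directory under /var/lib/docker/volumes/<name>/_data
```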

if you want to put things on a mounted fs:

$ docker run -d \
    -v /opt/nextcloud/root:/var/www/html \
    -v /opt/nextcloud/apps:/var/www/html/custom_apps \
    -v /opt/nextcloud/config:/var/www/html/config \
    -v /opt/nextcloud/data:/var/www/html/data \
    -v /opt/nextcloud/theme:/var/www/html/themes/<your_custom_theme> \
    nextcloud

this is easy to backup. :wink:

plus the database volumes.

this would just start the containers. and you are up&running again.

but. if you run in between docker exec db sh -c "apt update && apt install -y wget bash" and then docker stop db && docker rm db, the container with wget and bash would be removed and a new container would be created from the image. the image is defined by the Dockerfile.

if you want to add files to the container “permanently” you either add them via volumes or you add them to the Dockerfile.
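a minimal sketch of the Dockerfile route, baking the tools into your own image (the base image tag is an assumption; pick the one your compose file uses):

```dockerfile
# extends the official mariadb image with wget and bash preinstalled
FROM mariadb:10.4
RUN apt-get update \
 && apt-get install -y --no-install-recommends wget bash \
 && rm -rf /var/lib/apt/lists/*
```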