How to run Nextcloud with Postgres in Docker?

I would like to run Nextcloud with Postgres as the database and deploy that on my QNAP server. I have experimented with the Nextcloud Docker image and that works fine. The problems start when I try to connect it to a PostgreSQL database: when I put nextcloud and postgres together in a docker-compose file, I cannot get Nextcloud to use the PostgreSQL database. Nextcloud keeps complaining that I'm still using SQLite. Could someone help me out with this?

My docker-compose.yml:

version: '3'

volumes:
  nextcloud:

services:
  db:
    image: postgres:10.12-alpine # use version 10.12 of postgres, still works with pgadmin3
    restart: always
    ports:
      - '5433:5432' # publish host port 5433, forwarded to 5432 in the container
    volumes:
      - /share/files/dbms/pg-data-nextcloud:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=<password>
    
  app:
    image: nextcloud
    restart: always
    ports:
      - 8082:80
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=<password>
    depends_on:
      - db

I think you’re missing the POSTGRES_DB variable on the Nextcloud container.

https://github.com/docker-library/docs/blob/master/nextcloud/README.md#auto-configuration-via-environment-variables

Also keep in mind these env vars just answer the questions of the initial setup wizard. If your instance was already running on SQLite, I don't think you can change it this way. Test on a brand-new compose project.
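If you want to retest from scratch, something like this wipes the stack including its named volumes (careful: it deletes any Nextcloud data stored in them, and note that your Postgres bind mount under /share/... is not removed by it):

docker-compose down -v   # stop the containers and remove the project's named volumes
docker-compose up -d     # recreate everything; the install wizard runs again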

do you want to start new, or convert your sqlite installation to postgres?

Thanks @KarlF12 and @Reiner_Nippes,

It took me another day but I solved the problem. Your questions made it clear that I had not explained well what I wanted.

I want to create a new Nextcloud service running on a new PostgreSQL service. That can be easily realised with docker-compose, and the docker-compose.yml listed below does it in an instant. @KarlF12 you were right about the POSTGRES_DB variable: I set it consistently throughout the docker-compose file (- POSTGRES_DB=nextcloud_db), but that was not enough. The remaining question was: on which host is PostgreSQL to be found?

The trouble is that nextcloud and postgres run as different services, and thus as different hosts, in the Docker virtual network. What I did not realise (I am really a Docker noob, and a Nextcloud noob too) is that each service gets its own address: one at 172.x.0.2 and the other at 172.x.0.3, where x increments at every docker-compose up. I had to set the POSTGRES_HOST variable, but because the number changes on each run I could not use the IP address.

However, when both instances are running there are two containers: nextcloud_db_1 and nextcloud_app_1. These names can be used as host names, so by setting POSTGRES_HOST=nextcloud_db_1 Nextcloud was able to find the host on which the PostgreSQL service was running.
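For anyone curious, you can see those names and addresses yourself. A sketch, assuming the compose project is named nextcloud (compose names the network <project>_default):

docker ps                                 # shows nextcloud_db_1 and nextcloud_app_1
docker network inspect nextcloud_default  # lists the containers and their 172.x.0.y addresses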

That was a hefty introduction to Docker networking for me, and to Nextcloud too. But with the current setup I can create a Nextcloud instance using PostgreSQL without fiddling with any config file, and it is easily deployable. Two considerations:

  • When you deploy, look carefully at which data directory PostgreSQL uses; that really depends on where you deploy this docker-compose file, and it is probably the only thing that changes per deployment. If you want it really simple you may opt for a named volume, e.g. - pgdata:/var/lib/postgresql/data. Don't forget to declare pgdata under the top-level volumes: key (see the sketch after this list).
  • It is not necessary to expose the PostgreSQL port, but I want to set up a good backup regimen for Nextcloud (still to figure out how) and it might be useful to reach the PostgreSQL service from the host. I map it to host port 5433 because I already have another postgres Docker service running on my server.
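The named-volume variant mentioned above would look something like this (pgdata is just an example name):

volumes:
  nextcloud:
  pgdata:   # Docker manages this under /var/lib/docker/volumes

services:
  db:
    volumes:
      - pgdata:/var/lib/postgresql/data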

docker-compose.yml:

version: '3'

volumes:
  nextcloud:

services:
  db:
    image: postgres:10.12-alpine # use version 10.12 of postgres, still works with pgadmin3
    restart: always
    
    # Postgres listens on port 5432 inside the Docker network, so nextcloud (and any
    # other service in this compose file) reaches it on port 5432.
    # On the host, however, the port is *published* as 5433 and mapped to 5432 in
    # the container.
    ports:
      - 5433:5432 # host port 5433 maps to container port 5432
    volumes:
      - <your host/server data directory>:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=nextcloud_db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=<Password>
    
  app:
    image: nextcloud
    restart: always
    ports:
      - 8082:80
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=nextcloud_db_1 # container name assigned by docker-compose (<project>_<service>_1)
      - POSTGRES_DB=nextcloud_db
      - POSTGRES_USER=postgres # will access postgres over 5432
      - POSTGRES_PASSWORD=<password>
    depends_on:
      - db

no need to expose the port. just run pg_dump in the container.

further explanation here:

This part is actually really easy. Docker resolves the service name you assign to the container to the IP address it gives it. So: POSTGRES_HOST=db

You can run pg_dump in the container as mentioned, and create a mount point for where you dump it so the dump is easily accessible on the host. This is probably the better option.

You can also expose the port only to 127.0.0.1 so only the host running the container can access the port.
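Keeping the 5433 mapping from this thread, that would look like:

    ports:
      - '127.0.0.1:5433:5432'   # reachable from the host itself only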

@Reiner_Nippes, what I didn't gather from the Nextcloud docs on backup: is it sufficient to back up the database, or should I back up /var/www/html as well?

I have to confess I don't see how I can run pg_dump from inside the container. If you can point me to a place where such things are explained, I'd welcome that.

I am not sure whether I am up to restic. My simple home environment may not need it.

@KarlF12, Docker surprises me each time with simpler solutions than I dream up myself. I did try this solution, but it didn't work, most probably because of the problems you signalled earlier. I'll try it again.

As for pg_dump: the same as I mentioned to @Reiner_Nippes, if you can point me to some info/website/whatever, I'd be grateful.

I forgot to say what a great combination this all is: docker with nextcloud and postgres. I am really surprised by how well it all combines.

It is NOT sufficient to only back up the database because it doesn’t contain any of your files. It does contain many other things so you do need it.

Also note that if you're using Docker, the /var/www/html you need is inside a container or mount point, with the exact location depending on your setup. Based on your compose file, you're using a named Docker volume, not a bind mount. If you were to back up /var/www/html on the host OS, that would not be where your data actually is.

You can run commands inside a container like this:

docker exec -it -u user container command

For example if you wanted to open a shell in your Nextcloud container to run OCC:

docker exec -it -u www-data nextcloud_app_1 /bin/bash
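Or run a one-off OCC command without an interactive shell (container name as used in this thread; the image's working directory is /var/www/html, where occ lives):

docker exec -u www-data nextcloud_app_1 php occ status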

Now, since you're using a named Docker volume (meaning not mounted to a folder on the host), you will need to copy data out of the container, probably by running pg_dump with docker exec and then copying the result out to the host with docker cp.
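A sketch of that exec-then-cp approach, using the container and database names from this thread:

# dump the database to a file inside the container...
sudo docker exec nextcloud_db_1 pg_dump -U postgres -f /tmp/nextcloud_db.sql nextcloud_db
# ...then copy the dump out to the host
sudo docker cp nextcloud_db_1:/tmp/nextcloud_db.sql ./nextcloud_db.sql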

pg_dump writes to stdout. it doesn’t matter if you run it inside a container.
so you simply redirect the stdout to your backup folder.

sudo docker exec nextcloud_db_1 pg_dump -c -U postgres nextcloud_db > /path/to/your/backup/folder/db_dump_nextcloud.sql

the command executed "inside" the container ends before the >, so the output is written to your host, not inside the container.
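restoring works the same way in the other direction. docker exec -i attaches your stdin to the command inside the container:

sudo docker exec -i nextcloud_db_1 psql -U postgres nextcloud_db < /path/to/your/backup/folder/db_dump_nextcloud.sql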

@nocom sudo docker inspect --format '{{ .Mounts }}' nextcloud_app_1 will tell you where to find the volume nextcloud in /var/lib/docker/volumes.

you defined this here, in your compose file (the nextcloud:/var/www/html entry under volumes:).

you can check with sudo ls -l /var/lib/docker/volumes/...../_data
replace /...../ with the id given by the docker inspect command.

at least include the config path in your backup.

or put the config folder in a separate folder on your host.

    volumes:
      - nextcloud:/var/www/html
      - /opt/nextcloud/config:/var/www/html/config

(copy your existing config before you add this line to your docker-compose file. :wink:)
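to seed that folder you can copy the config out of the running container first, e.g.:

sudo mkdir -p /opt/nextcloud/config
sudo docker cp nextcloud_app_1:/var/www/html/config/. /opt/nextcloud/config/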

It took some time before I could apply all your suggestions.

POSTGRES_HOST=db works fine and makes things a lot simpler, thanks for that.

docker exec is great. It is almost magic.

Backing up: I succeeded at last, but it took some time. What I wanted to do was transfer the data that now lives in the named volume "nextcloud" to a directory outside Docker, so that it is accessible from the host. That way I could easily back it up, and if I succeeded I would also have a procedure for backup and restore.

docker cp was mentioned in the replies. I didn't know it, and it seemed exactly what I needed, but it doesn't work well here. To make a good backup I need a copy of the data that preserves permissions and ownership. docker cp -a should do the trick, but it doesn't: in my case docker cp -a behaves exactly like docker cp, and all copied files end up owned by me or by root (when using sudo). I couldn't find any mention of this problem and decided to give up on this angle.

The Docker documentation provides a nifty way of making backups: mount the volume in a throwaway container using --volumes-from and tar the data out of it.

docker run --rm --volumes-from nextcloud_app_1 -v $(pwd):/backup nextcloud tar cvf /backup/backup.tar /var/www/html

Unpack it on the host, stop and remove the container, modify the compose file so that the named volume becomes a bind mount pointing at the unpacked files, and start the container again: everything works as if the named volume never existed. That directory I can now easily back up and restore (a sketch of the steps follows below). Oh, the joy of it all.
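For the record, my restore steps looked roughly like this; the target directory is just my choice, and --strip-components needs GNU tar:

docker-compose down   # stop and remove the containers
mkdir -p /share/files/nextcloud-html
# the archive stores paths as var/www/html/..., so strip those three components
sudo tar xvf backup.tar -C /share/files/nextcloud-html --strip-components=3

and in docker-compose.yml the named volume becomes a bind mount:

    volumes:
      - /share/files/nextcloud-html:/var/www/html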

Next stop is secrets, my passwords are a bit too exposed.

Thanks for all your help and suggestions @KarlF12 and @Reiner_Nippes! I never would have gotten this far without your helpful advice.

Two other things you can do here, just for reference. If you use a host-mounted (bind) volume instead of a normal Docker volume, the permissions will be correct and you won't need docker cp at all: the files are directly accessible on the host at the mount point. The host may show a numeric UID if it doesn't know the user, but the permissions will be correct.

Another option is to tar the backup from inside the container and have tar preserve permissions, then use docker cp to copy the tar out.

I’m not sure on the Docker secrets. I was looking into that a while ago myself, but the docs seemed to indicate that was for swarm mode only.

Note : Docker secrets are only available to swarm services, not to standalone containers. To use this feature, consider adapting your container to run as a service. Stateful containers can typically run with a scale of 1 without changing the container code.
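If it helps in the meantime: not real secrets, but compose's variable substitution at least keeps the password out of the compose file itself. Put POSTGRES_PASSWORD=... in a .env file next to docker-compose.yml (compose reads it automatically) and reference it like this:

    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}   # substituted from .env at compose time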

What a pity! I'll have to see what other methods I can use to hide my passwords. Oh well, it's all part of the Docker education, I guess. I already use bind mounts for the postgres and nextcloud directories for the exact reason you mentioned, and I have implemented a backup regime.

I now have Nextcloud and it is surprisingly well integrated with GNOME and Thunderbird. It's a joy to use.

Thanks and greetings!