Tutorial for running Nextcloud in rootless Podman with MariaDB, Redis and a Caddy webserver, all behind a Caddy reverse proxy

Hi Everyone,

Hopefully I can help a few users who are trying to run Nextcloud using Podman behind a Caddy reverse proxy. For a while I struggled to get it working properly, but after a lot of research and help from threads in this forum, I finally have Nextcloud working perfectly.

So I run all my containers, from Home Assistant to Nextcloud, in rootless Podman and I love it. I run each container using systemd unit files rather than podman-compose or docker-compose, which has the added benefit of letting me upgrade all my containers with a single command. I will show the individual run commands for each container and how to generate the systemd unit files.

As mentioned in the title, I run Nextcloud with the alpine FPM container (so no integrated webserver), MariaDB as the database, Redis as the cache container and Caddy as the webserver. I run all of these in a Pod, so the containers communicate with each other over localhost inside the pod, and they all sit behind a separate Caddy reverse proxy outside of the Pod.

None of the containers are run using network=host, so
--net slirp4netns:allow_host_loopback=true,port_handler=slirp4netns is used with the Caddy reverse proxy container, and the IP 10.0.2.2 is used in the reverse proxy config. It is important not to change this IP or it will not work; the reason behind it (10.0.2.2 is how slirp4netns exposes the host's loopback interface to the container) is outside the scope of this tutorial, but a simple web search will help with understanding.

Now to the fun stuff :

I will create a few directories and add a Caddyfile for the pod webserver

mkdir /YOUR_STORAGE_DRIVE_LOCATION/nextcloud
mkdir -p ~/nextcloud/{db,caddy,html}
cd nextcloud/caddy
mkdir caddy_data   # pre-creating this directory avoided a Podman error for me
nano Caddyfile
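The directory creation above can also be done in one go; a minimal sketch (the base path under $HOME matches the run commands further down, while the storage-drive data directory is left out since its location is your own):

```shell
# One-shot version of the directory setup above.
# The storage-drive data directory is not created here -- use your own location.
base="${HOME}/nextcloud"
mkdir -p "$base"/db "$base"/html "$base"/caddy/caddy_data
ls -d "$base"/caddy/caddy_data   # confirm the caddy_data directory exists
```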

this is the Webserver Caddyfile

:80 {

        root * /var/www/html
        file_server

        php_fastcgi 127.0.0.1:9000

        redir /.well-known/carddav /remote.php/dav 301
        redir /.well-known/caldav /remote.php/dav 301

        # .htaccess / data / config / ... shouldn't be accessible from outside
        @forbidden {
                path    /.htaccess
                path    /data/*
                path    /config/*
                path    /db_structure
                path    /.xml
                path    /README
                path    /3rdparty/*
                path    /lib/*
                path    /templates/*
                path    /occ
                path    /console.php
        }

        respond @forbidden 404

}

thanks to the Caddy community forum for all their help with this.
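One optional addition that is not in my original file: if large uploads get rejected at the proxy layer, Caddy v2's request_body directive can raise the allowed body size (the PHP upload limits inside the Nextcloud container still apply separately). A sketch, with an example size:

```
:80 {
        # ... directives from the Caddyfile above ...

        # optional: allow larger uploads through Caddy
        request_body {
                max_size 512MB
        }
}
```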

now we create the Pod

podman pod create --network slirp4netns:port_handler=slirp4netns --hostname nextcloud --name nextcloud -p 5080:80/tcp

next the MariaDB database

podman run --detach --label "io.containers.autoupdate=registry" --env MYSQL_ROOT_PASSWORD="randomrootpassword" --env MYSQL_DATABASE=nextcloud --env MYSQL_USER=nextcloud --env MYSQL_PASSWORD=YourDatabasePassword --volume ${HOME}/nextcloud/db:/var/lib/mysql/:Z --pod nextcloud --restart on-failure --name nextcloud-db docker.io/library/mariadb:10 --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW

then the Redis container

podman run --detach --label "io.containers.autoupdate=registry" --restart on-failure --pod nextcloud --name nextcloud-redis docker.io/library/redis:alpine redis-server --requirepass yourRedispassword

the Nextcloud container

podman run --detach --label "io.containers.autoupdate=registry" --env REDIS_HOST="127.0.0.1" --env REDIS_HOST_PASSWORD="yourRedispassword" --env MYSQL_HOST=127.0.0.1 --env MYSQL_DATABASE=nextcloud --env MYSQL_USER=nextcloud --env MYSQL_PASSWORD=YourDatabasePassword --volume ${HOME}/nextcloud/html:/var/www/html/:z --volume /YOUR_STORAGE_DRIVE_LOCATION/nextcloud:/var/www/html/data:z --pod nextcloud --restart on-failure --name nextcloud-app docker.io/library/nextcloud:fpm-alpine

and finally the Caddy webserver:

podman run --detach --label "io.containers.autoupdate=registry" --volume ${HOME}/nextcloud/caddy/caddy_data:/data:Z --volume ${HOME}/nextcloud/caddy/Caddyfile:/etc/caddy/Caddyfile:Z --volume ${HOME}/nextcloud/html:/var/www/html:ro,z --name nextcloud-caddy --pod nextcloud --restart on-failure docker.io/caddy:latest

Now you should be able to access the Nextcloud instance, complete the install and create an admin user by pointing your browser at http://NextcloudhostIP:5080. After the initial install, and once everything seems to work, you can generate the systemd unit files and enable the service.

As this pod and its associated containers run rootless under my user, all the generated systemd unit files need to live in the user's home directory, and from here on all systemctl commands need to be run with the --user flag:

cd ~/.config/systemd/user/
podman generate systemd --new --files --name nextcloud
systemctl --user enable pod-nextcloud.service

The Pod and all associated containers are now run by systemd, so note that you will no longer use podman commands to interact with the containers; to stop or restart a container, use systemctl like so:

systemctl --user stop container-nextcloud-app.service
systemctl --user restart container-nextcloud-app.service

It should also be noted that, in order for your containers and Pod to persist after you log out of SSH, you need to run:

loginctl enable-linger $USER

now you should have a working Nextcloud instance running on your local network. congrats :smile: :partying_face:

Now, to make it accessible from outside your local area network, you need the Caddy reverse proxy, ports open on your router directed at the server's host IP (please note this is not explained in this tutorial) and finally a domain name. I use duckdns for this purpose, but others will work too.

For the Caddy reverse proxy I use this simple Caddyfile:

YOUR_NEXTCLOUD_DOMAIN.duckdns.org {
        encode gzip

        header Strict-Transport-Security max-age=15552000;
        reverse_proxy http://10.0.2.2:5080
}

and this simple systemd unit file

# container-caddy_reverse_proxy.service

[Unit]
Description=Podman container-caddy_reverse_proxy.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm \
        -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
        --cidfile=%t/%n.ctr-id \
        --cgroups=no-conmon \
        --rm \
        --sdnotify=conmon \
        --replace \
        --detach \
        --label "io.containers.autoupdate=registry" \
        --name caddy_reverse_proxy \
        --volume ${HOME}/caddy/caddy_data:/data:Z \
        --volume ${HOME}/caddy/Caddyfile:/etc/caddy/Caddyfile:Z \
        --volume ${HOME}/caddy/caddy_config:/config:Z \
        --net slirp4netns:allow_host_loopback=true,port_handler=slirp4netns \
        -p 8080:8080 \
        -p 8443:8443 \
        docker.io/caddy:latest
ExecStop=/usr/bin/podman stop \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
        -f \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

You will probably have to create the directories used in these volumes beforehand, to prevent an error with Podman.

Please note, I run my reverse proxy on ports 8080 and 8443 and port forward 80 and 443 on my router to these respectively. You will need to adjust this to suit your requirements, but I would suggest using unprivileged ports like these for rootless containers to avoid issues.
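If you later want more services on the same host behind this reverse proxy, the Caddyfile above extends naturally with one site block per domain. A hedged sketch, where the second domain and port are hypothetical placeholders:

```
YOUR_NEXTCLOUD_DOMAIN.duckdns.org {
        encode gzip
        header Strict-Transport-Security max-age=15552000;
        reverse_proxy http://10.0.2.2:5080
}

# hypothetical second service published on another host port
YOUR_OTHER_DOMAIN.duckdns.org {
        reverse_proxy http://10.0.2.2:5081
}
```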

You won't be able to access the instance from your domain until you trust it (and any other domains) in Nextcloud's config.php. To do this, run an interactive shell in the Nextcloud container and run these occ commands; they will also clear some of the errors that Nextcloud shows in the Overview tab, such as trusted proxy errors and default_phone_region issues.

podman exec -it -u www-data nextcloud-app /bin/sh

php occ config:system:set trusted_domains 1 --value=192.168.1.160   # your host IP
php occ config:system:set trusted_domains 2 --value=YOUR_NEXTCLOUD_DOMAIN.duckdns.org
php occ config:system:set trusted_proxies 0 --value=192.168.1.160   # trusted_proxies is an array, so set it by index
php occ config:system:set overwrite.cli.url --value="https://YOUR_NEXTCLOUD_DOMAIN.duckdns.org"
php occ config:system:set overwriteprotocol --value="https"
php occ config:system:set default_phone_region --value="GB"   # use your own country code, e.g. 'GB' for Great Britain
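If the commands succeed, the relevant part of config.php (in the html volume) should end up looking roughly like this; index 0 of trusted_domains will hold whatever host you used during the initial install, and the values below are just the examples from above:

```php
'trusted_domains' =>
array (
  1 => '192.168.1.160',
  2 => 'YOUR_NEXTCLOUD_DOMAIN.duckdns.org',
),
'trusted_proxies' =>
array (
  0 => '192.168.1.160',
),
'overwrite.cli.url' => 'https://YOUR_NEXTCLOUD_DOMAIN.duckdns.org',
'overwriteprotocol' => 'https',
'default_phone_region' => 'GB',
```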

Your instance should now be accessible from outside your local area network.

Finally, to update all containers, run

podman auto-update

This works because of the --label "io.containers.autoupdate=registry" in every run command.
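Recent Podman versions also ship a podman-auto-update.timer that can be enabled per user (systemctl --user enable --now podman-auto-update.timer) to run this on a schedule. If your version doesn't ship it, a minimal sketch of an equivalent user timer (pair it with a matching podman-auto-update.service whose ExecStart is /usr/bin/podman auto-update):

```
# ~/.config/systemd/user/podman-auto-update.timer (sketch)
[Unit]
Description=Run podman auto-update daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```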

It should finally be noted that I run all of this on a Fedora server that uses SELinux; this is why the keen observer will notice the big 'Z' and little 'z' at the end of each volume option. In short, these tell Podman to relabel the volume contents for SELinux, Z for a private (container-exclusive) label and z for a shared one; a simple search will explain more.

(EDIT) A little addition I forgot to mention: how I use system cron for background jobs instead of AJAX.

I set the editor to nano and edit a crontab as my user:

export EDITOR=nano
crontab -e

I then add :

*/5 * * * * podman exec -t -u www-data nextcloud-app php -f /var/www/html/cron.php

This runs cron.php every 5 minutes. I then select the Cron (recommended) option under Administration settings => Basic settings in the web UI.
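Cron failures are easy to miss; a hedged sketch of a small wrapper script (the script name and log path are hypothetical examples) you could point the crontab entry at instead, so failures land in a log file:

```shell
#!/bin/sh
# nextcloud-cron.sh -- wrapper around the cron.php call so failures are logged.
# Log path is an example; put it wherever suits you.
LOG="${HOME}/nextcloud-cron.log"
if ! podman exec -t -u www-data nextcloud-app php -f /var/www/html/cron.php >>"$LOG" 2>&1; then
    echo "$(date) cron.php run failed" >>"$LOG"
fi
```

The crontab line then becomes `*/5 * * * * /path/to/nextcloud-cron.sh`.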

If you have any questions or adjustments that you think would benefit others, please comment below.

I can write another tutorial about running an Eturnal TURN server, should users request it. I would also like to thank @minWi for his tutorial, as it helped immensely.

Hope this helps others with using Podman.


Hi! First of all, thanks for this tutorial. I have some questions. I'm running a rootless Nextcloud pod myself, but did it a bit differently; to be clear, I'm running the standard Nextcloud with apache2 and an nginx outside of the pod. But that doesn't matter here.

What I'm interested in is this: I'm using

podman network create nextcloud-net

As well as attaching a hostname to every pod

podman run [...] --hostname nextcloud-app [...]
podman run [...] --hostname nextcloud-db [...]

Your tutorial doesn't do this, but goes for

podman pod create --network slirp4netns [...]

[…]

podman run [...] --pod nextcloud

Where, I think, what you do is create a new "controller" pod and attach the other containers to it. Now I wonder which is the better way: is there a benefit in creating such a pod that you attach everything to, compared with standalone pods that only share a network?

This might be a stupid question, I’m still pretty new to this topic.

I think if you go the Pod route, instead of standalone containers in a custom bridge network (or, as I think you're doing, one container per pod on a bridge network), then it's best to have all containers that depend on each other in the same pod, so they can talk to each other over localhost.

I have standalone containers outside of the Pod as well but not associated with Nextcloud and not in individual pods.

I think the benefit of pods, apart from letting containers talk over localhost, is to cluster all dependent containers in one group. So having a pod per container probably isn't optimal, but if it works, there's no need to break what doesn't need fixing.

However, I'm not an expert by any standard, so you might want to ask for more advice on the Podman Discord channel or their Matrix chat room.


Thanks for the great guide! I wonder how I can access the Nextcloud instance from within the local network? Adding the reverse proxy seems to remove the ability to do that.

It's the setting

overwriteprotocol => "https"

in config.php that does it. If you remove it, you can access and log in on localhost as well, but then Nextcloud shows an error that some URLs are being sent insecurely. You can access it locally via https://yourlocalIP:5080, but unfortunately it won't log in; I've not found a workaround for this yet.

Thank you for this tutorial. I now have a nextcloud running on Fedora Silverblue 39. I needed a few modifications to get everything working…

I needed to add --cgroup-manager=cgroupfs to the 'podman pod create' and 'podman run' commands.

The following occ commands, run inside the -app container, cleaned up some errors under the Admin Basic Settings:

php occ config:system:set default_phone_region --value='US'
php occ config:system:set trusted_proxies 0 --value='192.168.0.0/16'
php occ config:system:set overwriteprotocol --value='https'

Modify for your phone region and local network, of course. I may have been overly loose with my trusted_proxies setting, but it worked for me, and I trust my LAN.


Using the concepts I learned here, it was a simple thing for me to spin up a piwigo server on the same container host for photos!

Thanks for the guide.

I have followed it and ran into a problem

Everything went up nice and easy,

but the only thing i ran into is when i want to setup my nextcloud admin account in the beginning.

I get the following error

Error
Cannot create or write into the data directory /var/www/html/data

@wingzero I haven't run Nextcloud using Podman or containers since switching to NixOS, so I might not be able to debug this. However, if you're running Fedora or another OS with SELinux, my suspicion would be SELinux: try disabling it temporarily and see if you can run the container without issues.

Sounds like a permissions issue to me. Some things to check:

  • Make sure the directory is owned by the correct user. Fair warning, this gets a little messy with rootless Podman.
    You can read up here and here on how to properly assign permissions for rootless.

    • podman unshare chown -R user:group <host_directory>
  • Make sure your volumes are mounted with :z at the end. This option tells SELinux that the container is allowed to access that volume. You can use @greylinux1’s suggestion as a quick way to check if SELinux is the culprit.

    • --volume <your_nextcloud_data_dir>:/var/www/html/data:z

Hi, thanks for this tutorial.
Could you add the complete "podman run" command for starting your reverse proxy?
I could get most of it out of your systemd unit file, but I'm not sure I have the run command completely right.

EDIT: I see, you are starting the container directly as a systemd service.
I thought Podman could create the service as it does for pods, hence my initial confusion.

EDIT 2: Having Podman create the systemd unit file is also possible, using the same command as for the pod:
→ "podman generate systemd --new --files --name %name-of-container%"

I hope it's OK to make a suggestion: you could use Podman secrets to avoid putting plaintext passwords in start parameters or systemd units.
https://www.redhat.com/sysadmin/podman-kubernetes-secrets

regards
Salzi

Thank you for this tutorial, it was very helpful.

I’ve made a podman quadlet version, see nextcloud-quadlets.

Nextcloud is exposed on port 8080, all container data are stored in /var/pods/nextcloud.

/m

Hi there,

You add the caddy instance to this pod, with configuration files together with the other nextcloud files. How would you approach this if I wanted to have more services behind the caddy instance on the same vm? Put the caddy instance outside of the pod?

I was stuck on this problem too. I was "installing" the db, caddy and html folders inside the data folder. Just create ~/nextcloud and add the db, caddy and html folders there, then create a separate folder (in my case ~/homecenter/nextcloud) for the data, and it worked!

I'm stuck at the podman run command for the nextcloud-app container: it says no pod with the name or ID nextcloud exists, though I do have the nextcloud pod when I run podman pod ls.

Has anyone gotten a turn server to work? I find the guides to coturn lacking D:

Originally, with the setup described above, I had a TURN server working perfectly, exposed on an external port. But since my Podman days I've moved to NixOS for a year and, more recently, back to a good old Debian server.

Both the NixOS and Debian servers have coturn working perfectly for video calls in Talk. However, both setups sit behind a self-hosted WireGuard VPN, so only one port is exposed and "technically" coturn only operates in my local network. I can post my config for the Debian server and my Nix config for the NixOS setup if it helps, but again, this is through the VPN, which I think is a better, more secure setup overall.

I apologise to others who have posted; I no longer have a Podman setup, so I would not be the best person to advise on possible solutions.


If you could post the Debian one, that could give me some ideas. Thanks for the guide also; it got me a mostly working Nextcloud with some tweaking :smiley:

Sorry for the delay in replying,

so simple instructions to get started are:

generate a coturn secret key

pwgen -s 64 1
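If pwgen isn't installed, the same thing can be done with coreutils alone (any long random string works as the secret):

```shell
# Generate a 64-character alphanumeric secret from /dev/urandom
secret=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64)
echo "$secret"
```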

then install coturn and edit the file

sudo nano /etc/default/coturn

## uncomment 

TURNSERVER_ENABLED=1

## then start coturn 

sudo systemctl start coturn

then move existing config to a backup location and create a new config file

sudo mv /etc/turnserver.conf /etc/turnserver.conf.backup

sudo nano /etc/turnserver.conf

the config I use is below (don't forget to add the coturn secret you generated earlier):

listening-port=3478
tls-listening-port=5349
alt-listening-port=3479
alt-tls-listening-port=5350


min-port=49800
max-port=50000

use-auth-secret
static-auth-secret=  ## Add your coturn secret key here

realm=debian

no-tcp-relay

no-stdout-log
syslog

no-cli
cli-ip=127.0.0.1
cli-port=5766

then in your Nextcloud go to Administration settings → Talk and add the key you generated under the TURN server section.

Obviously use the URL of your own TURN server; remember, mine is local through the VPN, so only a local IP was needed.

Also note that I only use 200 relay ports, as set by min-port and max-port above; these numbers suit my needs as I only have a few users, so if you have a lot, add more ports. Also, as it's all local, I don't run a firewall on the server, since it's all behind my router's firewall. If you do run a firewall on your server, you will need to open these relay ports as well as the listening ports, both locally and through your router's firewall if you don't use a VPN like I do.

I run a VPN because I only like having one port exposed on my router, not all the ports listed above plus 80 and 443 for Nextcloud, plus certbot, etc.

And that's it: I am able to use Talk for video calls locally and remotely, securely through a VPN.

hope this helps
