Tutorial for running Nextcloud in rootless Podman with MariaDB, Redis and a Caddy webserver, all behind a Caddy reverse proxy

Hi Everyone,

Hopefully I can help a few users who are trying to run Nextcloud using Podman behind a Caddy reverse proxy. For a while I struggled to get it working properly, but after a lot of research and help from threads in this forum, I finally have Nextcloud working perfectly.

So I run all my containers, from Home Assistant to Nextcloud, in rootless Podman and I love it. I run each container using systemd unit files rather than Podman Compose or Docker Compose, which has the added benefit of letting me upgrade all my containers with a single command. I will show the individual run commands for each container and how to generate the systemd unit files.

As mentioned in the title, I run Nextcloud with the Alpine FPM container, so there is no integrated webserver: MariaDB is the database, Redis is the cache container and Caddy is the webserver. I run all of these in a Pod, so the containers communicate with each other over localhost inside the Pod, and these all sit behind a separate Caddy reverse proxy outside of the Pod.

None of the containers are run using network=host, so
--net slirp4netns:allow_host_loopback=true,port_handler=slirp4netns is used with the Caddy reverse proxy container, and the slirp4netns host loopback IP is used in the reverse proxy config. It is important not to change this IP or it will not work; the reason behind this IP is outside the scope of this tutorial, but a simple web search will help with understanding it.

Now to the fun stuff :

I will create a few directories and add a Caddyfile for the pod webserver

mkdir -p ~/nextcloud/{db,caddy,html}
cd ~/nextcloud/caddy
mkdir caddy_data   # creating this beforehand removed a Podman error for me
nano Caddyfile

this is the Webserver Caddyfile

:80 {

        root * /var/www/html

        redir /.well-known/carddav /remote.php/dav 301
        redir /.well-known/caldav /remote.php/dav 301

        # .htaccess / data / config / ... shouldn't be accessible from outside
        @forbidden {
                path    /.htaccess
                path    /data/*
                path    /config/*
                path    /db_structure
                path    /.xml
                path    /README
                path    /3rdparty/*
                path    /lib/*
                path    /templates/*
                path    /occ
                path    /console.php
        }

        respond @forbidden 404

        # hand PHP requests to the FPM container, which listens on
        # localhost:9000 inside the Pod, and serve everything else as files
        php_fastcgi 127.0.0.1:9000
        file_server
}

thanks to the Caddy community forum for all their help on this .

now we create the Pod

podman pod create --network slirp4netns:port_handler=slirp4netns --hostname nextcloud --name nextcloud -p 5080:80/tcp
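Before adding containers, it can be worth a quick sanity check that the Pod and its published port look right; a minimal sketch using the pod name from the command above:

```shell
# List pods; the new "nextcloud" pod should appear here
podman pod ps

# Inspect the pod; the output should show 5080 -> 80 under the port mappings
podman pod inspect nextcloud
```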

next the MariaDB database

podman run --detach \
        --label "io.containers.autoupdate=registry" \
        --env MYSQL_ROOT_PASSWORD="randomrootpassword" \
        --env MYSQL_DATABASE=nextcloud \
        --env MYSQL_USER=nextcloud \
        --env MYSQL_PASSWORD=YourDatabasePassword \
        --volume ${HOME}/nextcloud/db:/var/lib/mysql/:Z \
        --pod nextcloud \
        --restart on-failure \
        --name nextcloud-db \
        docker.io/library/mariadb:10 \
        --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW

then the Redis container

podman run --detach --label "io.containers.autoupdate=registry" --restart on-failure --pod nextcloud --name nextcloud-redis docker.io/library/redis:alpine redis-server --requirepass yourRedispassword

the Nextcloud container

podman run --detach \
        --label "io.containers.autoupdate=registry" \
        --env REDIS_HOST="" \
        --env REDIS_HOST_PASSWORD="yourRedispassword" \
        --env MYSQL_HOST= \
        --env MYSQL_DATABASE=nextcloud \
        --env MYSQL_USER=nextcloud \
        --env MYSQL_PASSWORD=YourDatabasePassword \
        --volume ${HOME}/nextcloud/html:/var/www/html/:z \
        --volume /YOUR_STORAGE_DRIVE_LOCATION/nextcloud:/var/www/html/data:z \
        --pod nextcloud \
        --restart on-failure \
        --name nextcloud-app \
        docker.io/library/nextcloud:fpm-alpine

and finally the Caddy webserver:

podman run --detach \
        --label "io.containers.autoupdate=registry" \
        --volume ${HOME}/nextcloud/caddy/caddy_data:/data:Z \
        --volume ${HOME}/nextcloud/caddy/Caddyfile:/etc/caddy/Caddyfile:Z \
        --volume ${HOME}/nextcloud/html:/var/www/html:ro,z \
        --name nextcloud-caddy \
        --pod nextcloud \
        --restart on-failure \
        docker.io/caddy:latest

now you should be able to access the Nextcloud instance to complete the install and create an admin user by pointing your browser at http://NextcloudhostIP:5080. After your initial install, and once testing shows that all seems to work, you can generate the systemd unit files and enable the service.

now as this Pod and its associated containers are run rootless, they run under my user, so all the generated systemd unit files need to be located in the user's home directory, and thereafter all systemctl commands will need to be run with the --user flag

cd ~/.config/systemd/user/
podman generate systemd --new --files --name nextcloud
systemctl --user enable pod-nextcloud.service
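Assuming the generation step above succeeded, the new user units can be checked like this (unit names follow podman's pod-NAME / container-NAME convention):

```shell
# Show the pod's service and list the per-container units it manages
systemctl --user status pod-nextcloud.service
systemctl --user list-unit-files 'container-nextcloud-*'
```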

the Pod and all associated containers are now being run by systemd, so it should be noted that you will no longer use podman commands to interact with the containers; to restart or stop a container you will use systemctl like so:

systemctl --user stop container-nextcloud-app.service
systemctl --user restart container-nextcloud-app.service

it should also be noted that in order for your containers and Pod to persist after you log out of SSH, you need to run

loginctl enable-linger $USER
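To confirm lingering took effect, loginctl can report it directly; after the command above it should print Linger=yes:

```shell
# Query the lingering state for the current user
loginctl show-user "$USER" --property=Linger
```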

now you should have a working Nextcloud instance running on your local network. congrats :smile: :partying_face:

Now to make it accessible outside of your local area network, you need the Caddy reverse proxy, ports open on your router directed to the server host IP (please note this is not explained in this tutorial) and finally a domain name. I use duckdns for this purpose but others will work too.

for the Caddy reverse proxy

I use this simple Caddyfile, with placeholders for your own domain and for the slirp4netns host loopback IP mentioned earlier

{
        # move Caddy's default HTTP/HTTPS ports to match the published ports
        http_port 8080
        https_port 8443
}

YOUR_NEXTCLOUD_DOMAIN.duckdns.org {

        encode gzip

        header Strict-Transport-Security max-age=15552000;

        # forward everything to the Nextcloud pod's published port
        reverse_proxy HOST_LOOPBACK_IP:5080
}

and this simple systemd unit file

# container-caddy_reverse_proxy.service

[Unit]
Description=Podman container-caddy_reverse_proxy.service
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm \
        -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
        --cidfile=%t/%n.ctr-id \
        --cgroups=no-conmon \
        --rm \
        --sdnotify=conmon \
        --replace \
        --detach \
        --label "io.containers.autoupdate=registry" \
        --name caddy_reverse_proxy \
        --volume ${HOME}/caddy/caddy_data:/data:Z \
        --volume ${HOME}/caddy/Caddyfile:/etc/caddy/Caddyfile:Z \
        --volume ${HOME}/caddy/caddy_config:/config:Z \
        --net slirp4netns:allow_host_loopback=true,port_handler=slirp4netns \
        -p 8080:8080 \
        -p 8443:8443 \
        docker.io/caddy:latest
ExecStop=/usr/bin/podman stop \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
        -f \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

you will probably have to create the directories used in the volumes beforehand, to prevent an error with Podman

Please note, I run my reverse proxy on ports 8080 and 8443 and port forward on my router from 80 and 443 to these respectively. You will need to adjust to suit your requirements, but I would suggest using these unprivileged ports for rootless containers to avoid issues.
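For context on why 8080/8443 work rootless while 80/443 do not, the kernel exposes the privileged-port threshold as a sysctl; a quick check on Linux looks like this:

```shell
# Ports below this value require root (or CAP_NET_BIND_SERVICE) to bind;
# the default is 1024, which is why 8080/8443 work for rootless containers.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start

# If you really wanted rootless Caddy to bind 80/443 directly, the threshold
# can be lowered (not needed with the port forwarding described above):
# sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```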

now you won't be able to access Nextcloud from your domain until you trust this (and any other) domain in Nextcloud's config.php file. To do this you need to run an interactive shell in the Nextcloud container and run these occ commands; these will also remove some of the errors that Nextcloud shows in the Overview tab, such as trusted proxy errors and default_phone_region issues

podman exec -it -u www-data nextcloud-app /bin/sh

php occ config:system:set trusted_domains 1 --value= # your Host IP
php occ config:system:set trusted_domains 2 --value=YOUR_NEXTCLOUD_DOMAIN.duckdns.org
php occ config:system:set trusted_proxies --value=['']  # please note the square brackets need to be here as this is an array, not a string
php occ config:system:set overwrite.cli.url --value "https://YOUR_NEXTCLOUD_DOMAIN.duckdns.org"
php occ config:system:set overwriteprotocol --value "https"
php occ config:system:set default_phone_region --value="GB"   # for example, 'GB' for Great Britain

your instance should now be accessible from outside your local area network.

Finally to update all containers run

podman auto-update

this works because of the --label "io.containers.autoupdate=registry" option in every run command.
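Before actually updating, Podman can show what would change; a dry run is a safe first step:

```shell
# Preview which containers carry the autoupdate label and whether a
# newer image is available, without pulling or restarting anything
podman auto-update --dry-run
```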

It should finally be noted that I run all of this on a Fedora server that utilises SELinux; this is why the keen observer will notice the big 'Z' and little 'z' at the end of each volume. Again, this is outside the scope of this tutorial, but a simple search will explain more.

(EDIT) A little addition , I forgot to mention, how I use system Cron for Background Jobs instead of Ajax.

I change the editor to nano and add a crontab entry as my user:

crontab -e

I then add :

*/5 * * * * podman exec -t -u www-data nextcloud-app php -f /var/www/html/cron.php

this runs cron.php every 5 minutes. I then select the Cron (recommended) option under Administration settings => Basic settings in the web UI.
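If you prefer doing this from the shell, the same background-job mode can be set with occ (using the container name from this tutorial):

```shell
# Switch Nextcloud's background jobs from AJAX to system cron,
# equivalent to picking "Cron" in the web UI's Basic settings
podman exec -it -u www-data nextcloud-app php occ background:cron
```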

If you have any questions or adjustments that you think would benefit , please comment below.

I can write another tutorial about running an eturnal TURN server, should users request it. I would also like to thank @minWi for his tutorial, as it helped immensely.

hope this helps others with using Podman.


Hi! First of all, thanks for this tutorial. I have some questions. I'm running a rootless Nextcloud pod myself but did it a bit differently: to make this clear, I'm running the standard Nextcloud with Apache2 and an nginx outside of the pod. But that doesn't matter here.

What I'm interested in is this: I'm using

podman network create nextcloud-net

As well as attaching a hostname to every pod

podman run [...] --hostname nextcloud-app [...]
podman run [...] --hostname nextcloud-db [...]

Your tutorial doesn't do this, but goes for

podman pod create --network slirp4netns [...]


podman run [...] --pod nextcloud

Where, I think, what you do is create a new "controller" pod and attach the other pods as sub-pods to this one. Now I wonder what's the better way: is there a benefit in creating such a pod that you attach other pods to, compared with standalone pods that only share a network?

This might be a stupid question, I’m still pretty new to this topic.

I think if you go the Pod route, instead of standalone containers in a custom bridge network, or what I think you're doing (one container per pod on a bridge network), then it's best to have all containers that depend on each other in the same pod so they can network between themselves over localhost.

I have standalone containers outside of the Pod as well, but they are not associated with Nextcloud and not in individual pods.

I think the benefit of pods, apart from networking containers over localhost, is to cluster all dependent containers in one group. So having a pod per container probably isn't optimal, but if it works, then there's no need to break what doesn't need fixing.

However, I'm not an expert by any standard, so you might want to ask for more advice on the Podman Discord channel or their Matrix chat room.


Thanks for the great guide! I wonder how I can access the Nextcloud instance from within the local network? Adding the reverse proxy seems to remove the ability to do that.

It's the setting

overwriteprotocol => "https"

in the config.php that does it. If you remove this you can access and log in on localhost as well, but then you get an error in Nextcloud that some URLs are being sent insecurely. You can access it locally via https://yourlocalIP:5080, but unfortunately it won't log in. I've not found a workaround for this yet.