[howto] Nextcloud as a rootless container using podman play kube with CentOS8 Stream

Copy & paste of my blog:

Introduction

I’ve been using Nextcloud for a few years as my personal ‘file storage cloud’. There are official container images and docker-compose files that make it easy to run.

For quite a while, I’ve been using the nginx+redis+mariadb+cron docker-compose file as it has all the components to be able to run an ‘enterprise ready’ Nextcloud, even if I’m only using it for personal use :slight_smile:

In this blog post I’m going to explain how I moved from that docker-compose setup to a rootless podman and systemd one.

Old setup

The hardware where this has been running is a good old HP N54L that has been serving me for quite a while, powered by CentOS 7, docker… and ZFS!

Why ZFS? Well… there are a lot of posts out there explaining why ZFS, but the ability to perform automated & zero cost snapshots with zfs-auto-snapshot was key.
On a side note, check systemd-zpool-scrub to automate your ZFS integrity checks (and my humble contribution)

The docker-compose file looks like this:

version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - /tank/nextcloud-db/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD="xxx"
    env_file:
      - db.env

  redis:
    image: redis:alpine
    restart: always

  app:  
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - /tank/nextcloud/html:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    build: ./web
    restart: always
    volumes:
      - /tank/nextcloud/html:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=xxx.xxx.com
      - LETSENCRYPT_HOST=xxx.xxx.com
      - LETSENCRYPT_EMAIL=xxx@xxx.com
    depends_on:
      - app
    networks:
      - proxy-tier
      - default

  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - /tank/nextcloud/html:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

  proxy:
    build: ./proxy
    restart: always
    security_opt:
      - label:disable
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - /tank/nextcloud/certs:/etc/nginx/certs:ro
      - /tank/nextcloud/vhost.d:/etc/nginx/vhost.d
      - /tank/nextcloud/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    security_opt:
      - label:disable
    volumes:
      - /tank/nextcloud/certs:/etc/nginx/certs
      - /tank/nextcloud/vhost.d:/etc/nginx/vhost.d
      - /tank/nextcloud/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

networks:
  proxy-tier:

The customizations to allow bigger uploads and the custom nginx settings can be found in the official Nextcloud repository as well.

This was very handy for a few reasons:

  • It is the ‘official’ way to run Nextcloud properly using containers
  • It uses the letsencrypt-nginx-proxy-companion
    to provide TLS certificates without a sweat
  • It works!

Moving to CentOS 8

For quite a while, I’ve been struggling to move all the services I’m using at home to a new box… because they just work!

The new box is a Slimbook One with better specs except for storage… so I’ve repurposed the old N54L to be a file storage server only (still CentOS 7 with ZFS, but I’m planning to reinstall it with FreeBSD… let’s see when that happens :D)

The Slimbook One was purchased with a 200 euro discount I earned thanks to my contributions to open source projects… even if those are very small… so I encourage you to be an active contributor; every small change counts!

I decided to install CentOS 8 as a natural evolution and because I’m biased :slight_smile: The only minor detail is that CentOS 8 doesn’t include moby or docker-compose out of the box… and I’m familiar with podman… so I thought I’d give it a try.

Moving to CentOS Stream

There has been a LOT of noise with regard to the Red Hat announcement to shift from CentOS Linux to CentOS Stream, but I took this as an opportunity to learn more about how CentOS Stream works and to be ahead of RHEL.

In any case, moving to CentOS Stream was as simple as:

sudo dnf install centos-release-stream
sudo dnf swap centos-{linux,stream}-repos
sudo dnf distro-sync

Profit!

Podman in CentOS Stream

This took me a while as it turns out rootless podman didn’t work properly in CentOS… so I ended up using the unofficial podman builds from kubic:

sudo dnf -y module disable container-tools
sudo dnf -y install 'dnf-command(copr)'
sudo dnf -y copr enable rhcontainerbot/container-selinux
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8_Stream/devel:kubic:libcontainers:stable.repo
sudo dnf -y install podman
sudo dnf -y update

Crun

I decided to use crun instead of runc as the container runtime because why not?

sudo dnf install -y crun
mkdir -p ~/.config/containers
cat << EOF > ~/.config/containers/containers.conf
[engine]
runtime="crun"
EOF
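
To double-check that the setting took effect, a quick sketch (the Go template path below assumes a reasonably recent podman):

```shell
# The config file we just wrote should contain the runtime line
grep '^runtime' ~/.config/containers/containers.conf

# and podman itself should report crun as the OCI runtime in use
podman info --format '{{.Host.OCIRuntime.Name}}'
```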

Running a rootless Nextcloud pod

Instead of running Nextcloud as independent containers, I’ve decided to leverage one of podman’s many features: the ability to run multiple containers as a pod (like a Kubernetes pod!)

The main benefit to me of doing so is that they share a single network namespace, meaning all the containers running in the same pod can reach each other using localhost, and you only need to expose the web interface. So, for instance, the mysql or redis traffic doesn’t leave the pod. Pretty cool, huh?

First thing first, I created a folder to host some data, scripts, etc. as:

export PODNAME="nextcloud"
mkdir -p ~/containers/nextcloud/{db,nginx,html}

Where:

  • db will host the database
  • nginx contains the custom nginx.conf file
  • html will host the Nextcloud content

And created an empty pod exposing only port 8080/tcp:

podman pod create --hostname ${PODNAME} --name ${PODNAME} -p 8080:80
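
To confirm the pod was created with the expected port mapping, a sketch (at this point it only contains the ‘infra’ container):

```shell
PODNAME="${PODNAME:-nextcloud}"   # same name used above

# List pods with their status and container count
podman pod ps

# List the containers inside the pod (only the infra one so far)
podman ps -a --pod --filter "pod=${PODNAME}"
```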

Next step… start adding containers by running them with the --pod flag.

MariaDB container

podman run \
-d --restart=always --pod=${PODNAME} \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="nextcloud" \
-e MYSQL_USER="nextcloud" \
-e MYSQL_PASSWORD="mynextcloudpass" \
-v ${HOME}/containers/nextcloud/db:/var/lib/mysql:Z \
--name=${PODNAME}-db docker.io/library/mariadb:latest \
--transaction-isolation=READ-COMMITTED --binlog-format=ROW

As the careful reader has probably observed, I didn’t use the -p flag to expose the container to the outside world… because running it in the pod makes it reachable inside the pod as localhost, port 3306/tcp.
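
A quick way to confirm this is logging into the database through the pod’s shared localhost; a sketch using the credentials defined above:

```shell
PODNAME="${PODNAME:-nextcloud}"

# Run the mysql client inside the db container against 127.0.0.1
podman exec -it ${PODNAME}-db \
  mysql -h 127.0.0.1 -u nextcloud -pmynextcloudpass \
  -e "SELECT 1" nextcloud
```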

Selinux disclaimer

The :z and :Z flags are important if you use SELinux… because you use SELinux, right?

Quoting the podman-run man page:

To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Podman to relabel file objects on the shared volumes. The z option tells Podman that two containers share the volume content. As a result, Podman labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Podman to label the content with a private unshared label.
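
For instance, after the MariaDB container starts with its :Z mount, the db directory should carry a container-specific SELinux label; an illustrative check (the exact label and category pair will differ per system):

```shell
# Show the SELinux label of the db directory (mkdir -p keeps this
# idempotent in case the directory was not created yet)
mkdir -p ~/containers/nextcloud/db
ls -dZ ~/containers/nextcloud/db
```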

Redis

podman run \
-d --restart=always --pod=${PODNAME} \
--name=${PODNAME}-redis docker.io/library/redis:alpine \
redis-server --requirepass yourpassword

It will listen on port 6379/tcp, ONLY within the pod.
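
A quick liveness check from the host, a sketch using the password set above (redis-cli should answer PONG):

```shell
PODNAME="${PODNAME:-nextcloud}"

# Ping redis through the container; expected reply: PONG
podman exec -it ${PODNAME}-redis redis-cli -a yourpassword ping
```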

Nextcloud App

podman run \
-d --restart=always --pod=${PODNAME} \
-e REDIS_HOST="localhost" \
-e REDIS_HOST_PASSWORD="yourpassword" \
-e MYSQL_HOST="localhost" \
-e MYSQL_USER="nextcloud" \
-e MYSQL_PASSWORD="mynextcloudpass" \
-e MYSQL_DATABASE="nextcloud" \
-v ${HOME}/containers/nextcloud/html:/var/www/html:z \
--name=${PODNAME}-app docker.io/library/nextcloud:fpm-alpine

It will listen on port 9000/tcp, ONLY within the pod.

Nextcloud Cron

podman run \
-d --restart=always --pod=${PODNAME} \
-v ${HOME}/containers/nextcloud/html:/var/www/html:z \
--entrypoint=/cron.sh \
--name=${PODNAME}-cron docker.io/library/nextcloud:fpm-alpine

Nginx

I’ve copied the ‘official’ nginx.conf to the proper location:

curl -o ~/containers/nextcloud/nginx/nginx.conf https://raw.githubusercontent.com/nextcloud/docker/master/.examples/docker-compose/with-nginx-proxy/mariadb-cron-redis/fpm/web/nginx.conf

Then to run the container:

podman run \
-d --restart=always --pod=${PODNAME} \
-v ${HOME}/containers/nextcloud/html:/var/www/html:ro,z \
-v ${HOME}/containers/nextcloud/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z \
--name=${PODNAME}-nginx docker.io/library/nginx:alpine

It will listen on port 80/tcp… and as the pod exposes that port as 8080/tcp on the host, you will be able to reach the app!

Nextcloud installation

Once all the containers are up and running, it is time to tweak the default Nextcloud deployment to fit our environment:

  • Connect to the nextcloud-app container:
podman exec -it -u www-data nextcloud-app /bin/sh
  • Perform the installation:
php occ maintenance:install \
--database "mysql" \
--database-host "127.0.0.1" \
--database-name "nextcloud" \
--database-user "nextcloud" \
--database-pass "mynextcloudpass" \
--admin-pass "password" \
--data-dir "/var/www/html"
  • Configure a few settings such as the trusted domains:
php occ config:system:set \
trusted_domains 1 --value=192.168.1.98
php occ config:system:set \
trusted_domains 2 --value=nextcloud.example.com
php occ config:system:set \
overwrite.cli.url --value "https://nextcloud.example.com"
php occ config:system:set \
overwriteprotocol --value "https"
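
To verify the result from the host, occ itself can report the state; a sketch (look for installed: true in the output):

```shell
# Prints the Nextcloud version and installation status
podman exec -it -u www-data nextcloud-app php occ status
```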

Nextcloud resets the data directory permissions to 770, but nginx requires access to that folder; otherwise it complains with ‘file not found’. I tried to use the --group-add flag to force a shared group for the processes running nginx and Nextcloud, but the containers start as root and then switch to a different user (www-data and nginx respectively), so the group is not inherited…

php occ config:system:set \
check_data_directory_permissions --value="false" --type=boolean

The reason behind the directory permissions is here.

sudo chmod 775 ~/containers/nextcloud/html
podman pod restart nextcloud

Firewall

In order to be able to reach the pod from the outside world, you just need to open port 8080/tcp:

sudo firewall-cmd --add-port=8080/tcp
sudo firewall-cmd --add-port=8080/tcp --permanent

At this point, you have a proper Nextcloud pod running in your box that you can start using!!!

Nextcloud in container user IDs

The Nextcloud process running in the container runs as the www-data user, which
is in fact user id 82:

$ podman exec -it nextcloud-app /bin/sh
/var/www/html # ps auxww | grep php-fpm
    1 root      0:10 php-fpm: master process (/usr/local/etc/php-fpm.conf)
   74 www-data  0:16 php-fpm: pool www
   75 www-data  0:15 php-fpm: pool www
   76 www-data  0:07 php-fpm: pool www
   84 root      0:00 grep php-fpm
/var/www/html # grep www-data /etc/passwd
www-data:x:82:82:Linux User,,,:/home/www-data:/sbin/nologin

NFS and user IDs

NFS exports can be configured to have a forced uid/gid using the anonuid,
anongid and all_squash parameters. For Nextcloud then:

all_squash,anonuid=82,anongid=82

To apply those settings in ZFS, I configured my export as:

zfs set sharenfs="rw=@192.168.1.98/32,all_squash,anonuid=82,anongid=82" tank/nextcloud
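
On a non-ZFS NFS server, the same squashing would live in /etc/exports; a sketch of the equivalent line (path and client network are examples):

```
/tank/nextcloud  192.168.1.98/32(rw,all_squash,anonuid=82,anongid=82)
```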

Then, I chowned all the files to match that user on the NFS server as well:

shopt -s dotglob
chown -R 82:82 /tank/nextcloud/html/
shopt -u dotglob

I used shopt -s dotglob so chown also changes the user/group of the hidden
files and folders (the ones whose name starts with a dot, such as ~/.ssh).
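
A quick self-contained demo of what dotglob changes (bash; the temporary directory is just for illustration):

```shell
# Without dotglob, '*' skips dot-files; with it, they are matched too
demo="$(mktemp -d)"
touch "${demo}/visible" "${demo}/.hidden"

shopt -u dotglob
echo "no dotglob:" "${demo}"/*   # only 'visible' matches

shopt -s dotglob
echo "dotglob on:" "${demo}"/*   # '.hidden' matches as well

shopt -u dotglob
rm -rf "${demo}"
```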

Tweaks

With everything in place it should work… but it didn’t.

There are a few places where Nextcloud tries to change some files’ modes or
check file permissions, and it fails when it cannot.

Fortunately, those can be bypassed. But let’s take a look at the details first.

console.php

The console.php file has a check to ensure the ownership:

if ($user !== $configUser) { 
  echo "Console has to be executed with the user that owns the file config/config.php" . PHP_EOL; 
  echo "Current user id: " . $user . PHP_EOL; 
  echo "Owner id of config.php: " . $configUser . PHP_EOL; 
  echo "Try adding 'sudo -u #" . $configUser . "' to the beginning of the command (without the single quotes)" .  PHP_EOL; 
  echo "If running with 'docker exec' try adding the option '-u " . $configUser . "' to the docker comman (without  the single quotes)" . PHP_EOL; 
  exit(1); 
} 

I opened a GitHub issue, but meanwhile the fix I applied was basically to delete that check.

cron.php

Same problem:

$configUser = fileowner(OC::$configDir . 'config.php');
if ($user !== $configUser) {
  echo "Console has to be executed with the user that owns the file config/config.php" . PHP_EOL;
  echo "Current user id: " . $user . PHP_EOL;
  echo "Owner id of config.php: " . $configUser . PHP_EOL;
  exit(1);
}

Same fix, and another GitHub issue opened.

entrypoint.sh

The container entrypoint script runs an rsync process when Nextcloud is updated.
As part of that rsync process, it uses --chown, which is then forbidden by the NFS server:

rsync: chown "/var/www/html/whatever" failed: Operation not permitted (1)

There is a GitHub issue for this too, and the fix is basically to skip the chown.
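
The exact rsync options line differs between image versions, so inspect the image’s /entrypoint.sh first; a sed sketch over a sample of that line shows the idea:

```shell
# Strip the --chown from a sample rsync_options line; applied to the
# real file inside the image this would be something like:
#   sed -i 's/ --chown [^"]*//' /entrypoint.sh
line='rsync_options="-rlDog --chown www-data:root"'
echo "${line}" | sed 's/ --chown [^"]*//'
# → rsync_options="-rlDog"
```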

quay.io/eminguez/nextcloud-container-fix-nfs

Until those issues are fixed (not sure if they ever will be), I maintain a container image that includes those fixes and that I try to keep updated for my own sake: GitHub - e-minguez/nextcloud-container-nfs-fix

The image is already available at Quay so feel free to use it if you are having the same issues.

Introducing bunkerized-nginx

I heard about bunkerized-nginx a while ago and I thought it would be nice to use it as a reverse proxy so I can expose my internal services to the internet ‘safely’.

A non-exhaustive list of features (copy & paste from the README):

  • HTTPS support with transparent Let’s Encrypt automation
  • State-of-the-art web security : HTTP security headers, prevent leaks, TLS hardening, …
  • Integrated ModSecurity WAF with the OWASP Core Rule Set
  • Automatic ban of strange behaviors with fail2ban
  • Antibot challenge through cookie, javascript, captcha or recaptcha v3
  • Block TOR, proxies, bad user-agents, countries, …
  • Block known bad IP with DNSBL and CrowdSec
  • Prevent bruteforce attacks with rate limiting
  • Detect bad files with ClamAV
  • Easy to configure with environment variables or web UI
  • Automatic configuration with container labels

A must-have for me was support for Let’s Encrypt and an easy way to configure it, so this was a perfect match!

Firewall ports

As the container is going to be rootless, we need to open a few ports in the host as root. We will use 8000/tcp and 8443/tcp:

sudo -s -- sh -c \
"firewall-cmd -q --add-port=8000/tcp && \
firewall-cmd -q --add-port=8443/tcp && \
firewall-cmd -q --add-port=8000/tcp --permanent && \
firewall-cmd -q --add-port=8443/tcp --permanent"

Then, to run the container you just need to bind to those ports as -p 8000:8080 -p 8443:8443

Directories

To store some files such as the letsencrypt certificates, custom configurations or a cache with the denylists, a few directories are required:

mkdir -p ~/containers/bunkerized-nginx/{letsencrypt,cache,server-confs}

Those will be used as -v ${HOME}/containers/bunkerized-nginx/letsencrypt:/etc/letsencrypt:z -v ${HOME}/containers/bunkerized-nginx/cache:/cache:z -v ${HOME}/containers/bunkerized-nginx/server-confs:/server-confs:ro,z

Parameters

There are TONS of parameters supported by bunkerized-nginx. Some disable features, others enable them, etc., so grab a coffee and take a good look at the README.md file.

In my case:

SERVER_NAME=nextcloud.example.com someothersite.example.com
nextcloud.example.com_REVERSE_PROXY_URL=/
nextcloud.example.com_REVERSE_PROXY_HOST=http://192.168.1.98:8080
nextcloud.example.com_ALLOWED_METHODS=GET|POST|HEAD|PROPFIND|DELETE|PUT|MKCOL|MOVE|COPY|PROPPATCH|REPORT
someothersite.example.com_REVERSE_PROXY_URL=/
someothersite.example.com_REVERSE_PROXY_HOST=http://192.168.1.98:8001
# Multisite reverse
USE_REVERSE_PROXY=yes
MULTISITE=yes
SERVE_FILES=no
DISABLE_DEFAULT_SERVER=yes
REDIRECT_HTTP_TO_HTTPS=yes
AUTO_LETS_ENCRYPT=yes
USE_PROXY_CACHE=yes
USE_GZIP=yes
USE_BROTLI=yes
PROXY_REAL_IP=yes
PROXY_REAL_IP_HEADER=X-Forwarded-For
PROXY_REAL_IP_RECURSIVE=on
PROXY_REAL_IP_FROM=192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
# Nextcloud specific
X_FRAME_OPTIONS=SAMEORIGIN
MAX_CLIENT_SIZE=10G

podman --env-file

Reading the podman man page I noticed there is an --env-file parameter. So instead of having tens of -e flags, you can wrap them up in a file and use just --env-file /path/to/my/envfile. SO NICE!!!
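
For example, the parameters above can be collected in a file and passed with a single flag (paths and values here are illustrative):

```shell
# Gather the environment in one file...
mkdir -p ~/containers/bunkerized-nginx/scripts
cat << 'EOF' > ~/containers/bunkerized-nginx/scripts/podman.env
AUTO_LETS_ENCRYPT=yes
REDIRECT_HTTP_TO_HTTPS=yes
MAX_CLIENT_SIZE=10G
EOF

# ...and reference it at run time:
#   podman run ... --env-file ~/containers/bunkerized-nginx/scripts/podman.env ...
wc -l < ~/containers/bunkerized-nginx/scripts/podman.env
```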

systemd service

In order to run the container at boot properly, we just need to create a user systemd unit file such as ~/.config/systemd/user/container-bunkerized-nginx.service:

[Unit]
Description=Podman container-bunkerized-nginx.service

[Service]
Restart=on-failure
ExecStartPre=/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStart=/usr/bin/podman run --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
-d --restart=always \
-p 8000:8080 \
-p 8443:8443 \
-v /home/edu/containers/bunkerized-nginx/letsencrypt:/etc/letsencrypt:z \
-v /home/edu/containers/bunkerized-nginx/cache:/cache:z \
-v /home/edu/containers/bunkerized-nginx/server-confs:/server-confs:ro,z \
--env-file /home/edu/containers/bunkerized-nginx/scripts/podman.env \
--name=bunkerized-nginx docker.io/bunkerity/bunkerized-nginx:latest
ExecStop=/usr/bin/podman stop -t 10 bunkerized-nginx
ExecStopPost=/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=default.target

Notice that I didn’t use podman generate systemd because it is very specific to the container ID and I wanted more flexibility. You can read more about this in this great Running containers with Podman and shareable systemd services blog post.

Then, enable the service:

systemctl --user daemon-reload
systemctl --user enable container-bunkerized-nginx --now

This will start the service after the user’s first login, and it will be killed after the user’s last session is closed. In order to start it at boot without requiring the user to be logged in, you need to enable lingering:

sudo loginctl enable-linger username
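
You can check the current state with loginctl; a sketch (Linger=yes means lingering is already on):

```shell
# Query the Linger property for the current user
loginctl show-user "$USER" --property=Linger
```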

Note that having the --env-file parameter makes running the container much more convenient, because it is easier to read and you can tweak the parameters in that file and just restart the service as:

systemctl --user restart container-bunkerized-nginx

Otherwise, you will need to modify the systemd unit file, run the daemon-reload command and restart the service.

Exposing it to the internet

As explained in the first post, I’m hosting all this stuff at home so I’ve configured my router, running OpenWRT, to expose only the reverse proxy ports externally (NAT) like so:

config redirect
  option dest_port '8000'
  option src 'wan'
  option name '80'
  option src_dport '80'
  option target 'DNAT'
  option dest_ip '192.168.1.98'
  option dest 'lan'
  list proto 'tcp'

config redirect
  option dest_port '8443'
  option src 'wan'
  option src_dport '443'
  option target 'DNAT'
  option dest_ip '192.168.1.98'
  option dest 'lan'
  list proto 'tcp'
  option name '443'

This means that requests coming in from the internet to http://my-ip will be redirected to the bunkerized-nginx container listening on port 8000, and requests to https://my-ip will be redirected to the one listening on port 8443… and then, depending on the
Host header, they will be forwarded to the proper application container.

podman play kube

One of the cool things about podman is that it is not just a docker replacement, it can do so much more!

The feature I’m talking about is being able to run Kubernetes YAML pod definitions! How cool is that?

You can read more about this feature in the podman-play-kube man page, but essentially, you just need a proper pod yaml definition and podman play kube /path/to/my/pod.yaml will run it for you.

You can even specify a path to a ConfigMap yaml file that contains environment variables so you can split the config and runtime settings. COOL!
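
Putting both together, a sketch of the invocation (file paths are examples, and the --configmap flag is optional):

```shell
# Start the pod from its YAML definition, pulling env vars from a
# (hypothetical) ConfigMap file
podman play kube \
  --configmap ~/containers/nextcloud/scripts/nextcloud-cm.yaml \
  ~/containers/nextcloud/scripts/nextcloud.yaml
```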

podman generate kube

To create a Kubernetes YAML pod definition based on a container or a pod, you can use podman generate kube and it will generate it for you; there is no need to deal with the complex YAML syntax. See the manual page for podman-generate-kube to learn more about it.
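
The generation step itself is a one-liner (pod name as created earlier; the output path is just an example):

```shell
# Dump the running 'nextcloud' pod as Kubernetes YAML
podman generate kube nextcloud > /tmp/nextcloud.yaml
head -n 3 /tmp/nextcloud.yaml
```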

In my case, this is how it looks:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nextcloud
  name: nextcloud
spec:
  containers:
  - name: db
    args:
    - --transaction-isolation=READ-COMMITTED
    - --binlog-format=ROW
    command:
    - docker-entrypoint.sh
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: xxx
    - name: MYSQL_DATABASE
      value: nextcloud
    - name: MYSQL_USER
      value: nextcloud
    - name: MYSQL_PASSWORD
      value: xxx
    image: docker.io/library/mariadb:latest
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: home-edu-containers-nextcloud-data-db
    workingDir: /
  - name: app
    command:
    - php-fpm
    env:
    - name: REDIS_HOST_PASSWORD
      value: xxx
    - name: MYSQL_HOST
      value: 127.0.0.1
    - name: MYSQL_DATABASE
      value: nextcloud
    - name: REDIS_HOST
      value: 127.0.0.1
    - name: MYSQL_USER
      value: nextcloud
    - name: MYSQL_PASSWORD
      value: xxx
    image: quay.io/eminguez/nextcloud-container-fix-nfs:latest
    resources: {}
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    volumeMounts:
    - mountPath: /var/www/html
      name: home-edu-containers-nextcloud-data-html
    workingDir: /var/www/html
  - name: redis
    command:
    - redis-server
    - --requirepass
    - xxx
    image: docker.io/library/redis:alpine
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: true
      seLinuxOptions: {}
    volumeMounts:
    - mountPath: /tmp
      name: tmpfs
    - mountPath: /var/tmp
      name: tmpfs
    - mountPath: /run
      name: tmpfs
    workingDir: /data
  - name: cron
    image: quay.io/eminguez/nextcloud-container-fix-nfs:latest
    command: ["/cron.sh"]
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    volumeMounts:
    - mountPath: /var/www/html
      name: home-edu-containers-nextcloud-data-html
    workingDir: /var/www/html
  - name: nginx
    command:
    - nginx
    - -g
    - daemon off;
    image: docker.io/library/nginx:alpine
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    volumeMounts:
    - mountPath: /var/www/html
      name: home-edu-containers-nextcloud-data-html
    - mountPath: /etc/nginx/nginx.conf
      name: home-edu-containers-nextcloud-data-nginx-nginx.conf
      readOnly: true
    workingDir: /
  restartPolicy: Always
  volumes:
  - hostPath:
      path: /home/edu/containers/nextcloud/data/nginx/nginx.conf
      type: File
    name: home-edu-containers-nextcloud-data-nginx-nginx.conf
  - hostPath:
      path: /home/edu/containers/nextcloud/data/db
      type: Directory
    name: home-edu-containers-nextcloud-data-db
  - hostPath:
      path: /home/edu/containers/nextcloud/data/html
      type: Directory
    name: home-edu-containers-nextcloud-data-html
  - hostPath:
      path: tmpfs
      type: DirectoryOrCreate
    name: tmpfs

Notice that I didn’t tweak the file, and it contains parameters such as allowPrivilegeEscalation and some capabilities that could probably be improved.

systemd unit

Once the yaml file has been created, the systemd unit file is as simple as:

[Unit]
Description=Podman pod-nextcloud.service

[Service]
Restart=on-failure
RestartSec=30
Type=simple
RemainAfterExit=yes
TimeoutStartSec=30

ExecStartPre=/usr/bin/podman pod rm -f -i nextcloud
ExecStart=/usr/bin/podman play kube \
  /home/edu/containers/nextcloud/scripts/nextcloud.yaml

ExecStop=/usr/bin/podman pod stop nextcloud
ExecStopPost=/usr/bin/podman pod rm nextcloud

[Install]
WantedBy=default.target

Then, enable the service:

systemctl --user daemon-reload
systemctl --user enable pod-nextcloud.service --now

Updating Nextcloud

The process I follow to update Nextcloud is basically:

  • Review if there are any changes in the console.php, cron.php or
    entrypoint.sh files, and if so, fix them and build a new
    Quay image
  • Review if there are any changes in the nginx.conf, and if so, update the
    ~/containers/nextcloud/nginx/nginx.conf file

Then, I run the following script:

#!/bin/bash
export DIR="/home/edu/containers/nextcloud/"

systemctl --user stop pod-nextcloud
# Just to make sure
podman pod stop nextcloud
podman rm $(podman ps -a | awk '/nextcloud/ { print $1 }')
podman pod rm nextcloud

for image in docker.io/library/mariadb:latest docker.io/library/redis:alpine docker.io/library/nginx:alpine k8s.gcr.io/pause:3.2 quay.io/eminguez/nextcloud-container-fix-nfs:latest; do
  podman rmi ${image}
  podman pull ${image}
done
systemctl --user start pod-nextcloud

Final words

Throughout these blog posts I’ve tried to explain how I managed to set up my
Nextcloud deployment at home using rootless podman containers. If you have read
these posts to the end, I hope you enjoyed them, and thank you so much for
dedicating a few minutes to read them.

If you have any question or improvement, you can reach me at
@minWi


I’ve just realized this post would fit better in the howto category, but I don’t have a high enough trust level to publish there :slight_smile:

Sorry, the nginx.conf link is broken. Can you provide another one? Thanks :smiley:

OK, I found it now; the link is here, but I don’t know if it is right.

That’s the one. It seems they restructured the examples in the repo and moved things around.