Thank you very much @estebanium. It is working now. Your hints made it clear that something was wrong with the router, and you were right. It's a fairly new router that needed a firmware update; until then, the port forward did not work.
Glad to hear that it is working for you now. You're welcome. As an important next step, I would recommend setting up a proper backup solution. If your filesystem is ZFS or Btrfs, you have great tools to create snapshots of your data, which helps if your database becomes corrupted, for example, or, even more importantly, your stored files.
You could also create such a snapshot before updating your containers. Even with ext4 you will find a good backup solution. I speak from personal experience: 99% of the time you will not need it, but that 1% can break your neck.
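As a sketch of the snapshot-before-update idea: the commands below assume a Btrfs subvolume mounted at /srv/nextcloud (or a ZFS dataset tank/nextcloud); the paths and dataset names are hypothetical and need adjusting to your own layout.

```shell
# Btrfs: create a read-only snapshot of the data subvolume before updating.
btrfs subvolume snapshot -r /srv/nextcloud /srv/.snapshots/nextcloud-$(date +%F)

# ZFS equivalent (pool/dataset name is an assumption):
# zfs snapshot tank/nextcloud@pre-update-$(date +%F)
```

If the update goes wrong, you can roll back to the snapshot instead of restoring from a full backup.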
Thank you. Yes, I spent a lot of time trying to figure out convenient tools, and my choice was restic. Now this leads to one question: I have very often faced issues when upgrading the Docker images. I can see that you pin specific versions, but over time some will become obsolete, and it is not possible to skip major Nextcloud releases in one step. What is your upgrade strategy? Do you change the image versions (Nextcloud, Caddy, …) very often? All at once?
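Since Nextcloud only supports upgrading one major version at a time, stepping through the majors (e.g. 29 → 30 → 31) would look roughly like this; the version tags and the `app` service name are examples, not taken from the tutorial:

```shell
# 1. Bump the pinned tag in the env file one major at a time, e.g.:
#    NEXTCLOUD_VERSION=30.0.2-fpm
# 2. Pull and recreate the containers:
docker compose pull
docker compose up -d
# 3. Watch the in-container updater finish before the next step:
docker compose logs -f app
# Repeat with NEXTCLOUD_VERSION=31.0.5-fpm once 30.x is healthy.
```

A snapshot or backup before each step keeps every intermediate state recoverable.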
Hi there, and thank you for this great guide! It was very easy to follow and worked great. However, I found a small issue that can nonetheless cause great frustration. The issue is with the passwords you generate:
In my experience, if the Redis password contains characters that need to be escaped in URLs, then Nextcloud cannot connect to Redis. The symptom is that logging in simply does not work: you stay at the login screen, and it counts as if you had entered wrong credentials.
I replaced all special characters in my Redis password and then it worked. I assume that only the & is problematic, but it would probably be safer to eliminate all special characters and just make the passwords longer. I also don't know whether the Postgres password has the same problem.
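One way to avoid the escaping problem entirely is to generate passwords from a URL-safe alphabet in the first place; a sketch using standard tools (the length of 48 is an arbitrary choice):

```shell
# Generate a 48-character password containing only letters and digits,
# so it never needs URL escaping in Redis or Postgres connection strings.
openssl rand -base64 64 | tr -dc 'A-Za-z0-9' | head -c 48; echo
```

Dropping the special characters costs a little entropy per character, which the extra length more than makes up for.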
3 posts were split to a new topic: Upgrade problem with Nextcloud docker compose setup with Caddy (2024)
I would like to suggest a few more tweaks:

- Set the network mode for port 443 to `host` and set up a health check:

```yaml
    ports:
      - target: 80
        published: 80
        protocol: tcp
        # mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    cap_add:
      - NET_BIND_SERVICE
    healthcheck:
      test: nc -zv 127.0.0.1 443
```
- Then, in the Caddyfile, forward the real IP and set up health-check probes:

```
reverse_proxy nextcloud:80 {
    header_up X-Real-IP {http.request.remote.host}
    health_uri /ocs/v2.php/apps/serverinfo/api/v1/info
    health_headers {
        NC-Token yourtoken
    }
}
```
- You can set up multiple replicas of the main Nextcloud service, and Caddy should load-balance across them as long as it's DNS-RR:

```yaml
    deploy:
      replicas: 2
      endpoint_mode: dnsrr
      resources:
        reservations:
          memory: 1.5g
        limits:
          memory: 1.5g
```
Does the proxy network defined on the Docker side at the beginning need to match the proxy definitions throughout the config? The first proxy defined in Docker is 172.16.0.0/24, but elsewhere it's 172.16.0.0/12. One issue I've run into a few times is that in my youth I defined our networks as a set of /24 networks in the 172.24 range, like 172.24.4.0/24 and 172.24.6.0/24. Mostly this hasn't been a problem, except for a few instances where an appliance used 172.16.0.0/12 internally and therefore wouldn't route to my network, since it saw it as local. If I change everything to 172.16.0.0/24, would that work?
Many thanks to everyone for your messages and comments. I will answer them in a slightly longer post.
Thank you for your message, abraemer. From my experience so far, I don't share your concern: with Docker secrets, the Redis command correctly escaped, and no Redis URLs to connect to, the risk of being affected should be relatively low. However, as development continues in all areas, I will include it in a Q&A.
Hello trajano, I can't quite follow your thoughts. What are you setting ports for? For the reverse proxy? If so, this is already described under The Reverse Proxy Stack: Creation of the required files. The proxy already has a health check there.
The fact that you are using NET_BIND_SERVICE leads me to believe that you are running Docker rootless? If so, it would be more helpful for newcomers if you pointed this out. Otherwise, it's all too easy to take the snippet and then run into the next problem.
If you were purely interested in health checks, then the Nextcloud Stack (docker-compose.yml) already uses one for the web server, for Redis and the database too. Something like this would be conceivable for the app:
```yaml
  app:
    ...
    healthcheck:
      test: curl -sSf 'http://web/status.php' | grep '"installed":true' | grep '"maintenance":false' | grep '"needsDbUpgrade":false' || exit 1
      interval: 30s
      timeout: 5s
      retries: 3
```
As far as load balancing is concerned, this is definitely an interesting idea. Please note: if you use health_headers with tokens/secrets, these may be visible in the docker inspect output, so they may not be suitable for production secrets.
Thanks, CThomas, for the hint. This is indeed a bit confusing and, measured against my initial goal, not quite correctly mapped:
docker network create proxy --subnet=172.16.0.0/24 creates the subnet 172.16.0.0/24 with usable addresses 172.16.0.1 to 172.16.0.254.
This is a deliberate choice and must/can be changed as described if exactly this subnet is already being used.
trusted_proxies static 172.16.0.0/12 is in fact incorrect and should actually be trusted_proxies static 172.16.0.0/24
We create a proxy network and specify the subnet to be used. Therefore, we can also state very explicitly for trusted_proxies from which IP range requests will arrive. Hence, it should be trusted_proxies static 172.16.0.0/24.
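In the Caddyfile of the web server, that corrected setting would sit in the servers block of the global options; a sketch, with only the subnet differing from the original tutorial:

```
{
    servers {
        trusted_proxies static 172.16.0.0/24
    }
}
```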
Why did I use trusted_proxies static 172.16.0.0/12? Two things:
- it is a careless mistake
- the origin of that carelessness is that my tutorial still has its roots in wwe's tutorial
wwe only creates the proxy network with docker network create proxy. So it is not at all clear what the subnet of the proxy network is; it can be anywhere in Docker's default address pool. The default IP range for Docker (when using the default bridge network) is 172.17.0.1 to 172.17.255.254.
Then my careless mistake comes into play, because the correct configuration of the Caddy web server would then be trusted_proxies static 172.17.0.0/16.
I'll change the tutorial, but I hope this gives you some more clarity.
EDIT: I forgot to mention TRUSTED_PROXIES in nextcloud.env. The same applies there, so it should be TRUSTED_PROXIES=172.16.0.0/24.
@estebanium, the

```yaml
      - target: 443
        published: 443
        protocol: tcp
        mode: host
```

allows Caddy to see the real source IP; otherwise it will only see the NATed IP.
Reserved IP addresses - Wikipedia specifies that 172.16.0.0/12 is masked on 12 bits, not 24. You'd notice that after a few restarts, the default IP Docker assigns may start with 172.someothernumber.
Ah, now I know what you mean. So you are saying that you don't use the proxy Caddy and that you publish the Nextcloud web Caddy directly on the host?
To ensure that the web server receives the requesting Internet IP address (203.0.113.123) from the proxy Caddy, and not the IP address of the proxy Caddy itself (172.16.0.7), the entry trusted_proxies static 172.17.0.0/16 in the Caddyfile of the web server exists precisely for this purpose. Therefore I do not understand your post.
See here: Global options (Caddyfile) — Caddy Documentation
I am referring to the default setting of Docker, which has a /16 range.
If you limit it, you're assuming that the network that gets created will be 172.XX.xx.xx, but the network that gets created can be a different subnet. Try this: create a compose file with more than one network, then run docker network inspect the_networks | grep 172. You will notice that although the subnet mask is /16, the second octet is random and not fixed unless you lock down the IP range.
I've tweaked my setup a bit more. You may want to consider adding PgBouncer to do connection pooling for PostgreSQL:

```yaml
  postgres:
    image: mirror.gcr.io/edoburu/pgbouncer
    environment:
      DB_HOST: real_postgresql
      DB_USER: xxx
      DB_PASSWORD: xxx
      DB_NAME: xxx
      AUTH_TYPE: scram-sha-256
    depends_on:
      - real_postgresql
    healthcheck:
      test: ['CMD', 'pg_isready', '-h', 'localhost']
    deploy:
      resources:
        reservations:
          memory: 256m
        limits:
          memory: 256m
```
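With a pooler service named like the snippet above, the Nextcloud container would then talk to PgBouncer instead of Postgres directly; in the tutorial's nextcloud.env that would look roughly like this (the service name and the xxx placeholders are taken from the snippet, not verified against the full stack):

```
# nextcloud.env (sketch)
POSTGRES_HOST=postgres   # the pgbouncer service above
POSTGRES_DB=xxx
POSTGRES_USER=xxx
POSTGRES_PASSWORD=xxx
```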
Ah, now I understand your infrastructure a bit more. In your case you expose a Caddy outside of Docker; in my case I put everything inside Docker.
Thank you for the clarification. I'm not entirely familiar with the workings of Docker/Caddy/NC, so I wasn't sure if it played out differently behind the scenes.
I'm using your guide here, along with a couple of others, to build out the setup I want for our system. I want to add a couple of containers for the Sendent Sync system to link in our Exchange server, and a CRM container to help my sales staff keep on track with their customers/prospects. I'll need Caddy for the CRM especially.
If I wanted to use a more recent build of the apps, would I just swap them out in the .env file? For example, change NEXTCLOUD_VERSION=29.0.7-fpm to NEXTCLOUD_VERSION=31.0-fpm or NEXTCLOUD_VERSION=31.0.5-fpm?
I also see the PHP memory limit is raised; would that help me with linking Exchange over IMAP when we have large mailboxes? I have a few tens of thousands of messages in my inbox, and Mail baulks at processing that many, throwing errors in the NC log.
Setup notify_push
Thanks for this one. It took a bit of time to get it working on my setup, as I am on a Pi and found out the hard way that the notify_push image is not available for arm64, so I just built my own: Update docker.yml · trajano/notify_push@ea97b2f · GitHub
Could I up the value of PHP_MEMORY_LIMIT=1G to 2G or 4G if the server has the resources? I have problems running Mail linked to my Exchange server via IMAP, and it throws errors all over the place when I try to use it.
I would also like to use newer versions of NC and PostgreSQL. If I put NEXTCLOUD_VERSION=31.0.5-fpm in nextcloud.env and use image: postgres in the db section of the docker-compose file, would that do what I want?
I'm just playing around with my own custom Docker and Caddy build myself. I'm in the opposite situation from yours (I'm on a Pi).
But rather than upping the memory limit, increase pm.max_children in php-fpm/conf.d/zzz-max-children.ini, or make the pool static. In my Dockerfile I did this:
```dockerfile
COPY <<EOF /usr/local/etc/php-fpm.d/zzz-max-children.conf
[www]
pm.max_children = 20
EOF
```
Then, in the Caddyfile for the external server, I added an HTTP transport limit of 20 connections per host, so that the proxied Caddy + PHP-FPM never exceeds the number of children.
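That per-host connection cap lives in the http transport options of reverse_proxy; a sketch, where the upstream name web:80 is an assumption matching this thread's setup:

```
reverse_proxy web:80 {
    transport http {
        max_conns_per_host 20
    }
}
```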
I'm still playing around with it (Comparing nextcloud:master...trajano:master · nextcloud/docker · GitHub), but I may abandon this project and switch to Immich, as So long, Nextcloud - #4 by saettel.beifuss0 had recommended.
I will kind of miss the OneDrive-like functionality and the shared contact list feature (useful for the family to keep doctors' details updated). But the main reason for this project, for me, was Google disabling key APIs for gphoto-sync.
I still kind of like the concept of Nextcloud, given the week I spent with it. But silly things like Vue/ReactJS errors on the console and deprecation warnings in the log messages are irking me now, as a dev.