Setting up Nextcloud to be served under /cloud subfolder with Caddy, and other problems

So, uh, I have quite a setup I’ve been trying to serve on my personal network. I’ll try my best to provide what details I can.

I have an Ubuntu server with Docker installed via Snap. Caddy is installed directly on Ubuntu itself, and I’ve been using it to serve various services running in Docker under specific subfolders. I’m using Avahi to resolve the hostname rather than a DNS server. Obviously, this means I only intend Nextcloud to be accessible within my Wi-Fi network rather than publicly.

I’ve been struggling with two problems:

  1. Serving Nextcloud at hostname.local/cloud, and
  2. getting the Nextcloud Docker container itself to connect to MariaDB.

In regards to serving under /cloud, I’ve been using Portainer and deploying this Compose configuration as a stack:

version: '3.8'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.6
    restart: always
    ports:
      - 3306:3306
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=pass1
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 8383:80
    links:
      - db
    volumes:
      - /media/www/nextcloud/data:/var/www/html/data
      - nextcloud:/var/www/html
      - /media/www/nextcloud/config/.htaccess:/var/www/html/.htaccess
    environment:
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

For my Caddyfile in Ubuntu, I have:

hostname.local, 10.0.0.171 {
  tls /hostname.local.cert.pem /hostname.local.key.pem
  encode zstd gzip

  # Handle subfolders
  # Here's portainer
  handle_path /docker/* {
    reverse_proxy https://localhost:9443 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }
  # Here's Nextcloud...eventually
  handle /cloud* {
    reverse_proxy localhost:8383
  }

  # Otherwise, host PHP
  root * /var/www/php/
  php_fastcgi unix//run/php/php8.1-fpm.sock
  file_server
}

As made obvious by this line in the Compose file – - /media/www/nextcloud/config/.htaccess:/var/www/html/.htaccess – I’ve attempted to configure Apache with the .htaccess file below. It’s mostly a copy-paste of what was already in the container, with the rewrite rules changed to point at /cloud.

<IfModule mod_headers.c>
  <IfModule mod_setenvif.c>
    <IfModule mod_fcgid.c>
       SetEnvIfNoCase ^Authorization$ "(.+)" XAUTHORIZATION=$1
       RequestHeader set XAuthorization %{XAUTHORIZATION}e env=XAUTHORIZATION
    </IfModule>
    <IfModule mod_proxy_fcgi.c>
       SetEnvIfNoCase Authorization "(.+)" HTTP_AUTHORIZATION=$1
    </IfModule>
    <IfModule mod_lsapi.c>
      SetEnvIfNoCase ^Authorization$ "(.+)" XAUTHORIZATION=$1
      RequestHeader set XAuthorization %{XAUTHORIZATION}e env=XAUTHORIZATION
    </IfModule>
  </IfModule>

  <IfModule mod_env.c>
    # Add security and privacy related headers

    # Avoid doubled headers by unsetting headers in "onsuccess" table,
    # then add headers to "always" table: https://github.com/nextcloud/server/pull/19002
    Header onsuccess unset Referrer-Policy
    Header always set Referrer-Policy "no-referrer"

    Header onsuccess unset X-Content-Type-Options
    Header always set X-Content-Type-Options "nosniff"

    Header onsuccess unset X-Frame-Options
    Header always set X-Frame-Options "SAMEORIGIN"

    Header onsuccess unset X-Permitted-Cross-Domain-Policies
    Header always set X-Permitted-Cross-Domain-Policies "none"

    Header onsuccess unset X-Robots-Tag
    Header always set X-Robots-Tag "none"

    Header onsuccess unset X-XSS-Protection
    Header always set X-XSS-Protection "1; mode=block"

    SetEnv modHeadersAvailable true
  </IfModule>

  # Add cache control for static resources
  <FilesMatch "\.(css|js|svg|gif|png|jpg|ico|wasm|tflite)$">
    Header set Cache-Control "max-age=15778463"
  </FilesMatch>

  <FilesMatch "\.(css|js|svg|gif|png|jpg|ico|wasm|tflite)(\?v=.*)?$">
    Header set Cache-Control "max-age=15778463, immutable"
  </FilesMatch>

  # Let browsers cache WOFF files for a week
  <FilesMatch "\.woff2?$">
    Header set Cache-Control "max-age=604800"
  </FilesMatch>
</IfModule>

# PHP 7.x
<IfModule mod_php7.c>
  php_value mbstring.func_overload 0
  php_value default_charset 'UTF-8'
  php_value output_buffering 0
  <IfModule mod_env.c>
    SetEnv htaccessWorking true
  </IfModule>
</IfModule>

# PHP 8+
<IfModule mod_php.c>
  php_value mbstring.func_overload 0
  php_value default_charset 'UTF-8'
  php_value output_buffering 0
  <IfModule mod_env.c>
    SetEnv htaccessWorking true
  </IfModule>
</IfModule>

<IfModule mod_mime.c>
  AddType image/svg+xml svg svgz
  AddType application/wasm wasm
  AddEncoding gzip svgz
</IfModule>

<IfModule mod_dir.c>
  DirectoryIndex index.php index.html
</IfModule>

<IfModule pagespeed_module>
  ModPagespeed Off
</IfModule>

<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteCond %{REQUEST_URI} !^/cloud/
  RewriteRule ^(.*)$ /cloud/$1 [L,R=301]
  RewriteCond %{HTTP_USER_AGENT} DavClnt
  RewriteRule ^$ /cloud/remote.php/webdav/ [L,R=302]
  RewriteRule .* - [env=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
  RewriteRule ^\.well-known/carddav /cloud/remote.php/dav/ [R=301,L]
  RewriteRule ^\.well-known/caldav /cloud/remote.php/dav/ [R=301,L]
  RewriteRule ^remote/(.*) /cloud/remote.php [QSA,L]
  RewriteRule ^(?:build|tests|config|lib|3rdparty|templates)/.* - [R=404,L]
  RewriteRule ^\.well-known/(?!acme-challenge|pki-validation) /cloud/index.php [QSA,L]
  RewriteRule ^(?:\.(?!well-known)|autotest|occ|issue|indie|db_|console).* - [R=404,L]
</IfModule>

AddDefaultCharset utf-8
Options -Indexes

So yeah, any help on all the above would be appreciated. I think I had also played around a tiny bit with an environment variable, OVERWRITEWEBROOT or something like that, in previous attempts?

In regards to connecting with MariaDB: if I attempt any connection from the Nextcloud container to db, the connection always times out. What’s up with that?

$ sudo docker exec -it nextcloud_app_1 /bin/bash
# curl -v telnet://db:3306 --connect-timeout 10
*   Trying 172.28.0.2:3306...
* Connection timed out after 10001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 10001 milliseconds

Just in case, I did run sudo ufw allow 3306/tcp, but that does not appear to solve the issue.

Hi japtar10101 :wave:

Thanks for posting, and welcome to the forum :slight_smile:

tl;dr

If you haven’t already, I think you should probably ask on the caddy forum.

more

First, a disclaimer: I’m familiar with docker - I use it in a professional context - but I run Nextcloud on bare metal. So I’m not the best person to comment on that part.

I don’t know caddy at all. It looks interesting, though, so thank you for introducing me to something new :+1:

…However…

This is probably personal preference, but I do have slight reservations about it (EDIT: caddy, I mean). IME “simplify your THING” can be a double-edged sword: all too often, it implies “complicate troubleshooting your THING”. Which is fine if you’re paying someone else to do the troubleshooting :smiley: …but less good if it’s your job :frowning:

For instance: My mail server has quite a few moving parts, and a correspondingly high number of failure modes. But because I set all those parts up - and documented things as I went along - I have a pretty good idea where to start when it goes wrong: how to test, which log files to grep for which terms, etc. I’m not confident I’d have the same level of insight if I’d installed an “off the peg” solution.

Having said all that…

I’m using Avahi to resolve the hostname rather than a DNS server

Discuss, 20 marks :thinking:

Why, AAMOI? Can your router not do static assignments? (Just wondering).

More Qs:

  • can you connect via IP instead of hostname?
  • can you serve anything at http://hostname.local? (e.g. a generic, default web server)

If the connection to Maria hangs, that suggests either:

  1. the credentials are wrong, or
  2. (more fundamentally) Maria isn’t accessible to Nextcloud

What do the Nextcloud logs say about this?

hth

Best of luck.

Thanks for the reply. In regards to each question:

Why, AAMOI? Can your router not do static assignments? (Just wondering).

Honestly, it just came from a beginner’s read of Ubuntu’s documentation on setting up a web server. In fairness, my current router is provided by the ISP, and while it does allow assigning specific IP addresses per machine, it does not allow customizing the DNS server IP address.

It’s annoying: I can’t create subdomains that Bonjour on Windows recognizes.

can you connect via IP instead of hostname?

To Nextcloud? No. Just like hostname.local/cloud gives me a generic Apache 404 page, 10.0.0.171/cloud gives me the same result.

can you serve anything at http://hostname.local? (e.g. a generic, default web server)

Yes! Not only can I serve a generic HTML or PHP webpage, but I’ve also been able to serve Docker containers like Portainer at hostname.local/docker, as seen in the Caddyfile.

What do the Nextcloud logs say about this?

I, uh, only get a 404 when opening what I was hoping would serve Nextcloud, so the logs themselves aren’t very useful for judging why Nextcloud can’t access MariaDB:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.22.0.3. Set the 'ServerName' directive globally to suppress this message

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.22.0.3. Set the 'ServerName' directive globally to suppress this message

[Sun Apr 02 05:58:35.221987 2023] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.54 (Debian) PHP/8.1.17 configured -- resuming normal operations

[Sun Apr 02 05:58:35.222037 2023] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

172.22.0.1 - - [02/Apr/2023:09:15:48 +0000] "GET /cloud HTTP/1.1" 301 709 "https://10.0.0.171/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0"

172.22.0.1 - - [02/Apr/2023:09:15:48 +0000] "GET /cloud/cloud HTTP/1.1" 404 619 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0"

172.22.0.1 - - [02/Apr/2023:09:15:53 +0000] "GET /cloud/ HTTP/1.1" 404 619 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0"

In previous attempts, where I reconfigured Caddy to serve Nextcloud at hostname.local as a test, the installation wizard would appear, but on attempting to configure MariaDB, I’d get an unable-to-connect SQLSTATE[HY000] [2002] message, which, given how long it takes, I assumed was a timeout rather than a credential error. I seem to recall fiddling with both the root and nextcloud passwords, but it appeared the root password wasn’t the correct one to use.

Incidentally, there are a couple of other services I’m running, also served through Caddy, including Jellyfin, Gollum, Gitea, and Vaultwarden (I snipped those parts out of the Caddyfile in the OP for brevity). I double-checked whether any of them had MariaDB or another service that could potentially take up port 3306, but I don’t see any from Docker, at least. I also had MariaDB installed on Ubuntu itself, but running sudo systemctl stop mariadb doesn’t seem to affect that connection, either.

I don’t use caddy, but I took a look at the tutorial.

Looks like you need to move a } from line 14 to the bottom, under the last }.

But this is just my instinct after quickly reading the tut.

Instead of connecting to something you don’t know is there, check the server’s open ports:

sudo netstat -tulpen

This will show a list of all open ports on your server. If port 3306 is not there, connecting to it is pointless.

The .htaccess file should be written by occ maintenance:update:htaccess.

IMO you should first set up a web server that listens at http(s)://ip.ip.ip.ip/cloud/test.htm.

Once you reach that, and netstat says 3306 is open, you can connect to MySQL using the mysql client and the root password (pass1):

mysql -u root -p

The root password will be requested, and once you’re in you can quit with \q.

You can now configure Nextcloud or run the installer. Maybe create separate topics per issue, or handle one at a time.

My question, as a non-Docker/Caddy user, is why there are so many directories in use. I see:

/var/www/html
/var/www/html/.htaccess
/var/www/html/data
/media/www/nextcloud/data
/media/www/nextcloud/config/.htaccess

So where is it actually hosted from?

on attempting to configure MariaDB, I’d get an unable-to-connect SQLSTATE[HY000] [2002] message, which, given how long it takes, I assumed was a timeout

I suspect you’re right, but I’m not sure.

This may be an obvious question, but where is Maria actually running?

If you open a shell session within the container, can you actually connect to Maria?

(BTW, I don’t understand this part: “it does not allow customizing the IP address to the DNS server”; do you mean instructing the router to use different nameservers from the default one(s) assigned by your ISP? I’m also not sure of the relevance of Ubuntu’s documentation on setting up a web server(?) But I don’t want to get distracted; those are secondary issues, I think).

Use pihole.

Set DNS server under DHCP in router to pihole - or:

Set manual DNS under the connection on your device. However, I have never seen a router where you cannot define your own DNS name servers in the DHCP server?

May I suggest:
Set up a PiHole Docker container as the DHCP and DNS server. Make sure DHCP points to the router as the gateway, and DNS is PiHole itself.
Now define A records as you please, and define any open public DNS servers as DNS forwarders. Voilà! You don’t use the ISP router’s locked-down DHCP.

I’ll play around with Caddy in a moment, but since I can get to the MariaDB stuff more quickly, here are my results in regards to your suggestions:

Thanks, that sounds like a good place to start troubleshooting. On the Ubuntu host, at least, it does appear that port is opened by Docker:

 $ sudo netstat -tulpen | grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      0          207414     142258/docker-proxy
tcp6       0      0 :::3306                 :::*                    LISTEN      0          207417     142266/docker-proxy
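
(Those docker-proxy entries suggest it’s Docker publishing the port, rather than the MariaDB I had installed on Ubuntu itself. I believe docker port can confirm which container owns the mapping, using the container name from my other commands:)

$ sudo docker port nextcloud_db_1

If db owns the published port, the 3306 mapping should show up in that list.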

However…

This doesn’t appear to work.

 $ mysql -u root
ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
 $ mysql -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)

Weird. ETA: wait, silly me, I forgot to add the host argument to the command, given MariaDB is being served from a Docker container and not from Ubuntu itself. I’ve changed my command a bit, and it succeeded:

 $ mysql -h 127.0.0.1 -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.6.12-MariaDB-1:10.6.12+maria~ubu2004-log mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> \q
Bye

ETA, cont.: This makes me suspect something fishy is going on with either how Docker configures the networks for db and app, or that the links: line below isn’t working as intended:

services:
  db:
#snip
  app:
    image: nextcloud
#snip
    links:
      - db
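
(To narrow this down, I believe docker network ls and docker network inspect can show whether db and app actually ended up on the same network; the network name below is a guess based on how Compose usually names the default one:)

$ sudo docker network ls
$ sudo docker network inspect nextcloud_default

Both the db and app containers should appear under the network’s “Containers” section if they share it.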

In the Nextcloud Docker container itself, apt just doesn’t work, so I can’t even install netstat or a mysql client to test any of the above commands there.

$ sudo docker exec -it nextcloud_app_1 /bin/bash
root@32defa7aecdd:/var/www/html# mysql
bash: mysql: command not found
root@32defa7aecdd:/var/www/html# netstat 
bash: netstat: command not found
root@32defa7aecdd:/var/www/html# apt install net-tools
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package net-tools
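
ETA: in hindsight, the “Unable to locate package” error is probably just because the image ships without apt package lists; running apt-get update first should make installs work, something like:

# apt-get update && apt-get install -y net-tools mariadb-client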

ETA: I double-checked whether db is able to run mysql, just in case, and as expected, it can:

$ sudo docker exec -it nextcloud_db_1 /bin/bash
# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.6.12-MariaDB-1:10.6.12+maria~ubu2004-log mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> \q
Bye

ETA: apologies for these long-winded attempts at troubleshooting MariaDB. Now that I know at least Ubuntu can connect to the database via 127.0.0.1:3306, I tried doing something similar within Nextcloud’s container, and it appears to work. I just need to enter the server’s IP address instead of localhost.

$ sudo docker exec -it nextcloud-app-1 /bin/bash
root@32defa7aecdd:/var/www/html# curl -v telnet://10.0.0.171:3306 --connect-timeout 10
*   Trying 10.0.0.171:3306...
* Connected to 10.0.0.171 (10.0.0.171) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failure writing output to destination
* Closing connection 0
root@32defa7aecdd:/var/www/html# curl -v telnet://127.0.0.1:3306 --connect-timeout 10
*   Trying 127.0.0.1:3306...
* connect to 127.0.0.1 port 3306 failed: Connection refused
* Failed to connect to 127.0.0.1 port 3306: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 3306: Connection refused

In regards to where files are hosted

OK, that’s good to know. Reading through Nextcloud’s docs, I was getting the impression that the best way to serve Nextcloud under a subfolder like /cloud was to configure Apache, which led me to overwrite the .htaccess content. If it should be generated by Nextcloud’s own code, though, you’re right, I should probably get rid of that and try something else.
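
(If I understand the earlier suggestion correctly, once things are installed I could regenerate .htaccess from inside the container instead of mounting my own, by running occ as the web user; something like:)

$ sudo docker exec -u www-data nextcloud_app_1 php occ maintenance:update:htaccess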

I have, for example, attempted to host Nextcloud under /var/www/html/cloud before, with some mixed success, actually. The time-out with MariaDB became the main blocker with that method.

Great question. So, here’s what I was hoping to set up:

  • Have Nextcloud’s PHP code reside in Docker’s own volume:
    volumes:
      nextcloud:
    
  • Make the files uploaded to Nextcloud reside on my external drive, mounted at /media/www, under the folder nextcloud/data

In my poor understanding of Docker Compose:

volumes:
  nextcloud:
  db:

The lines above create persistent named volumes nextcloud and db within Docker. I haven’t figured out how to access the content of these volumes yet, but I think not caring is the point. More importantly, setting things up this way means Docker won’t nuke the content held in these volumes (unless I explicitly tell it to) when I either reload Nextcloud in Docker or restart the server for regular maintenance.
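
(For the record, I believe docker volume inspect shows where a named volume’s data actually lives on the host; the volume name below is a guess based on the stack name:)

$ sudo docker volume inspect nextcloud_nextcloud

The Mountpoint field should point somewhere under /var/lib/docker/volumes/, though with Docker installed via Snap the path may differ.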

services:
  db:
    image: mariadb:10.6
# snip
    volumes:
      - db:/var/lib/mysql

The above spins up a container from the MariaDB image, version 10.6, with the Docker volume db mounted at /var/lib/mysql. I assume this is to keep the database data inside db. By the same token:

services:
# snip
  app:
    image: nextcloud
# snip
    volumes:
      - /media/www/nextcloud/data:/var/www/html/data
      - nextcloud:/var/www/html
      - /media/www/nextcloud/config/.htaccess:/var/www/html/.htaccess

This spins up a container from the Nextcloud image, with the named volume nextcloud mounted at /var/www/html, the external drive path /media/www/nextcloud/data mounted at /var/www/html/data (which apparently you can do without any conflicts?), and the file /media/www/nextcloud/config/.htaccess mounted at /var/www/html/.htaccess (I will get rid of this last one). So this is a long-winded way of saying that, as far as I can tell, Nextcloud is being hosted from /var/www/html by the Apache that comes with the image, with the content coming from Docker’s named volume nextcloud.

It’s not the first time Comcast screwed over their customers. Yes, they literally hardcode their router’s DNS server.



Alright, I ran a bunch of tests with Caddy to see if I could get a better understanding of how it handles things. The results were quite interesting, and hopefully they give anyone here helping out an idea for making this work with the Nextcloud Docker container.

So I backed up my /etc/caddy/Caddyfile, and then changed its content to what’s below (note: I’m using hostname as a placeholder for the server’s actual hostname):

hostname.local, 10.0.0.171 {
	tls /hostname.local.cert.pem /hostname.local.key.pem
	encode zstd gzip

	# Keeping Portainer around
	handle_path /docker/* {
		reverse_proxy https://localhost:9443 {
			transport http {
				tls_insecure_skip_verify
			}
		}
	}

	# Handle subfolders
	handle /cloud1* {
		root * /var/www/html/
		php_fastcgi unix//run/php/php8.1-fpm.sock
		file_server
	}

	handle_path /cloud2* {
		root * /var/www/html/
		php_fastcgi unix//run/php/php8.1-fpm.sock
		file_server
	}

	handle /cloud3* {
		reverse_proxy localhost:8888
	}

	handle_path /cloud4* {
		reverse_proxy localhost:8888
	}

	# Otherwise, host the PHP dashboard website
	root * /var/www/php/
	php_fastcgi unix//run/php/php8.1-fpm.sock
	file_server
}

In particular, I wanted to get a better understanding of handle vs handle_path. In /var/www/html, I’ve added this index.php file:

<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<title>Test</title>
</head>
<body>
	<h1>It worked!</h1>
	<p>
		Request URI: <?php echo parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH); ?>
	</p><p>
		Current directory: <?php echo dirname(__FILE__); ?>
	</p><p>
		Test link: <a href="/">back to root</a>
	</p>
</body>
</html>

I also ran a quick Ecosia search, and it gave me this simple image for testing hosting a simple PHP site from Docker. I made the quick docker-compose.yml file below at ~/caddy/git, created a test.php file with the same content as above, then ran sudo docker compose up -d:

version: "3.7"

services:
  php:
    image: trafex/php-nginx
    restart: always
    ports:
      - 8888:8080
    volumes:
      - ./test.php:/var/www/html/index.php

So what happens? Entering hostname.local/cloud1 gives me:
[screenshot of the test page]

For hostname.local/cloud2, which used handle_path:
[screenshot of the test page]

So when hosting PHP content from the server directly, there appears to be no difference between handle and handle_path. When Caddy is used as a reverse proxy for the Docker containers, though, a difference does emerge. Here’s hostname.local/cloud3, which uses handle:
[screenshot of the test page]

And hostname.local/cloud4, which uses handle_path:
[screenshot of the test page]

So handle_path takes out /cloud4, while handle leaves it in.
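
(From what I can tell in Caddy’s docs, handle_path is essentially shorthand for a handle block plus a strip_prefix, so these two should behave the same:)

	handle_path /cloud4* {
		reverse_proxy localhost:8888
	}

	# ...is equivalent to...
	handle /cloud4* {
		uri strip_prefix /cloud4
		reverse_proxy localhost:8888
	}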

I’ve also added a link in all the test PHP files, and clicking on it leads back to the content of /var/www/php. This is intentional on my part: I am hosting a custom PHP website there. I do recall, however, that when I changed the original Caddyfile from:

  handle /cloud* {
    reverse_proxy localhost:8383
  }

to

  handle_path /cloud* {
    reverse_proxy localhost:8383
  }

The installation page would appear, but without the images, CSS, or JavaScript files loading. From a quick glance at the console, it looked like the install page was trying to load these files from hostname.local instead of hostname.local/cloud.

ETA: replicated it; here’s what I mean:
[screenshot of the install page loading without its CSS/JS]

ETA2: As an aside, I’ve actually known for a bit that Nextcloud has this bit of code under /var/www/html/config/reverse-proxy.config.php:

$overwriteWebRoot = getenv('OVERWRITEWEBROOT');
if ($overwriteWebRoot) {
  $CONFIG['overwritewebroot'] = $overwriteWebRoot;
}

I could try testing tomorrow what happens if I add OVERWRITEWEBROOT=/cloud to docker-compose.yml. I could also try mounting the named volume nextcloud onto /var/www/html/cloud as well. Dunno, so long as it’s Docker-related, it doesn’t hurt to try.
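
(For reference, I think that would just mean one more line in the app service’s environment, assuming the image really does pick it up via that reverse-proxy.config.php snippet:)

  app:
    image: nextcloud
# snip
    environment:
      - OVERWRITEWEBROOT=/cloud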

I…actually got this to work! It took a LOT of trial and error, but at first blush, all the assets on the site appear to be loading. I’ll play around with it a bit more, but before I forget, I’ll try to post my solution here. In broad strokes, here’s what I ended up doing:

  1. Figure out the host/IP address of the database container in Docker. For my server, it turned out to be 10.0.0.171:3306 (the local intranet IP address and MariaDB’s default port).
  2. Fix the ownership and permissions on the directory that I’ll be mounting as /var/www/html/data/ so Nextcloud can access it.
  3. Configure Caddy and UFW (firewall) so Nextcloud can be hosted on 10.0.0.171:8283 (or whatever open port you have available.)
  4. Open 10.0.0.171:8283 in the web browser and install Nextcloud.
  5. Create a custom config.php file, and in Portainer, update the Nextcloud stack to mount the new config file.
  6. Rewrite the Caddyfile to now host Nextcloud at hostname.local/cloud.

1. Database Host/IP

So while I had the Docker Compose stack from the quote above still up, I played around with the Nextcloud container a bit to figure out what IP address it could reach. I ended up using curl in this case, since the container has very, very few tools installed. Anyway, I basically ran:

$ sudo docker exec -it nextcloud-app-1 /bin/bash

To access the container, then in that container, ran:

# curl -v telnet://10.0.0.171:3306 --connect-timeout 10
*   Trying 10.0.0.171:3306...
* Connected to 10.0.0.171 (10.0.0.171) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failure writing output to destination
* Closing connection 0

Despite the last line, I knew I was onto something, because typically, if curl can’t connect to an IP, it gives a Connection refused error instead of the Binary output warning:

# curl -v telnet://127.0.0.1:3306 --connect-timeout 10
*   Trying 127.0.0.1:3306...
* connect to 127.0.0.1 port 3306 failed: Connection refused
* Failed to connect to 127.0.0.1 port 3306: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 3306: Connection refused

By the way, to find the server’s IP address, use

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:87:a3:00:c4:46 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.171/24 metric 100 brd 10.0.0.255 scope global dynamic enp3s0f0
       valid_lft 165309sec preferred_lft 165309sec
    inet6 2601:249:8300:6570::67ce/128 scope global dynamic noprefixroute
       valid_lft 431956sec preferred_lft 431956sec
    inet6 2601:249:8300:6570:ae87:a3ff:fe00:c446/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 345600sec preferred_lft 345600sec
    inet6 fe80::ae87:a3ff:fe00:c446/64 scope link
       valid_lft forever preferred_lft forever

For me, I knew the second entry was the interface for my server’s Ethernet port, so that was the IP address I used.
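
(Alternatively, I believe hostname -I prints just the assigned addresses, which is a bit less to scan through:)

$ hostname -I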

2. Fix data folder ownership

I kept getting a “can’t access directory” error, so I fixed up who owned the data folder:

$ cd /media/www/nextcloud
$ sudo chown www-data:www-data data
$ sudo chmod o-rx data
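
(My understanding is that this works because www-data inside the official Nextcloud image is UID 33, which happens to match www-data on Ubuntu. A quick way to confirm the ownership took:)

$ ls -ld /media/www/nextcloud/data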

3. Configure temporary IP:Port access with Caddy and UFW

I found that Nextcloud on Docker doesn’t let you change the web root folder during its installation process, so I begrudgingly updated /etc/caddy/Caddyfile to temporarily serve the site at 10.0.0.171:8283:

hostname.local, 10.0.0.171 {
  tls /hostname.local.cert.pem /hostname.local.key.pem
  encode zstd gzip

  # Handle subfolders
  # Here's portainer
  handle_path /docker/* {
    reverse_proxy https://localhost:9443 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }

  # Otherwise, host PHP
  root * /var/www/php/
  php_fastcgi unix//run/php/php8.1-fpm.sock
  file_server
}

# Make 10.0.0.171:8283 reverse proxy to Nextcloud
hostname.local:8283, 10.0.0.171:8283 {
  tls /hostname.local.cert.pem /hostname.local.key.pem
  encode zstd gzip
  reverse_proxy localhost:8383
}

Well, actually, since I was editing the Caddyfile a lot, I made a quick git repo in my home folder, copied and committed the file above as a backup, then made a zsh script to copy its content and reload the service:

#!/bin/zsh
cat Caddyfile | sudo tee /etc/caddy/Caddyfile > /dev/null
sudo systemctl reload caddy
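
(Before reloading, I believe caddy validate can catch syntax errors; this should be the right invocation for a Caddyfile:)

$ sudo caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile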

Run the script above, then edit the firewall to accept port 8283:

$ sudo ufw allow 8283/tcp

4. Install Nextcloud

So I first rewrote the docker-compose.yml file, since the links property didn’t really look like it was doing anything:

version: '3.8'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.6
    restart: always
    ports:
      - 3306:3306
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=pass1
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 8383:80
    depends_on: # Changed this line from links to depends_on
      - db
    volumes:
      - /media/www/nextcloud/data:/var/www/html/data
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=10.0.0.171:3306 # Also fixed the MariaDB host

Then tell Portainer to update the stack (or run docker compose up, same thing.)

With the Caddy configuration from Step 3, it should now be possible to access 10.0.0.171:8283 in the web browser. Run the install process and create a new admin account accordingly. I found that, at the end of the install process, Nextcloud redirects back to 10.0.0.171, which was annoying, but the next steps will fix that.

5. Create a custom config.php file

Nextcloud appears to allow custom configuration files in the /var/www/html/config folder, so long as the filename ends with .config.php. So I made the quick /media/www/nextcloud/config/hostname.config.php file below:

<?php

$CONFIG = [

/**
 * Your list of trusted domains that users can log into.
 */
'trusted_domains' =>
   [
    'hostname.local',
    '10.0.0.171'
  ],

/**
 * Where user files are stored.
 */
'datadirectory' => '/var/www/html/data',

/**
 * Proxy Configurations
 */

/**
 * The automatic hostname detection of Nextcloud can fail in certain reverse
 * proxy and CLI/cron situations. This option allows you to manually override
 * the automatic detection.
 */
'overwritehost' => 'hostname.local',

/**
 * When generating URLs, Nextcloud attempts to detect whether the server is
 * accessed via ``https`` or ``http``. However, if Nextcloud is behind a proxy
 * and the proxy handles the ``https`` calls, Nextcloud would not know that
 * ``ssl`` is in use, which would result in incorrect URLs being generated.
 * Valid values are ``http`` and ``https``.
 */
'overwriteprotocol' => 'https',

/**
 * Nextcloud attempts to detect the webroot for generating URLs automatically.
 * For example, if ``www.example.com/nextcloud`` is the URL pointing to the
 * Nextcloud instance, the webroot is ``/nextcloud``. When proxies are in use,
 * it may be difficult for Nextcloud to detect this parameter, resulting in
 * invalid URLs.
 */
'overwritewebroot' => '/cloud',

/**
 * Use this configuration parameter to specify the base URL for any URLs which
 * are generated within Nextcloud using any kind of command line tools (cron or
 * occ). The value should contain the full base URL:
 * ``https://www.example.com/nextcloud``
 *
 * Defaults to ``''`` (empty string)
 */
'overwrite.cli.url' => 'https://hostname.local/cloud',

/**
 * To have clean URLs without `/index.php` this parameter needs to be configured.
 */
'htaccess.RewriteBase' => '/cloud',
];

With the file made, I updated docker-compose.yml to mount the new config file, and let Portainer restart the stack again (docker compose up):

version: '3.8'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.6
    restart: always
    ports:
      - 3306:3306
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=pass1
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 8383:80
    depends_on:
      - db
    volumes:
      - /media/www/nextcloud/data:/var/www/html/data
# Mount config file as read-only
      - /media/www/nextcloud/config/hostname.config.php:/var/www/html/config/hostname.config.php:ro
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=pass2
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=10.0.0.171:3306

6. Update Caddy to host Nextcloud from /cloud

Lastly, update /etc/caddy/Caddyfile:

hostname.local, 10.0.0.171 {
  tls /hostname.local.cert.pem /hostname.local.key.pem
  encode zstd gzip

  # Handle subfolders
  # Here's portainer
  handle_path /docker/* {
    reverse_proxy https://localhost:9443 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }
  # Here's Nextcloud (needs to use handle_path like Portainer)
  handle_path /cloud* {
    reverse_proxy localhost:8383
  }

  # Otherwise, host PHP
  root * /var/www/php/
  php_fastcgi unix//run/php/php8.1-fpm.sock
  file_server
}

Also, clean up the firewall to remove port 8283:

$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
# ... snip ...
[21] 8283/tcp                   ALLOW IN    Anywhere
# ... snip ...
[38] 8283/tcp (v6)              ALLOW IN    Anywhere (v6)

$ sudo ufw delete 38
Deleting:
 allow 8283/tcp
Proceed with operation (y|n)? y
Rule deleted (v6)

$ sudo ufw delete 21
Deleting:
 allow 8283/tcp
Proceed with operation (y|n)? y
Rule deleted

Now open hostname.local/cloud and log in normally.
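
(As a final sanity check, I believe Nextcloud’s status.php should return a short JSON blob if the subfolder setup is working; the -k is there because of my self-signed certificate:)

$ curl -k https://hostname.local/cloud/status.php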

That’s fair. But as long as you can deactivate DHCP in the router, then “fack 'em”. :slight_smile: Set up your own DHCP and DNS on your own LAN. Then let them intercept DNS (port 53) for everything that leaves your LAN. As long as the DHCP server that responds on your LAN is your own, they can screw their customers as much as they like. Just wait until DNS resolution over HTTPS is standard over old-school DNS, and they’ll no longer be able to override anything. All it requires is a DHCP and DNS server of your own on the same network as your own equipment.