Collabora/Nextcloud dockerized: HTTP requests blocked by iptables behind a reverse proxy

Hi,

I have a dockerized nginx reverse proxy sitting in front of Nextcloud and Collabora. Nextcloud and Collabora each run in a separate container as well.

I use the docker-compose example from https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/mariadb-cron-redis/fpm , so I have two different networks:

  • on the frontend network, I have the reverse proxy + Collabora
  • on the backend network, I have Nextcloud + the DB
  • the Nextcloud nginx webserver sits on both networks, so it can pass requests from the reverse proxy to Nextcloud
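To double-check which containers end up on which network, membership can be listed from the host (a sketch; `myproject` stands in for the actual compose project name):

```shell
# List the containers attached to each compose-created bridge network
# ("myproject" is a placeholder for the real compose project name)
docker network inspect myproject_frontend --format '{{range .Containers}}{{.Name}} {{end}}'
docker network inspect myproject_backend  --format '{{range .Containers}}{{.Name}} {{end}}'
```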

Here is a schema of my current setup (diagram not reproduced here):

Everything related to Nextcloud itself works fine. Problems arise when I want Collabora and Nextcloud to communicate: iptables drops all the packets sent between Collabora or Nextcloud and the reverse proxy.

I log every packet that iptables drops. Below are examples of dropped packets in different scenarios.
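For context, a log prefix like the "IPTables-Dropped: " seen below typically comes from a log-then-drop chain along these lines (a sketch, not my exact rules; the chain name is illustrative):

```shell
# Log-then-drop chain producing an "IPTables-Dropped: " prefix in the kernel log (sketch)
iptables -N LOG_AND_DROP
iptables -A LOG_AND_DROP -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
iptables -A LOG_AND_DROP -j DROP
# Jump to it from the chains whose drops should be visible, e.g.:
iptables -A FORWARD -j LOG_AND_DROP
```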

During a test connection from Nextcloud’s Collabora application panel:

The Nextcloud container (172.18.0.5) tries to reach the external VPS IP address = dropped:

IPTables-Dropped: IN=br-network-backend-id OUT= PHYSIN=veth-some-id MAC=02:42... SRC=172.18.0.5 DST=my-vps-ip4-adress LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=7184 DF PROTO=TCP SPT=42554 DPT=80 WINDOW=64240 RES=0x00 SYN URGP=0 

Let’s say I allow this connection in iptables. Then iptables blocks a connection between the frontend gateway (172.19.0.1) and the reverse proxy server (172.19.0.2):

IPTables-Dropped: IN= OUT=br-network-frontend-id SRC=172.19.0.1 DST=172.19.0.2 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=23334 DF PROTO=TCP SPT=35114 DPT=80 WINDOW=64240 RES=0x00 SYN URGP=0

I then allow this connection as well, and the test connection works fine.

Opening an .ods document from Nextcloud:

The Collabora container (172.19.0.3) tries to reach the external VPS IP address = dropped:

IPTables-Dropped: IN=br-network-frontend-id OUT= PHYSIN=veth-some-id MAC=02:42.... SRC=172.19.0.3 DST=my-vps-ip4-adress LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=19007 DF PROTO=TCP SPT=43798 DPT=443 WINDOW=64240 RES=0x00 SYN URGP=0

From my point of view, it looks like neither Nextcloud nor Collabora is going through the nginx reverse proxy; instead they try to reach the VPS external IP address directly, which my custom Docker iptables rules do not allow.

I can of course add custom iptables rules for each of the dropped packets, but that feels wrong, and it should work without them. I don’t want to have to deal with custom rules. I want Collabora and Nextcloud to be able to communicate with each other behind a reverse proxy.

I’m certainly doing something wrong. Can someone give me a hint please? Any help is very appreciated :slight_smile:


Here are my config files :

docker-compose.yaml :

version: '3'    
    
services:    
  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - ./proxy/vhost.d:/etc/nginx/vhost.d:ro
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - frontend

  nextcloud:    
    image: nextcloud:19.0.1-fpm    
    restart: always    
    volumes:    
      - code:/var/www/html    
      - data:/var/www/html/data    
    env_file:    
      - .env    
    depends_on:    
      - db
      - redis    
    networks:
      - backend
    
  webserver:    
    build: ./web    
    restart: always    
    volumes:    
      - code:/var/www/html:ro    
    environment:    
      - VIRTUAL_HOST=cloud.domain.tld
      - LETSENCRYPT_HOST=cloud.domain.tld
    depends_on:
      - nextcloud
    networks:
      - frontend
      - backend

  collabora:
    image: collabora/code
    restart: always
    expose:
      - 9980
    cap_add:
      - MKNOD
    environment:
      - "domain=cloud\\.domain\\.tld"
      - "VIRTUAL_HOST=office.domain.tld"
      - "VIRTUAL_PORT=9980"
      - "LETSENCRYPT_HOST=office.domain.tld"
    networks:
      - frontend
    volumes:
      - office:/etc/loolwsd # to be able to edit loolwsd.xml file more easily

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - frontend
    depends_on:
      - proxy

  db:    
    image: mariadb    
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW    
    restart: always    
    volumes:    
      - db:/var/lib/mysql    
    env_file:    
      - .env    
    networks:
      - backend
    
  redis:    
    image: redis:alpine    
    restart: always    
    networks:
      - backend

  cron:
    image: nextcloud:19.0.1-fpm
    restart: always
    volumes:
      - code:/var/www/html
      - data:/var/www/html/data    
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
    networks:
      - backend

volumes:
  db:
  code:
  data:
  certs:
  html:

networks:
  frontend:
  backend:

Tree of working dir :

.
├── docker-compose.yml
├── office
│   └── loolwsd.xml
├── proxy
│   ├── Dockerfile
│   ├── limit_req.conf
│   ├── uploadsize.conf
│   └── vhost.d
│       ├── default
│       └── office.domain.tld
└── web
    ├── Dockerfile
    └── nginx.conf

office.domain.tld nginx vhost file :

## Start of configuration add by letsencrypt container    
location ^~ /.well-known/acme-challenge/ {    
    auth_basic off;    
    auth_request off;    
    allow all;    
    root /usr/share/nginx/html;    
    try_files $uri =404;    
    break;    
}    
## End of configuration add by letsencrypt container    
# static files    
location ^~ /loleaflet {    
    proxy_pass http://office.domain.tld;    
    proxy_set_header Host $http_host;    
}    
    
# WOPI discovery URL    
location ^~ /hosting/discovery {    
    proxy_pass http://office.domain.tld;    
    proxy_set_header Host $http_host;    
}    
    
# Capabilities    
location ^~ /hosting/capabilities {    
    proxy_pass http://office.domain.tld;    
    proxy_set_header Host $http_host;    
}    
    
# main websocket    
location ~ ^/lool/(.*)/ws$ {    
    proxy_pass http://office.domain.tld;    
    proxy_set_header Upgrade $http_upgrade;    
    proxy_set_header Connection "Upgrade";    
    proxy_set_header Host $http_host;    
    proxy_read_timeout 36000s;    
}    
# download, presentation and image upload    
location ~ ^/lool {    
    proxy_pass http://office.domain.tld;
    proxy_set_header Host $http_host;
}

# Admin Console websocket
location ^~ /lool/adminws {
    proxy_pass http://office.domain.tld;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
}

Nginx for Nextcloud conf file is same as here : https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb-cron-redis/fpm/web/nginx.conf

Reverse proxy uploadsize.conf :

client_max_body_size 10G;
proxy_request_buffering off; 

Reverse proxy limit_req.conf :

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

loolwsd.xml (relevant parts) :

<server_name desc="External hostname:port of the server running loolwsd. If empty, it's derived from the request (please set it if this doesn't work). Must be specified when behind a reverse-proxy or when the hostname is not reachable directly." type="string" default="">office.domain.tld</server_name>

<file_server_root_path desc="Path to the directory that should be considered root for the file server. This should be the directory containing loleaflet." type="path" relative="true" default="loleaflet/../"></file_server_root_path>

<net desc="Network settings">
      <!-- On systems where localhost resolves to IPv6 [::1] address first, when net.proto is all and net.listen is loopback, loolwsd unexpectedly listens on [::1] only.
           You need to change net.proto to IPv4, if you want to use 127.0.0.1. -->
      <proto type="string" default="all" desc="Protocol to use IPv4, IPv6 or all for both">all</proto>
      <listen type="string" default="any" desc="Listen address that loolwsd binds to. Can be 'any' or 'loopback'.">any</listen>
      <service_root type="path" default="" desc="Prefix all the pages, websockets, etc. with this path."></service_root>
      <proxy_prefix type="bool" default="false" desc="Enable a ProxyPrefix to be passed int through which to redirect requests"></proxy_prefix>
      <post_allow desc="Allow/deny client IP address for POST(REST)." allow="true">
        <host desc="The IPv4 private 192.168 block as plain IPv4 dotted decimal addresses.">192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
        <host desc="Ditto, but as IPv4-mapped IPv6 addresses">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
        <host desc="The IPv4 loopback (localhost) address.">127\.0\.0\.1</host>
        <host desc="Ditto, but as IPv4-mapped IPv6 address">::ffff:127\.0\.0\.1</host>
        <host desc="The IPv6 loopback (localhost) address.">::1</host>
        <host desc="The IPv4 private 172.17.0.0/16 subnet (Docker).">172\.17\.[0-9]{1,3}\.[0-9]{1,3}</host>
        <host desc="Ditto, but as IPv4-mapped IPv6 addresses">::ffff:172\.17\.[0-9]{1,3}\.[0-9]{1,3}</host>
      </post_allow>
      <frame_ancestors desc="Specify who is allowed to embed the LO Online iframe (loolwsd and WOPI host are always allowed). Separate multiple hosts by space."></frame_ancestors>
</net>

<ssl desc="SSL settings">
        <enable type="bool" desc="Controls whether SSL encryption between browser and loolwsd is enabled (do not disable for production deployment). If default is false, must first be compiled with SSL support to enable." default="true">false</enable>
        <termination desc="Connection via proxy where loolwsd acts as working via https, but actually uses http." type="bool" default="true">true</termination>
        <cert_file_path desc="Path to the cert file" relative="false">/etc/loolwsd/cert.pem</cert_file_path>
        <key_file_path desc="Path to the key file" relative="false">/etc/loolwsd/key.pem</key_file_path>
        <ca_file_path desc="Path to the ca file" relative="false">/etc/loolwsd/ca-chain.cert.pem</ca_file_path>
        <cipher_list desc="List of OpenSSL ciphers to accept" default="ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"></cipher_list>
        <hpkp desc="Enable HTTP Public key pinning" enable="false" report_only="false">
            <max_age desc="HPKP's max-age directive - time in seconds browser should remember the pins" enable="true">1000</max_age>
            <report_uri desc="HPKP's report-uri directive - pin validation failure are reported at this URL" enable="false"></report_uri>
            <pins desc="Base64 encoded SPKI fingerprints of keys to be pinned">
            <pin></pin>
            </pins>
        </hpkp>
</ssl>

<storage desc="Backend storage">
        <filesystem allow="false" />
        <wopi desc="Allow/deny wopi storage. Mutually exclusive with webdav." allow="true">
            <host desc="Regex pattern of hostname to allow or deny." allow="true">cloud.domain.tld</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="true">10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="true">172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="true">172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="true">172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="true">192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
            <host desc="Regex pattern of hostname to allow or deny." allow="false">192\.168\.1\.1</host>
            <max_file_size desc="Maximum document size in bytes to load. 0 for unlimited." type="uint">0</max_file_size>
            <reuse_cookies desc="When enabled, cookies from the browser will be captured and set on WOPI requests." type="bool" default="false">false</reuse_cookies>
            <locking desc="Locking settings">
                <refresh desc="How frequently we should re-acquire a lock with the storage server, in seconds (default 15 mins) or 0 for no refresh" type="int" default="900">900</refresh>
            </locking>
        </wopi>
        <webdav desc="Allow/deny webdav storage. Mutually exclusive with wopi." allow="false">
            <host desc="Hostname to allow" allow="false">cloud.domain.tld</host>
        </webdav>
        <ssl desc="SSL settings">
            <as_scheme type="bool" default="true" desc="When set we exclusively use the WOPI URI's scheme to enable SSL for storage">true</as_scheme>
            <enable type="bool" desc="If as_scheme is false or not set, this can be set to force SSL encryption between storage and loolwsd. When empty this defaults to following the ssl.enable setting"></enable>
            <cert_file_path desc="Path to the cert file" relative="false"></cert_file_path>
            <key_file_path desc="Path to the key file" relative="false"></key_file_path>
            <ca_file_path desc="Path to the ca file. If this is not empty, then SSL verification will be strict, otherwise cert of storage (WOPI-like host) will not be verified." relative="false"></ca_file_path>
            <cipher_list desc="List of OpenSSL ciphers to accept. If empty the defaults are used. These can be overriden only if absolutely needed."></cipher_list>
        </ssl>
</storage>

Docker container networking runs through a sort of virtual interface and can be subject to your iptables rules even for traffic that doesn’t leave the host.

What you may have here is a DNS issue. Is this VPS IP returned on a lookup of the address? If so, you could use the extra_hosts directive in Docker to override it with the local IP you want it to use (it adds a hosts file entry within the container).
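One way to answer that question empirically is to check the resolution from inside the container (a sketch; `nextcloud` is the service name from the compose file above):

```shell
# What does the domain resolve to from inside the Nextcloud container?
# getent reads /etc/hosts first, so an extra_hosts override shows up here too.
docker-compose exec nextcloud getent hosts cloud.domain.tld
```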

This should point to 172.19.0.3 (or the name of the collabora container), shouldn’t it?

Thank you for your response and sorry for late feedback.

Good point. I don’t know if the IP is returned on a DNS lookup (I guess so), but I tried adding the extra_hosts directive to all my containers, especially Nextcloud and Collabora, for example:

extra_hosts:
    - "cloud.domain.tld:11.22.33.44"
    - "office.domain.tld:11.22.33.44"

but it does not change anything. Iptables is still dropping my packets.

Thank you for that suggestion. I tried with 172.19.0.3 and 172.19.0.3:9980, but still no luck. Same goes for the name of the container.

Any other ideas? :slight_smile:

For information, here are my iptables rules; if you see anything that could cause this behavior, let me know:

# drop default    
iptables -P INPUT DROP    
iptables -P OUTPUT DROP    
iptables -P FORWARD DROP    
    
# loopback    
iptables -A OUTPUT -o lo -j ACCEPT    
iptables -A INPUT -i lo -j ACCEPT    
    
# SSH    
iptables -A OUTPUT -o <interfacename> -p tcp --sport <randomport> -j ACCEPT    
iptables -A INPUT -i <interfacename> -p tcp --dport <randomport> -j ACCEPT    
    
# authorize local origin connection with outside    
iptables -A INPUT  -i <interfacename> -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT    
iptables -A OUTPUT -o <interfacename> -m conntrack ! --ctstate INVALID -j ACCEPT     

And here is the reverse proxy nginx default.conf file generated by jwilder/nginx-proxy container that I use :

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
                ssl_protocols TLSv1.2 TLSv1.3;
                ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
                ssl_prefer_server_ciphers off;
resolver 127.0.0.11;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)

proxy_set_header Proxy "";

server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}

server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 443 ssl http2;
        access_log /var/log/nginx/access.log vhost;
        return 503;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;
}

# cloud.domain.tld
upstream cloud.domain.tld {
                                # Cannot connect to network of this container
                                server 127.0.0.1 down;
                                ## Can be connected with "proxy-front" network
                        # web_1
                        server 172.19.0.5:80;
}
server {
        server_name cloud.domain.tld;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location /.well-known/acme-challenge/ {
                auth_basic off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}

server {
        server_name cloud.domain.tld;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/cloud.domain.tld.crt;
        ssl_certificate_key /etc/nginx/certs/cloud.domain.tld.key;
        ssl_dhparam /etc/nginx/certs/cloud.domain.tld.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/cloud.domain.tld.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://cloud.domain.tld;
        }
}

# office.domain.tld
upstream office.domain.tld {
                        ## Can be connected with "proxy-front" network
                        # collabora
                        server 172.19.0.3:9980;
}
server {
        server_name office.domain.tld;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location /.well-known/acme-challenge/ {
                auth_basic off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}

server {
        server_name office.domain.tld;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/office.domain.tld.crt;
        ssl_certificate_key /etc/nginx/certs/office.domain.tld.key;
        ssl_dhparam /etc/nginx/certs/office.domain.tld.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/office.domain.tld.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/office.domain.tld;
        location / {
                proxy_pass http://office.domain.tld;
        }
}

No guessing with DNS: the name resolves either to the private address, the public address, something else, or nothing at all. If your problem is the containers trying to reach the public IP, then something is misconfigured, either in DNS or in the software.

Also, just to confirm, you put the local IP of the reverse proxy in the extra_hosts, correct?

@armymen do you need to put collabora in the frontend network? Do you need an extra domain for collabora?

this playbook will set up nextcloud in docker + collabora:

instead of the nginx proxy i use traefik (it handles TLS out of the box), but that doesn’t matter.

i run the collabora container on the backend network as well. not on frontend.

the collabora url in nextcloud is set like this:

shell: '{{ docker_occ_cmd }} config:app:set richdocuments wopi_url --value https://{{ nextcloud_server_fqdn }}:443'

as you can see it’s the nextcloud fqdn. there is no need for an office/collabora fqdn.
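In plain docker-compose terms, that ansible task corresponds to something like this (a sketch; the service name and FQDN are taken from the compose file earlier in the thread):

```shell
# Point the richdocuments app at the Nextcloud FQDN itself, no separate office domain
docker-compose exec -u www-data nextcloud php occ \
    config:app:set richdocuments wopi_url --value "https://cloud.domain.tld:443"
```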

the nginx web server (not the proxy) handles all traffic to collabora in this block

the fancy nginx.headers for traefik are defined here:

you could run the playbook in a vserver and compare the settings with yours.


Well, I don’t have enough knowledge to know when an external DNS lookup is made and when it isn’t. Do you have any tools to recommend? Should I use something like tcpdump? I see here that I could use a combination of tcpdump + wireshark: https://unix.stackexchange.com/questions/43716/how-to-log-all-my-dns-queries/43726
I’ll give it a try.
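For what it’s worth, a minimal capture along these lines shows DNS queries leaving a given bridge (the interface name is a placeholder, like elsewhere in this thread):

```shell
# Watch DNS queries crossing the backend bridge (replace with the real br-... interface)
tcpdump -ni <br-id-of-backend-network> udp port 53
```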

Anyway, this (partially) solved my problem :fireworks: !

Indeed, I was putting the external IP of the VPS in my extra_hosts directives, not the local IP of the reverse proxy. I switched to the local IPs, and iptables no longer blocks communication between Nextcloud and Collabora. Hurray, and thank you! :slight_smile:

The connections that are no longer blocked are the ones after the initial setup: opening, editing and saving a document. Which is great, and the most important part.


However, iptables still blocks connections during the initial setup. It looks like the DOCKER-ISOLATION-STAGE-2 rules are blocking direct communication between my two networks:

Chain DOCKER-ISOLATION-STAGE-2 (3 references)
  105  6300 DROP       all  --  any    <br-id-of-frontend-network> anywhere             anywhere 

Which is fine and expected. However, I don’t understand why I have “cross-network calls” at all, because my nginx proxy logs don’t show anything like this when making the test connection. Here are the logs at that very moment:

office.domain.tld 172.19.0.1 - - [26/Aug/2020:20:23:32 +0000] "GET /hosting/discovery HTTP/1.1" 301 169 "-" "Nextcloud Server Crawler"                          
office.domain.tld 172.19.0.1 - - [26/Aug/2020:20:23:32 +0000] "GET /hosting/discovery HTTP/1.1" 200 18222 "-" "Nextcloud Server Crawler"                        
office.domain.tld 172.19.0.1 - - [26/Aug/2020:20:23:32 +0000] "GET /hosting/capabilities HTTP/1.1" 301 169 "-" "Nextcloud Server Crawler"                       
office.domain.tld 172.19.0.1 - - [26/Aug/2020:20:23:32 +0000] "GET /hosting/capabilities HTTP/1.1" 200 241 "-" "Nextcloud Server Crawler"                       
cloud.domain.tld <my-home-ip> - - [26/Aug/2020:20:23:32 +0000] "POST /index.php/apps/richdocuments/ajax/admin.php HTTP/2.0" 200 47 "-" "Mozilla/5.0 (X11; Ubuntu;
 Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" 

I had to allow cross-network connections in iptables to make it work, which I don’t like: it violates the isolation principle. Do you have an idea how I could avoid this?

These are the rules I had to add:

iptables -I DOCKER-USER -i br-########1 -o br-########2 -j ACCEPT
iptables -I DOCKER-USER -i br-########2 -o br-########1 -j ACCEPT

taken from https://stackoverflow.com/a/51373066/10767428
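A slightly tighter variant of those rules limits the hole to the ports actually involved, plus return traffic (a sketch, not tested against this exact setup):

```shell
# Allow only HTTP(S) from the backend bridge to the frontend bridge...
iptables -I DOCKER-USER -i br-########1 -o br-########2 -p tcp -m multiport --dports 80,443 -j ACCEPT
# ...and only established/related traffic back the other way
iptables -I DOCKER-USER -i br-########2 -o br-########1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```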


I guess not. I don’t exactly remember why I put it in the frontend network, but, sure, putting it in the backend one sounds better, since it has nothing to do with proxy functionality. I’ll give it a try. Plus, it could solve the last problem I mentioned above.

Well, I set up this extra domain because the official Collabora docs recommend doing so, and everybody on the Internet seemed to do it that way. But indeed, if I only intend to access the Collabora container from Nextcloud (which I do, for now), I don’t see the use of having an extra domain name. And it could certainly help with DNS lookup issues.

Thank you for sharing all your settings and your playbook. First, I will try to change my settings to match yours, i.e.:

  • remove the collabora domain name and use the container name in its place
  • put the collabora container in the backend network
  • configure the regular nginx server to do the actual proxying to collabora
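As a quick experiment before rewriting the compose file, the second point can be tried live on the running container (names are placeholders for the real compose network/container names):

```shell
# Move collabora from the frontend network to the backend one without recreating it
docker network disconnect <project>_frontend <collabora-container>
docker network connect    <project>_backend  <collabora-container>
```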

Is that the correct general idea of what you propose?

If it does not work, I’ll try the complete playbook in a separate VM and compare settings, as you suggest.

Thanks for the help

yes.

that should give you a running reference system.

and then you could apply your iptables rules to see if that breaks it.

You need it because, unless I’m mistaken, the client makes a direct connection to Collabora. It needs to be proxied as well.

If you have containers in two different Docker networks that need to communicate, they have to do so by connecting to exposed ports on the host. You shouldn’t allow direct communication between Docker networks.