Unable to get Talk video calls working, external TURN server, Cloudflare, nginx reverse proxy

Some or all of the below information will be requested if it isn’t supplied; for fastest response please provide as much as you can. :heart:

The Basics

  • Nextcloud Server version (e.g., 29.x.x):
    • 31.0.5
  • Operating system and version (e.g., Ubuntu 24.04):
    • Ubuntu 24.04.2
  • Web server and version (e.g., Apache 2.4.25):
    • 2.4.63
  • Reverse proxy and version (e.g., nginx 1.27.2):
    • nginx 1.26.3 (linuxserver.io swag container)
  • PHP version (e.g., 8.3):
    • 8.3.21
  • Is this the first time you’ve seen this error? (Yes / No):
    • yes
  • When did this problem seem to first start?
    • since install
  • Installation method (e.g. AIO, NCP, Bare Metal/Archive, etc.)
    • AIO
  • Are you using Cloudflare, mod_security, or similar? (Yes / No)
    • Cloudflare for DNS hosting, proxying turned off

Summary of the issue you are facing:

Cross-posting my discussion from GitHub:

I’ll try to provide as much info upfront as possible. I’ve searched, followed the suggestions in the Notes on Cloudflare docs, and for the life of me I cannot get this to work!

domain redacted to DOMAIN.COM

Host OS: Ubuntu 24.04 (VM on Proxmox 8.4.1)
mastercontainer compose file:

---
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: unless-stopped
    container_name: nextcloud-aio-mastercontainer # do not change
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    env_file:
      - .env
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=0.0.0.0
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer

nginx server block:

# NEXTCLOUD
server {
	listen 80;
	listen [::]:80;

	if ($scheme = "http") {
		return 301 https://$host$request_uri;
	}

	listen 443 ssl;
	listen [::]:443 ssl;
	http2 on;

	server_name cloud.DOMAIN.COM;

	include /config/nginx/ssl.conf;

	location / {
		proxy_pass http://192.168.1.33:32323$request_uri;

		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Port $server_port;
		proxy_set_header X-Forwarded-Scheme $scheme;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header Accept-Encoding "";
		proxy_set_header Host $host;

		client_body_buffer_size 512k;
		proxy_read_timeout 86400s;
		client_max_body_size 0;

		# Websocket
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection $connection_upgrade;
	}

	location /.well-known/carddav {
	    return 301 $scheme://$host/remote.php/dav;
	}

	location /.well-known/caldav {
	    return 301 $scheme://$host/remote.php/dav;
	}

	location ^~ /.well-known {
	    return 301 $scheme://$host/index.php$uri;
	}
}
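
One note on the websocket part of that block: $connection_upgrade is not a built-in nginx variable. SWAG should already define it in its stock nginx.conf, but if this server block is ever reused on a plain nginx install, a map along these lines would be needed in the http context (a sketch, not taken from my actual config):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}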

TURN server (coturn) config:

# TURN server realm and name
realm=turn.DOMAIN.COM
server-name=turn.DOMAIN.COM

# IPs the TURN server listens to
# listening-ip=0.0.0.0

# External IP Address of the TURN server
external-ip=<public IP>

# Main listening port
listening-port=3478
# TLS listening port
tls-listening-port=5349

# Other ports for TURNing
min-port=10000
max-port=20000

# User fingerprint in TURN message
fingerprint

# Log file path
log-file=/var/log/coturn/turnserver.log

# Enable verbose logging
verbose=4

# Auth
use-auth-secret
static-auth-secret=<REDACTED>

# user=test:testpass123
# lt-cred-mech

# From https://nextcloud-talk.readthedocs.io/en/latest/coturn/#3-configure-turnserverconf-for-usage-with-nextcloud-talk
total-quota=0
bps-capacity=0
stale-nonce
# no-multicast-peers

# SSL Certs
cert=/etc/certs/fullchain.pem
pkey=/etc/certs/privkey.pem

# Force TLS v1.3
no-sslv3
no-tlsv1
no-tlsv1_1
no-tlsv1_2
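
For reference, with use-auth-secret there is no fixed TURN user; Talk derives short-lived credentials from the shared secret using the TURN REST API scheme: the username is an expiry timestamp and the password is base64(HMAC-SHA1(secret, username)). A minimal sketch for generating a test credential pair by hand, assuming the same value as static-auth-secret above; the pair can then be tried against the server with a TURN test client such as coturn’s turnutils_uclient or a Trickle ICE page:

import base64
import hashlib
import hmac
import time

# Assumed to match static-auth-secret in turnserver.conf (placeholder value here)
SECRET = "REDACTED"
TTL = 3600  # how long the generated credentials stay valid, in seconds

# TURN REST API scheme: username = expiry timestamp,
# password = base64(HMAC-SHA1(secret, username))
username = str(int(time.time()) + TTL)
password = base64.b64encode(
    hmac.new(SECRET.encode(), username.encode(), hashlib.sha1).digest()
).decode()

print("username:", username)
print("password:", password)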

Logs:

From nextcloud-talk container when attempting a call:

[ERR] [plugins/janus_videoroom.c:janus_videoroom_handler:10176] No such feed (1)
mcu_janus_subscriber.go:222: Publisher uolD09p4ZFr3vvYx9WnE2-5YGvjZnWGU4o3NfifUGIZ8PT1nSWpTQkg0RHM2aWYzby1jaHFGYnE4WUZNOFVLdlhiLWZlR0stZWFnQlZXa0MxVWtxNjB4cUNXMFpBfDAzNDc5Njk0NzE= not sending yet for video, wait and retry to join room 8297175936015350 as subscriber
mcu_janus_client.go:165: Started listener &{{janus.plugin.videoroom map[room:2325819388190737 started:ok videoroom:event]} map[] 483524709210561 5813768219239170}
[WARN] [6690154350066132] ICE failed for component 1 in stream 1, but let's give it some time... (trickle pending, answer received, alert not set)
[WARN] [5813768219239170] ICE failed for component 1 in stream 1, but let's give it some time... (trickle pending, answer received, alert not set)
hub.go:2678: Could not send MCU message &{Type:requestoffer Sid: RoomType:video Payload:map[] Bitrate:0 AudioCodec: VideoCodec: VP9Profile: H264Profile: offerSdp:<nil> answerSdp:<nil>} for session KGgffBvoUVaUTxll0xDFth1hBTKgaF1fYJUaub09tdx8SGk3SnQ5YlI4cEhHVFFEeXI2Zy1FTTZkWkQ4MkFZSDB3cmlXNVlhdE9vZUstSTUwalpvZnVPWFN8NjI0Nzk2OTQ3MQ== to uolD09p4ZFr3vvYx9WnE2-5YGvjZnWGU4o3NfifUGIZ8PT1nSWpTQkg0RHM2aWYzby1jaHFGYnE4WUZNOFVLdlhiLWZlR0stZWFnQlZXa0MxVWtxNjB4cUNXMFpBfDAzNDc5Njk0NzE=: context deadline exceeded
[ERR] [plugins/janus_videoroom.c:janus_videoroom_handler:10176] No such feed (1)
mcu_janus_subscriber.go:222: Publisher uolD09p4ZFr3vvYx9WnE2-5YGvjZnWGU4o3NfifUGIZ8PT1nSWpTQkg0RHM2aWYzby1jaHFGYnE4WUZNOFVLdlhiLWZlR0stZWFnQlZXa0MxVWtxNjB4cUNXMFpBfDAzNDc5Njk0NzE= not sending yet for video, wait and retry to join room 8297175936015350 as subscriber
[WARN] [6690154350066132] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)
[WARN] [5813768219239170] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)
[WARN] [6690154350066132] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)
[WARN] [5813768219239170] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)
[WARN] [6690154350066132] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)
[WARN] [5813768219239170] ICE failed for component 1 in stream 1, but we're still waiting for some info so we don't care... (trickle pending, answer received, alert not set)

The “Could not send MCU message ... context deadline exceeded” line stands out to me.

Calls where both users are on the same LAN as the NC server do work (and I do see they’re using the TURN server). But when I have one user on the LAN and one on the WAN, they never connect; the call just endlessly tries to connect and reconnect.

For the Cloudflare side, I don’t use a Tunnel, and I have proxying turned off for both the turn. and cloud. subdomains.

I can also confirm that the TURN server works with Matrix/Element for video calls with one user on the LAN and one on the WAN (and that’s still behind the CF proxy).

This feels like an issue with my reverse proxy and Apache, but I can’t find any logs pointing to a problem.

NC can reach the HPB just fine: running curl -vvv https://$NC_DOMAIN:443/standalone-signaling/api/v1/welcome correctly opens a connection to my domain and goes through my nginx machine.

UFW is disabled on all machines involved.
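
A quick way to sanity-check that 3478/udp is reachable from the outside at all, independent of Nextcloud, is to send a bare STUN Binding Request from a WAN-side machine and see whether coturn answers. A minimal sketch, assuming the hostname and listening port from the coturn config above:

import os
import socket

# Assumed from the coturn config above; adjust as needed
HOST = "turn.DOMAIN.COM"
PORT = 3478

# STUN Binding Request: type 0x0001, length 0, magic cookie 0x2112A442, 12-byte transaction ID
request = b"\x00\x01\x00\x00" + b"\x21\x12\xa4\x42" + os.urandom(12)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request, (HOST, PORT))

try:
    data, addr = sock.recvfrom(2048)
    # 0x0101 = Binding Success Response (carries the XOR-MAPPED-ADDRESS)
    print("Got %d bytes from %s, message type 0x%02x%02x" % (len(data), addr, data[0], data[1]))
except socket.timeout:
    print("No response: port closed, not forwarded, or blocked")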

Hi, I wonder why you set up your own TURN server? AIO comes with a TURN server included. You only need to open port 3478 TCP and UDP on the server, and then it should usually work out of the box…

My server is hosted locally at my house, so it’s behind a NAT. I had always heard that setting up a TURN server behind a NAT never works (I seem to remember that from Matrix Synapse), but I can try enabling the local AIO one, port-forward 3478 TCP and UDP to it through my router, and see what behavior I get!

Got it working!!

Narrowed the issue down to some of the defaults OPNsense uses for port forwarding, mainly NAT reflection (hairpin NAT, which is needed when LAN clients reach the server via its public IP). In Firewall > Settings > Advanced, I enabled “Reflection for port forwards” and “Automatic outbound NAT for Reflection”, and everything is working, even with the Cloudflare proxy enabled!

Still curious why the external coturn server wouldn’t work with NC but would with Matrix, but that’s mostly irrelevant now.

Hopefully the hint about NAT reflection in OpnSense can help someone else out in the future.

Slight correction: cloud.DOMAIN.COM still needs the CF proxy turned off.
