Accessing nextcloud via local IP and external domain

I’m intending to run an Nginx reverse proxy that points to my Nextcloud server. The Nginx server is only listening on port 443 (port 80 is blocked at the firewall). It has a valid SSL cert, so all traffic is HTTPS.

Nextcloud is installed as a snap on Ubuntu 18.04.

The problem:

I managed to get it working, but not without difficulty. I guess that’s how I learn. The problem is, I can no longer access Nextcloud via the local IP. When I navigate to the local IP, it redirects to the FQDN. I believe this is due to some overwrite settings, but I also think those settings were necessary to get Nextcloud working with nginx. It’s working at the moment with NAT reflection settings, but it seems very unnecessary / inefficient to send local traffic to my WAN and then through nginx first.

I want to be able to access Nextcloud directly on my local network via its local IP (or local hostname), and also be able to access Nextcloud externally with SSL-encrypted traffic on a domain I own.

Details:

Initially I set it up to point to Nextcloud via the proxy_pass directive, and was getting a bad request error. To be honest, there were several errors that I tried to eliminate via a few guides and forum posts. Here are the steps I took to get it working.

I ran a snap command to disable https

I added a trusted domain

sudo snap run nextcloud.occ config:system:set trusted_domains 1 --value=your.fancy.domain

It loaded for a minute but then threw an error. So I followed the main snap page instructions to overwrite the host

sudo nextcloud.occ config:system:set overwritehost --value="custom.example.com"

I was having a hard time with the android app after this (getting a malformed server configuration error).

I ended up scrapping the Nginx config and following a guide, which also involved changing some settings in the Nextcloud php config.

Here is my nginx config:

#after hours of abuse got this one working from https://breuer.dev/tutorial/Setup-NextCloud-FrontEnd-Nginx-SSL-Backend-Apache2 
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/custom.example.com/fullchain.pem; 
    ssl_certificate_key /etc/letsencrypt/live/custom.example.com/privkey.pem; 
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 
 
    server_name "custom.example.com";

    client_max_body_size 0;
    underscores_in_headers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 64;

        proxy_buffering off;
        proxy_redirect off;
        proxy_max_temp_file_size 0;
        proxy_pass http://192.168.20.15;
     }
    location = /.well-known/carddav { return 301 /remote.php/dav/; }
    location = /.well-known/caldav  { return 301 /remote.php/dav/; }
}
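
One option for LAN access without NAT reflection is to let this same server block answer on a local name as well. A minimal sketch, assuming a hypothetical internal name `nextcloud.lan` that your local DNS points at this proxy (note the Let’s Encrypt cert only covers the public FQDN, so browsers will warn on the local name):

```nginx
# Hypothetical sketch: answer on a LAN-only name too. "nextcloud.lan"
# is an assumed internal hostname that local DNS must resolve to this
# proxy; it is not covered by the Let's Encrypt certificate above.
server_name custom.example.com nextcloud.lan;
```

With overwritehost set in config.php, Nextcloud will still redirect the local name to the public FQDN, so this only helps once that setting is addressed.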


Here is the only thing I added / changed in the nextcloud php config:

'trusted_domains' => 
  array (
    4 => 'localhost',
    1 => 'custom.domain.com',
    2 => '192.168.20.15',
    3 => 'nextcloud',
  ),

  'overwritehost' => 'custom.domain.com',
  'overwriteprotocol' => 'https',
  'overwritewebroot' => '/',
  'overwrite.cli.url' => 'https://custom.domain.com/',
  'htaccess.RewriteBase' => '/',
  'trusted_proxies' =>
  array (
    0 => '192.166.6.2',
  ),

Guess I’m stuck here now. I’m pretty sure the local IP is redirecting to the external domain because I’ve told it to… but I couldn’t get the external domain working without this. Any help would be greatly appreciated.

Hello,

Please wait for others to reply.

I am going a little off-topic. I have a similar setup to yours. Not exactly the same, but below is how I got it working:

NextCloud (Ubuntu 22 + Snap) and Nginx Proxy Manager (Ubuntu + Docker) are running on their own separate VMs (network in bridge mode, so each gets its own NAT IP directly from the router).

NextCloud is set up with a self-signed SSL cert; Nginx Proxy Manager terminates the Let’s Encrypt SSL.

Nginx binds the domain to the NextCloud VM’s router-assigned NAT IP.

Thanks.

You are right that it is less efficient, and it can also be slower, but that depends mainly on how well your router handles NAT reflection. The more elegant solution would be to use DNS host overrides on a local DNS server such as Pi-hole (dnsmasq) or Unbound that resolves the URL to the internal IP address of your Nextcloud server, or rather of your NGINX proxy.
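
The host-override approach can be sketched in one dnsmasq line (Pi-hole reads these from /etc/dnsmasq.d/). The proxy IP below is an assumption for illustration; substitute the LAN address of your NGINX box:

```
# /etc/dnsmasq.d/10-nextcloud.conf — resolve the public name to the
# LAN address of the reverse proxy (assumed 192.168.20.5 here), so
# internal clients reach it directly and skip NAT reflection entirely
address=/custom.example.com/192.168.20.5
```

Internal clients then use the same https://custom.example.com URL as external ones, the Let’s Encrypt certificate still matches, and overwritehost no longer causes a redirect mismatch.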

Using the IP address, however, is the worst of all options imho and comes with several drawbacks:

  • Mobile apps need to use different URLs depending on whether they are connected to the internal or an external network
  • If you create public shares and want to send the link to someone outside your local network, you have to manually change the URL
  • You can only use self-signed certificates if you’re using the IP address

Take a look at the post referred to above; you will find a best-practice solution using split-brain DNS there:

I understand and appreciate the suggestions to use split DNS. I guess I’m suspecting that’s not the root issue I have.

Just to be sure, I tried adding a host override on pfSense and disabling NAT reflection, and I get a “connection refused” error.

I think the problem may have to do with the “overwritehost” directive in Nextcloud… which seemed to be required to get Nextcloud to listen to nginx over HTTPS.

So maybe I need to take a step back and focus on the root problem there?

I have another webserver that worked like this pretty much out of the box:

I could connect to the local hostname via web browser on http. Let’s say http://localhostname

I set up an Nginx proxy for external access, and pointed it to the local webserver.

Then I could access it via nginx at https://external.domain.com or directly via http://localhostname (bypassing nginx and not using SSL).

Currently, if I try the same with Nextcloud, I navigate to the Nextcloud server at http://localhostname, and it redirects me to https://external.domain.com.

So maybe that is the configuration changes in config.php doing that? And if so, then I need to figure out an alternate way to get past the “bad request” errors I was getting with the default php config.

Does this all sound sane?

Sorry, I don’t use a reverse proxy configuration for my Nextcloud and therefore I’m not sure what exactly causes the issue. But I don’t see any compelling reason why you would bypass the reverse proxy in the first place. Your goal, in my opinion, should be to get split-brain DNS working properly. Once you’ve got that working, there is no reason to bypass the reverse proxy anymore… But maybe I’m missing something here…

Hmm so I don’t think that I “need” to bypass nginx.

There are a couple reasons I’d like to though.

One is that it’s inefficient. I don’t need to route all traffic through an extra hop when the box is sitting right there on my local network. And from a security standpoint, I don’t see much issue accessing a local webserver with a self-signed cert that isn’t exposed to the outside world.

Probably the bigger reason is it’s one more point of failure: if things go awry with nginx / DNS / domain, whatever… then I can’t access Nextcloud at all.

NAT reflection is working right now. I don’t see how to get split brain working without addressing the issue I’m facing, as with a proper split brain, the external domain will resolve to local.hostname, which will just redirect back to the external domain, and so on. I guess I may just need to have another go at tweaking the settings in Nextcloud. I have some familiarity with nginx and almost none with php. I imagine for the overwrite host some kind of wildcard hostname could do the trick…

Now I’m confused. Is the NGINX reverse proxy hosted outside your local network?

But why? If both the reverse proxy and the Nextcloud server are hosted locally, you won’t get any performance gains compared to split DNS. With NAT reflection it depends on the router. Also, the traffic never leaves your network in either case. With split DNS it doesn’t even hit the router, only the switch.

No but it complicates things unnecessarily while you get no real benefits in return.


No, NGINX is on my network. And yes split brain would be better than NAT reflection. So I need to get split brain working then. Not disagreeing… just saying that if split brain works, then I will also likely be able to access NC directly, as I think the issue is related. I may not have time to experiment this weekend but will be sure to write back here if I find a solution.

The overwritehost directive is required for Nextcloud because it is not “stateless” like a plain web server; it is “interactive” and needs to know how you access it in order to build proper navigation links. This is the reason why you can’t access it using different URLs (it will always redirect you to, and load assets from, the overwritehost address), as you are already seeing:
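
Concretely, this is what the overwrite settings in config.php do: every generated link and redirect is pinned to one host, no matter which URL the request came in on. A minimal sketch using the thread’s example values:

```php
// config.php fragment — with these set, Nextcloud rewrites all
// generated URLs (redirects, assets, navigation) to this one host:
'overwritehost'     => 'custom.example.com',
'overwriteprotocol' => 'https',
'overwrite.cli.url' => 'https://custom.example.com/',
```

Removing overwritehost stops the forced redirect, but then URLs are built from the incoming Host header, which only works if every access path presents a host listed in trusted_domains.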

The only good way is to make the external public URL accessible within your internal network as well (there are different aspects to this, e.g. using the same public TLS certificate for all connections). You can do split-brain DNS, send all the traffic through the internet, etc. Depending on your network and available equipment, one or the other solution could be better.

In theory you are right: a shorter path has fewer failure points. In my eyes, using the same data flow is worth this additional reverse proxy box. It simplifies configuration and troubleshooting, as you only need to troubleshoot exactly one path, independent of where your client is sitting. And it is efficient enough: I bet you can’t even measure a significant slowdown, and a human definitely doesn’t notice it.

This is a common misunderstanding (caused by browser vendors): a self-signed certificate protects your traffic just as well as a public one. The only difference is trust. Common sense is to treat established public CAs as trusted and assume their certificates validate the owner of a web server well enough, which in general does not apply to self-signed certificates. But given the fact that you issued this certificate yourself, it is likely much more trusted in your eyes than every other cert in the world. Unfortunately, there is no easy way to tell all your devices that you trust this self-signed certificate, and that gets harder every day, especially if you’re not the only user of the system…

I think this is the problem. I played around with it some more this morning and am about to throw in the towel. So I’m interpreting this as meaning there is no way for me to access Nextcloud from both local.hostname & external.domain?

I tried redoing my nginx config again. First I re-enabled HTTPS on Nextcloud, and then I was hitting the “too many redirects” issue again. Adding a redirect block

    if ($scheme = http) {
       return 301 https://$server_name$request_uri;
    }
resolved that, but then I was faced with 502 Bad Gateway errors. I’ve read quite a few posts but can’t get to a solution.
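
One common cause of 502s after re-enabling HTTPS on the snap is that proxy_pass still targets the plain-HTTP port while the backend now only answers TLS. A hedged sketch of the change, assuming the snap serves HTTPS on 443 at 192.168.20.15 as in the config earlier in the thread:

```nginx
location / {
    # Backend now speaks TLS, so proxy over https:// instead of http://
    proxy_pass https://192.168.20.15;
    # Send SNI so the backend picks the right vhost; nginx does not
    # verify upstream (self-signed) certs unless explicitly told to.
    proxy_ssl_server_name on;
    proxy_set_header Host $host;
}
```

Alternatively, keep HTTPS disabled on the snap and let nginx terminate TLS, which is what the working config posted above already does.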

Guess I figured I could have the reverse proxy forward traffic to local.hostname while the user keeps seeing external.domain URLs. I’ve got Home Assistant set up like this, where I can access it both from the local hostname and from a public domain.

I really don’t get the point of why you are trying so hard to keep using two different hostnames…
For me there is no reason to do so… I’m sorry, I can’t help you with your problem…


The problem with local and remote IP addresses shows up if you use, for example, Talk:
when a local user invites someone external to a talk, this will not be possible.
So you have to deal with either logging in locally and staying local, or logging in externally and having all features.

Same problem here.
I’d like to be able to use my Nextcloud even when the internet connection is down. This happens pretty often, so I can’t resolve my local service because DNS isn’t accessible. For example, Home Assistant allows using both an external DNS name and an internal unprotected HTTP connection via the local IP.

You can. There are lots of solutions: some routers have an integrated DNS server, and you can run Pi-hole, AdGuard, or many other DNS servers/resolvers and keep resolving local systems even when your internet connection is broken.
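
Even without running a DNS server, the same idea can be sketched per client with a hosts entry (the IP is a placeholder for wherever your proxy or Nextcloud box sits):

```
# /etc/hosts on a LAN client — keeps the public name resolving to a
# local address (assumed 192.168.20.5 here) even when upstream DNS
# is unreachable
192.168.20.5  custom.example.com
```

This doesn’t scale to many devices, which is why a local resolver like Pi-hole or the router’s DNS is the usual answer.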
