Docker, reverse proxy, and trusted domains

At this time, I am trying to understand which value each variable should take for the application to behave as desired.

Once we reach that point, it may be helpful for me to share logs, configuration files, and so on. For the time being, I would rather get advice on the proper settings for the variables. I have no reason for not setting trusted_proxies other than not yet understanding what value it should have for this particular configuration.

Unfortunately, as I say, the various documents have caused me confusion about how each variable is used in different contexts.

As long as I do not understand what values the variables should take, I feel that screenshots and logs are more likely to be distracting than helpful. Thanks.

Uuhhmm…okay :roll_eyes:. I guess we all need to decide what our troubleshooting methodology looks like. I’ll leave up the link to my configuration; it may help others.

Good luck!

@conradp24: I think your idea of troubleshooting methodology is fine. It may be more helpful, however, to think of my inquiry as about understanding the meanings of the configuration variables, rather than as about troubleshooting.

Did you read the Nextcloud documentation? All the variables are clearly described there, and the Docker documentation explains how to use them in a Docker environment.

The variables as described there:

  • overwritehost set the hostname of the proxy. You can also specify a port.
  • overwriteprotocol set the protocol of the proxy. You can choose between the two options http and https.
  • overwritewebroot set the absolute web path of the proxy to the Nextcloud folder.
  • overwritecondaddr overwrite the values dependent on the remote address. The value must be a regular expression of the IP addresses of the proxy. This is useful when you use a reverse SSL proxy only for https access and you want to use the automatic detection for http access.
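
These options ultimately land in Nextcloud's config/config.php. A minimal sketch, with placeholder values (the hostname, path, and proxy address below are illustrative assumptions, not from this thread):

```php
<?php
// Fragment of config/config.php -- all values are illustrative placeholders.
$CONFIG = array (
  'overwritehost'     => 'cloud.example.com',  // hostname clients use (optionally with port)
  'overwriteprotocol' => 'https',              // protocol clients use: http or https
  'overwritewebroot'  => '/nextcloud',         // web path of the proxy to the Nextcloud folder
  'overwritecondaddr' => '^10\.0\.0\.1$',      // apply the overwrites only for requests from this proxy address
);
```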

If you still hit some issue please describe the problem in detail, so we can understand it and help you to solve it.

I had read the documentation, and found it quite confusing.

The most recent problem was resolved by setting overwriteprotocol to https, though it seems to me this particular inference could not have been drawn from the documentation alone. At present, I believe the entire installation is functioning, at least with respect to the issues immediate to my question.

I would like to suggest that the documentation for the container could be greatly improved by giving it a more direct and accessible structure, and by clarifying how the explanations of container usage relate to general usage of the application. A helpful approach would be a clear indication of whether a given explanation simply restates the general design of the application, or instead describes behavior that is added or changed in the container.

Comments such as the following would be helpful, where appropriate:

  • The following summarizes Nextcloud behavior, which is described in greater detail in the general manual.

  • Nextcloud running from the official Docker container behaves differently from a regular installation in the following ways.

The documentation for the configuration options in the general manual is, for the most part, also confusing and unclear.

Generally, helpful documentation of a configuration parameter answers several of the following questions:

  • Which components read the value?
  • When do they read the value?
  • How does the value affect their behavior?
  • In which cases should the user or administrator override the default, and what considerations apply when selecting a value?
  • What is the context that makes this parameter a helpful part of the overall design?

Following are some examples, for various parameters, of documentation that might be much more helpful than those in the current revision of the manual:

  • overwriteprotocol: May be set to either http or https.

    Nextcloud normally assumes that the client should use the same protocol (secure versus insecure HTTP) to access Nextcloud as any proxy uses, and will try to change the address for client access accordingly, by returning an HTTP redirect whenever the client uses the other protocol. If the protocol for client access differs from the protocol used by the proxy, then this value must be set to the protocol the client should use. For example, if the connection between Nextcloud and the proxy is insecure, but clients are to use a secure connection, then this value must be set to https.

  • trusted_domains: The host names, as they appear in the address, that clients are allowed to use for accessing Nextcloud.

    For security, Nextcloud allows access only through host names designated in the site configuration. Access through any other host name produces an error page. This value should always be set to the host names used for access directly by the client, even if the client is accessing Nextcloud through a reverse proxy. Note that this parameter is distinct from trusted_proxies, which establishes trust for the reverse proxy itself.
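
To make the distinction concrete, here is a hedged config.php sketch (host names and the proxy address are placeholders of my own, not values from this thread):

```php
<?php
// Fragment of config/config.php -- placeholder values.
$CONFIG = array (
  // Host names clients may use in the address bar; any other name gets an error page.
  'trusted_domains' => array (
    0 => 'localhost',
    1 => 'cloud.example.com',
  ),
  // Addresses of reverse proxies whose forwarded headers Nextcloud will trust.
  'trusted_proxies' => array ('192.168.1.10'),
);
```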

2 Likes

I agree this might be described better for beginners… You went a hard way to understand how it works - please either write a guide here or, even better, create a pull request with an improved version of the manual…

My summary - Nextcloud running behind a reverse proxy (especially in Docker) has no idea how the client can reach resources like files…

Say you have a Docker container called nextcloud and you access it using the public DNS record https://mynextcloud.dyndns.xyz. Your reverse proxy (traefik/nginx/apache) terminates TLS and sends all requests without TLS to your nextcloud container. By default, Nextcloud would use the protocol and host of the incoming request and build the URL for mycat.png as http://nextcloud/files/mycat.png. To make URLs on Nextcloud work, your container must build URLs like https://mynextcloud.dyndns.xyz/files/mycat.png, where:

  • rewriting http:// to https:// is controlled by OVERWRITEPROTOCOL
  • rewriting nextcloud to mynextcloud.dyndns.xyz is controlled by OVERWRITEHOST
  • and trusted_proxies defines the proxy hosts that are allowed to perform requests on behalf of other clients
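
With the official Docker image, this can be sketched as environment variables in a compose file (the host name and proxy address below are placeholders):

```yaml
# Sketch for the official nextcloud image; values are placeholders.
services:
  nextcloud:
    image: nextcloud
    environment:
      - OVERWRITEPROTOCOL=https               # rewrite http:// to https:// in generated URLs
      - OVERWRITEHOST=mynextcloud.dyndns.xyz  # rewrite the internal hostname to the public one
      - TRUSTED_PROXIES=172.18.0.2            # proxy address allowed to forward client requests
```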
1 Like

Based on certain comments in the documentation, I came to understand that many proxies save information about the original request in extended header fields. Might Nextcloud apply the values of those fields to resolve a resource address appropriate for the original request from the client?

The trusted_proxies variable instructs Nextcloud to check the X-Forwarded-For (and X-Real-IP?) headers. But maybe not every piece of information is passed (by every proxy). I think the overwrite* settings are the most stable solution.

My suggestion is to use whatever fields are made available by the proxy to reconstruct the client address as accurately as possible, even if those same fields may not be available from other proxies. This design simplifies site configuration in many cases. The main drawback is reduced portability of the same application deployment across different proxies, but I suggest that this problem is a smaller inconvenience than the alternative, since changing proxies on the same site is not a common event.
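
The reconstruction I have in mind could be sketched roughly as follows. The header names and fallbacks here are my assumptions about typical proxies, not Nextcloud's actual implementation:

```python
# Hypothetical sketch: rebuild the client-facing base URL from the
# forwarding headers a reverse proxy may add. Header names and fallback
# values are assumptions for illustration only.

def client_base_url(headers, default_scheme="http", default_host="nextcloud"):
    """Reconstruct the URL the client used, preferring proxy-set headers."""
    scheme = headers.get("X-Forwarded-Proto", default_scheme)
    host = headers.get("X-Forwarded-Host", default_host)
    return f"{scheme}://{host}"

# A TLS-terminating proxy typically sets both headers:
print(client_base_url({
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Host": "mynextcloud.dyndns.xyz",
}))  # https://mynextcloud.dyndns.xyz

# Without them, the application falls back to what it sees directly:
print(client_base_url({}))  # http://nextcloud
```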

With respect to the logic currently in place, explained to some degree in the manual, and which you have tried to clarify, I remain very confused. Hopefully, someone who understands the design will be able to update the manual to make it more helpful for those with limited familiarity with these systems.

If I might make one more suggestion: rather than separate fields for hosts and protocols, I would find it simpler to provide a list of full base addresses for client access (e.g. http://cloud.domain.xyz), and let the application choose the best match from the list based on the fields available in the request header.

I agree a full FQDN-like “base URL” (full address, in your words) would be the simplest solution for administration, programming, and understanding… Most likely the current situation results from history, and since everybody knows how it works, there is little motivation to change it… :stuck_out_tongue: I assume multiple different base URLs are not really common, so there is no widespread need to add this…

You are welcome to adopt the code and file a pull request at Github.

Well, I never feel good when someone comments “everyone knows”, followed by something I don’t know.

I believe it is common, actually. Many hosts are accessed by different names or protocols, especially when the same host is used both within and beyond a local network.

After I think I solved a proxy problem similar to brainchild’s, I got a new one:

Your data directory and files are probably accessible from the internet because the .htaccess file does not work. For information how to properly configure your server, please see the documentation.

I found the following potential solution; however, my brain is burnt out and I don’t know what to do with this: [FIXED] Htaccess file is not working

I tried to edit config.php by accessing Docker from the command prompt and PowerShell (Win10); however, once I reached the files, neither nano config.php nor vi config.php would work, saying it wasn’t a “bash” command or something to that end… I also tried notepad config.php and notepad.exe config.php with the same fate! Kill me now!
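
(A common workaround, assuming the container is named nextcloud: the official image ships without text editors, so you either install one inside the container, or copy the file out to the host, edit it there, and copy it back. The commands below are a sketch; adjust the container name and paths to your setup.)

```shell
# Option 1: install an editor inside the container, then edit in place.
docker exec -it nextcloud bash -c "apt-get update && apt-get install -y nano"
docker exec -it nextcloud nano /var/www/html/config/config.php

# Option 2: copy the file to the host, edit with any Windows editor, copy back.
docker cp nextcloud:/var/www/html/config/config.php .
notepad config.php
docker cp config.php nextcloud:/var/www/html/config/config.php
```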

Thanks to @wwe and everyone else that posted… I did fix one of my issues, similar to brainchild’s, after reading through the post and reading the documentation exhaustively. @brainchild, not sure if you fixed your issue, but adding the overwrite host/protocol/webroot to mine did the trick - I’m sharing my config in case it helps someone else.

Potential Solution to ProxyPass issue

My Setup:

  • Machine A: Win 10 + NGINX proxy server with SSL installed
  • Machine B: Win 10 + Docker for Windows + Portainer
  • Portainer: helps me install docker compose under the “stack” option (maybe this isn’t new for anybody, but took me a while to figure it out :slight_smile: )

References for the example:

Steps:

  • Proxy machine A was already working and serving content securely with a similar setup for an awesome feed aggregator called TinyTinyRss, which is installed on Machine B - I won’t go too deep into the details of my proxy setup, but the config is shown below for reference
  • I deployed the dockerized example file shown below (with some hiccups, as I was experiencing the same issues as @brainchild - however, I think I have overcome those and now have the new one mentioned above - htaccess!)
  • As mentioned: computer A proxies computer B
  • I have other stacks deployed aside from nextcloud, working fine through this same system (TTRSS - yes, completely network-overloaded and inefficient, but needed for my setup and I can’t figure out another way for now… don’t judge me :frowning: lol)
  • links included from where I found some info:

version: "3" 
# some from: https://www.cloudsavvyit.com/12476/how-to-self-host-a-collaborative-cloud-with-nextcloud-and-docker/    

services:
  nextcloud:
    image: nextcloud:latest # not sure if this is the best/optimal choice - recommendations/comments welcome!
    restart: always
    ports:
      - 8080:80
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=db
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=passExample
      - NEXTCLOUD_TRUSTED_DOMAINS=localhost example.mainsite.com # options separated by a space as per documentation - not sure if I need more/less at this point
      - TRUSTED_PROXIES=192.168.1.A example.mainsite.com # options separated by a space as per documentation - not sure if I need more/less at this point
      - OVERWRITEHOST=example.mainsite.com # from: https://help.nextcloud.com/t/docker-reverse-proxy-and-trusted-domains/117891/26
      - OVERWRITEPROTOCOL=https
      - OVERWRITEWEBROOT=nextcloud  # from documentation and help from the forum - thank U!
    volumes:
      - nextcloud:/var/www/html

  db:
    image: mariadb 
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=passExample
      - MYSQL_PASSWORD=passExample
      - MYSQL_DATABASE=db
      - MYSQL_USER=nextcloud

volumes:
  db:
  nextcloud:


Partial Proxy Pass config from nginx:

#
### NextCloud ###
location /nextcloud/ {
    rewrite /nextcloud/(.*) /$1 break;
    proxy_pass http://192.168.1.B:8080/;

    proxy_redirect http://192.168.1.B/ https://;
    proxy_http_version 1.1;
    proxy_set_header Host $host; 
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
#
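
(Two things worth noting about the snippet above: the $connection_upgrade variable is not built in and needs a map block in the http context, and the config never sets X-Forwarded-Proto, which a backend could otherwise use to detect HTTPS. A hedged sketch of both additions:)

```nginx
# In the http {} context, outside any server block:
# maps the client's Upgrade header to a suitable Connection value.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Inside the /nextcloud/ location, alongside the other headers:
# tells the backend which protocol the client actually used.
proxy_set_header X-Forwarded-Proto $scheme;
```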

Hey guys, I think I am in the same situation. I am using ngrok, which is a reverse proxy. Basically, what it does is give your localhost a URL which you can use to access it from anywhere. Once I have the URL, I get the trusted domain error when I click on the link.