After upgrading from 28 to 29 I get "Data directory and your files are probably accessible from the Internet"

Hello all,

After upgrading Nextcloud from 28 to 29, this message appears in the alerts:

“Your data directory and your files are probably accessible from the Internet. The .htaccess file is not working. We strongly suggest that you configure your web server in a way that the data directory is no longer accessible or you move the data directory outside the web server document root.”

I've seen a lot of posts on this topic for Apache, but I'm using nginx and nothing has solved my problem!

The data directory has never been located inside the web server's document root, so I think the message is a false positive, but who knows!

Here is my /config/nginx/site-confs/default.conf:

upstream php-handler {
    server 127.0.0.1:9000;   # adjust to your PHP-FPM address/port
}

server {
    listen 80;
    listen [::]:80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name _;
    ssl_certificate /config/keys/cert.crt;
    ssl_certificate_key /config/keys/cert.key;

    # Add headers to serve security related headers
    # Before enabling Strict-Transport-Security headers please read into this
    # topic first.
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
    # WARNING: Only add the preload option once you have read about its
    # consequences. It will add the domain to a hardcoded list that is
    # shipped in all major browsers, and getting removed from this list
    # could take several months.

    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                      "no-referrer"   always;
    add_header X-Content-Type-Options               "nosniff"       always;
    add_header X-Download-Options                   "noopen"        always;
    add_header X-Frame-Options                      "SAMEORIGIN"    always;
    add_header X-Permitted-Cross-Domain-Policies    "none"          always;
    add_header X-Robots-Tag                         "noindex, nofollow"          always;
    add_header X-XSS-Protection                     "1; mode=block" always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    root /app/www/public;

    # display real ip in nginx logs when connected through reverse proxy via docker network
    real_ip_header X-Forwarded-For;

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that Nginx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The following rules are borrowed from `.htaccess`

        location = /.well-known/carddav     { return 301 /remote.php/dav/; }
        location = /.well-known/caldav      { return 301 /remote.php/dav/; }
        # Anything else is dynamically handled by Nextcloud
        location ^~ /.well-known            { return 301 /index.php$uri; }

        try_files $uri $uri/ =404;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)              { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ \.(?:css|js|svg|gif)$ {
        try_files $uri /index.php$request_uri;
        expires 6M;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }

    location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }
}

Thanks in advance for your help!

The way this check operates changed. It now runs server-side rather than client-side. As a result it runs the check via all your configured trusted_domains and overwrite.cli.url. Any chance you have some entries there that don’t belong or perhaps even resolve (via DNS) to somewhere other than your Nextcloud Server or something?
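If you want to reproduce roughly what the server-side check does, you can probe the data directory by hand. This is only a sketch: cloud.example.com is a placeholder for one of your trusted_domains, and the marker file name differs between versions (.ncdata on 29, .ocdata on older releases):

```shell
# Probe files inside the data directory the way the setup check does.
# A correctly locked-down server should answer 403 or 404, never 200.
# cloud.example.com is a placeholder -- substitute your own domain.
for f in .ncdata .ocdata .htaccess; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "https://cloud.example.com/data/$f")
    echo "/data/$f -> $code"
done
```

If any of these come back 200, the deny rules in your web server config really aren't being applied.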


Hi, I’m not the OP, but I am facing the same issue as he is. I’m using docker on a Synology NAS at home, and serving it to the internet using DDNS. Just like the OP, it started occurring after upgrading from 28 to 29.

My trusted_domains array and overwrite.cli.url are set to my subdomain URL as well as the NAS's internal IP. Thinking about it, though, the subdomain URL resolves to my network's public IP address, which is at the router, NOT at the NAS machine. Is this the culprit?

There is a thread on this in German, though for Apache: "htaccess-Datei funktioniert nicht"

I also have this issue, after running NC for the last couple of years. What file should we check and/or change?

Check the config.php file in the nextcloud/config folder:

1. trusted_domains

for example:

'trusted_domains' =>
array (
  0 => '',
),

2. trusted_proxies

for example:

'trusted_proxies' =>
array (
  0 => '',
),

3. overwrite.cli.url

for example:

'overwrite.cli.url' => '',

4. overwriteprotocol

for example:

'overwriteprotocol' => 'https',

Here is a link explaining config.php:
Configuration Parameters — Nextcloud Administration Manual
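Putting those four settings together, a typical config.php for a proxied setup looks roughly like this. All values are placeholders (cloud.example.com, 192.168.1.50, 192.0.2.10) — substitute your own domain and addresses:

```php
<?php
$CONFIG = array (
  // Every name you use to reach Nextcloud must be listed here.
  'trusted_domains' =>
  array (
    0 => 'cloud.example.com',   // placeholder: your domain
    1 => '192.168.1.50',        // placeholder: e.g. the LAN IP of the box
  ),
  // Only needed when a reverse proxy sits in front of Nextcloud.
  'trusted_proxies' =>
  array (
    0 => '192.0.2.10',          // placeholder: your proxy's IP
  ),
  'overwrite.cli.url' => 'https://cloud.example.com',
  'overwriteprotocol' => 'https',
);
```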


I have two NC instances with the same problem.
One is hosted at a private cloud provider with a dedicated DNS entry, and the second at home with a DynDNS entry.
I don't see anything wrong with trusted_domains and overwrite.cli.url. :thinking:

To be precise, I'm using a docker container from [] and I updated the image for both instances at the same time; I'll check whether there is a problem on that side…

Niiiiiiiice! It works for me! It was a trusted_domains problem…
And that's one less problem!


Got the same error after upgrading to Nextcloud 29 (Hub 8). I am using nextcloud docker with an nginx as reverse proxy (which is on a different host). I don't have the trusted_proxies config setting at all. My config.php is set as follows:

  'trusted_domains' => 
  array (
    0 => '',

  'overwrite.cli.url' => '',
  'overwriteprotocol' => 'https',

If I remove trusted_domains, Nextcloud doesn't work. If I remove overwrite.cli.url and overwriteprotocol, Nextcloud still works but shows a lot of other errors. I'm really struggling to fix this while also being worried that data is exposed.

Found this: Frequent Nextcloud 29 (Hub 8) update issues — but it doesn't really help.

There are some website security scanners out there that check your website for vulnerabilities.
Google for "website SSL check" and look for a serious scanner.

Same issue here. I’m using the recommended swag nginx config instructions:

## Version 2024/04/25
# make sure that your nextcloud container is named nextcloud
# make sure that your dns has a cname set for nextcloud
# assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#  'trusted_proxies' => [gethostbyname('swag')],
#  'overwrite.cli.url' => '',
#  'overwritehost' => '',
#  'overwriteprotocol' => 'https',
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#  array (
#    0 => '', # This line may look different on your setup, don't modify it.
#    1 => '',
#  ),

The solution for me was to remove the entry 'localhost' from 'trusted_domains' in config.php.
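For reference, that change looks like this (cloud.example.com is a placeholder for your actual domain):

```php
// Before (triggered the warning in this poster's setup):
'trusted_domains' =>
array (
  0 => 'localhost',
  1 => 'cloud.example.com',   // placeholder domain
),

// After -- only the names you actually use to reach Nextcloud:
'trusted_domains' =>
array (
  0 => 'cloud.example.com',   // placeholder domain
),
```

This fits the explanation above: since the check now runs against every entry in trusted_domains, a leftover 'localhost' entry can make it probe something other than your Nextcloud.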

That didn't solve the issue in my case!
Thanks anyway :slightly_smiling_face:

Guys, here is my solution:
Configure your reverse proxy settings properly in config.php.
For example, I am using Cloudflare as the main reverse proxy,
so I have to list all possible IP addresses that can send requests to my Nextcloud server:

'trusted_proxies' => ['', '', '', …]

When 'trusted_proxies' is present, you must also include the parameter 'forwarded_for_headers' => ['HTTP_X_FORWARDED', 'HTTP_FORWARDED_FOR'] in config.php.
All of the above will not only fix the error but also protect your Nextcloud from various attacks.
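In config.php that combination looks like this. The ranges below are placeholders (documentation addresses) — for Cloudflare you would list Cloudflare's published IP ranges instead:

```php
// Sketch only: replace the placeholder ranges with the real ranges of
// whatever sits in front of Nextcloud.
'trusted_proxies' =>
array (
  0 => '203.0.113.0/24',   // placeholder: a proxy CIDR range
  1 => '198.51.100.7',     // placeholder: a single proxy IP
),
'forwarded_for_headers' =>
array (
  0 => 'HTTP_X_FORWARDED',
  1 => 'HTTP_FORWARDED_FOR',
),
```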

Good luck all of you :wink:


Don’t remove it. You should have at least one entry in there: the domain you use to access Nextcloud.

Since you're using Docker, please go into the container and determine whether your domain resolves to the same IP address as it does from outside of the container.
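A quick way to compare (cloud.example.com is a placeholder for your domain): run this once on the host and once inside the container, e.g. via docker exec -it nextcloud sh, and diff the output:

```shell
#!/bin/sh
# Print only the resolved addresses for a hostname, so the host and
# container results can be compared directly.
DOMAIN="${1:-cloud.example.com}"   # placeholder default; pass your real domain
getent hosts "$DOMAIN" | awk '{ print $1 }'
```

If the two runs print different addresses (e.g. the container resolves to your router's public IP, hairpin-NAT style), that mismatch is a likely cause of the false positive.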

I can't remove 'trusted_domains' anyway, because otherwise Nextcloud won't work at all. However, whatever IP I put in 'trusted_proxies', I still get the error.