I am running Nextcloud on Ubuntu 16.04 with MariaDB, PHP-FPM 7.0 and Nginx.
The system is secured by a Sophos UTM firewall; I am using an SSL certificate and a DynDNS service to access Nextcloud.
The only problem is that I get the following warning/error when I access Nextcloud from an external IP address:
“Your web server is not yet properly set up to allow file synchronization, because the WebDAV interface seems to be broken.”
Accessing Nextcloud from an internal IP address works. In both cases I am using the DynDNS address to access Nextcloud.
I can also access, edit and delete files from an external IP address, but I can’t create a new file from an external IP address (forbidden). I get the same error if I access Nextcloud with the ownCloud client from an external IP address (Error downloading https://SERVER_IP:PORT/remote.php/webdav/ - server replied: Forbidden).
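A quick way to reproduce what the setup check and the sync clients do is to send a WebDAV PROPFIND request directly (hostname and credentials below are placeholders; adjust to your own instance):

```shell
# PROPFIND against the WebDAV endpoint, as the setup check does.
# A working server answers with "207 Multi-Status"; a WAF that
# blocks WebDAV verbs typically answers "403 Forbidden".
curl -i -u USER:PASSWORD -X PROPFIND -H "Depth: 0" \
  https://your.dyndns.example/remote.php/webdav/
```

Running this once from an internal and once from an external network makes it easy to see whether the firewall, rather than nginx, is rejecting the request.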
I think the problem is my nginx config. I am using the default nginx config from Nextcloud (https://docs.nextcloud.com/server/9/admin_manual/installation/nginx_nextcloud_9x.html), but I had to change the upstream php-handler to: server unix:/var/run/php/php7.0-fpm.sock;
Without that change I got a bad gateway error when accessing Nextcloud from an internal IP address, and a forbidden error when accessing it from an external IP address.
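For reference, the changed part of the nginx config looks like this (the socket path is the Ubuntu 16.04 default for php7.0-fpm; adjust it if your socket lives elsewhere):

```nginx
upstream php-handler {
    # The Nextcloud docs default to a TCP backend:
    # server 127.0.0.1:9000;
    # Ubuntu's php7.0-fpm package listens on a Unix socket instead:
    server unix:/var/run/php/php7.0-fpm.sock;
}
```

If nginx points at the TCP port while PHP-FPM only listens on the socket, nginx cannot reach the backend at all, which explains the bad gateway error.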
The problem is caused by the Sophos UTM Common Threats filter. If I deactivate this, I don’t get the WebDAV error.
Confirmed. I was also unable to sync files larger than 10MB using the Windows Desktop client.
Disabling the following filter categories fixed it:
- Protocol anomalies
- HTTP policy
- SQL Injection attacks
- XSS attacks
Note: these rules apply only to HTTPS; HTTP is redirected.
To update this thread again, these are my current settings: Protocol anomalies, SQL Injection attacks and XSS attacks are switched on; only HTTP policy is off!
Set up in the following order:
In addition, I will post my running configuration for the Sophos UTM 9.
Firmware version: 9.705-3
The first 2 “Skip Filter Rules” were mandatory; without them you cannot upload files from external.
The other skip rules are mandatory for working notifications to the Nextcloud Android app.
SQL Injection attacks is deactivated, because otherwise some users had problems with complex passwords (containing " ’ "). There shouldn’t be a security concern, because Nextcloud prevents SQL injection with prepared statements; search for it in the Nextcloud docs.
XSS attacks is deactivated, because some of our users got a “Forbidden” on the login page.
What do we have?
- Android App
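To give an idea of what such skip rules cover, the exception paths for a Nextcloud instance behind a WAF typically look something like the patterns below (illustrative only; the exact paths and syntax depend on your UTM version and install):

```nginx
# Endpoints that commonly need WAF exceptions for Nextcloud:
#   /remote.php/webdav/   - legacy WebDAV endpoint used by sync clients
#   /remote.php/dav/      - current DAV endpoint (files, calendars, contacts)
#   /ocs/                 - OCS API, used e.g. by the notifications app
# In the UTM these would be entered as path patterns on the
# corresponding skip/exception rules of the virtual webserver.
```

Matching the skip rules to these endpoints keeps the filters active for the rest of the site while letting the WebDAV verbs (PROPFIND, MKCOL, PUT, …) through where the clients need them.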
Hope this helps someone
Good timing. I am in the process of switching my setup over from DNAT to WAF.
How are you handling the TLS certs? That is, the UTM gives the option of generating Let’s Encrypt certs on its own, but the NC instance also has certs. Same FQDN for both? Confused… Or does the internal (real) web server have a cert issued to a local domain (nextcloud.local)? How does that get validated?
In my configuration:
Every virtual webserver (UTM) has a wildcard certificate in place.
Through our PKI I have created a certificate for the Nextcloud server. (nextcloud.local)
Can you offer more details on how you did that? I understand the process of getting a wildcard cert from LE, but I don’t follow how the cert for .local was created.
Sorry, I mixed some things up there.
I meant that I used our CA certificate on the Nextcloud server.
I can’t help more; my knowledge about certificates is really poor. Sorry for that.
Ref : https://community.sophos.com/utm-firewall/f/web-server-security/50424/ssl-tls-offload
If my interpretation is correct, with HTTPS configured for the virtual server, only the cert assigned to that virtual server gets passed to a connecting client, even if a self-signed cert is used for the real server.
If the real server has its own cert defined, then the virtual server should use port 443 but be defined as HTTP, and the real server should be defined as HTTPS. This would create a passthrough scenario, and it seems this implementation would defeat the protections offered by the WAF.