Let's Encrypt certificate renewal fails


Nextcloud version (eg, 18.0.2): 18.0.4
Operating system and version (eg, Ubuntu 20.04): Ubuntu 18.04 LTS
Apache or nginx version (eg, Apache 2.4.25): nginx 1.14.0
PHP version (eg, 7.1): php 7.2.30

The issue you are facing:
Let’s Encrypt renewal fails. Also see the thread “Let’s Encrypt renewal fails” on the Let’s Encrypt community forum.

Is this the first time you’ve seen this error? (Y/N): yes

Steps to replicate it:

  1. run certbot renew

The output of your Nextcloud log in Admin > Logging:

No errors pertaining to certbot

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

$CONFIG = array (
  'blacklisted_files' => 
  array (
    0 => '.htaccess',
    1 => 'Thumbs.db',
    2 => 'thumbs.db',
  ),
  'instanceid' => 'random',
  'passwordsalt' => 'pepper',
  'secret' => 'reallysecret',
  'trusted_domains' => 
  array (
    0 => 'home.mecallie.com',
  ),
  'datadirectory' => '/var/nc_data/',
  'overwrite.cli.url' => 'https://home.mecallie.com',
  'dbtype' => 'mysql',
  'version' => '',
  'dbname' => 'db',
  'dbhost' => 'localhost',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => '*',
  'dbpassword' => '*',
  'installed' => true,
  'enable_previews' => true,
  'filesystem_check_changes' => 0,
  'filelocking.enabled' => 'true',
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'preview_max_x' => 1024,
  'preview_max_y' => 768,
  'preview_max_scale_factor' => 1,
  'mail_from_address' => 'nextcloud',
  'mail_smtpmode' => 'smtp',
  'mail_smtpauthtype' => 'LOGIN',
  'mail_domain' => 'mecallie.com',
  'maintenance' => false,
  'theme' => '',
  'loglevel' => 2,
  'mail_smtpsecure' => 'ssl',
  'mail_smtpauth' => 1,
  'mail_smtphost' => 'mail.host.nl',
  'mail_smtpport' => '465',
  'mail_smtpname' => 'mark@mecallie.com',
  'mail_smtppassword' => '*',
  'twofactor_enforced' => 'true',
  'twofactor_enforced_groups' => 
  array (
  ),
  'twofactor_enforced_excluded_groups' => 
  array (
  ),
  'mail_sendmailmode' => 'smtp',
  'app_install_overwrite' => 
  array (
    0 => 'calendar',
    1 => 'tasks',
    2 => 'radio',
    3 => 'occweb',
    4 => 'dicomviewer',
  ),
);

The output of the certbot command:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: home.mecallie.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for home.mecallie.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. home.mecallie.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: During secondary validation: Fetching http://home.mecallie.com/.well-known/acme-challenge/f8fytAqbouZpKrlHgVkFYj3BGNW0ZjKywPXlNIxKozI: Error getting validation data

 - The following errors were reported by the server:

   Domain: home.mecallie.com
   Type:   connection
   Detail: During secondary validation: Fetching
   Error getting validation data

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address. Additionally, please check that
   your computer has a publicly routable IP address and that no
   firewalls are preventing the server from communicating with the
   client. If you're using the webroot plugin, you should also verify
   that you are serving files from the webroot path you provided.

I did not change anything on my setup that I am aware of. I did, however, restore a backup of the entire server from a snapshot after I lost the disk it was running on. The server is up and running, but the renewal fails every time. The weird thing is: the renewal with --dry-run passes…

Can no one here help me troubleshoot this issue?

I also noticed that my Calendars and Contacts refuse to sync with my phone (access denied error). My Nextcloud itself runs fine, though. I have checked the .well-known locations, which are mentioned in nginx.conf, and nothing should have changed there, correct? Any hints are appreciated!

This is saying Let’s Encrypt could not connect to your server on port 80, which is required for certbot’s http-01 challenge.
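
One way to check that concretely is to probe port 80 both on the server itself and from a machine outside your network; Let’s Encrypt validates from the outside, so only the external result matters. A minimal sketch using the shell’s /dev/tcp redirection, with 127.0.0.1 as a stand-in host (replace it with home.mecallie.com when probing from outside):

```shell
# Probe TCP port 80. 127.0.0.1 only checks that nginx is listening locally;
# run the same probe with host=home.mecallie.com from OUTSIDE your network
# to see what Let's Encrypt sees.
host=127.0.0.1
if (exec 3<>"/dev/tcp/$host/80") 2>/dev/null; then
  echo "port 80 open on $host"
else
  echo "port 80 closed/unreachable on $host"
fi
```

The /dev/tcp trick is a bash feature; on shells without it the probe simply reports the port as unreachable, so treat a "closed" result from a non-bash shell with suspicion and re-test with curl or nc.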

I know it cannot. That’s what the person on the Let’s Encrypt forum told me as well. But for the life of me I do not understand WHY. And why does the --dry-run not fail? Surely that tests port 80 as well?

I have forwarded port 80 on my router (as it has been for over two years). When I go to my site via http I am redirected to https. Works fine. I did not change anything on the server where Nextcloud is running, nor did I change anything on the Nextcloud config itself.

I have checked that the firewall on the server ufw is still open.

I really do not think this is a matter of the port not being open. I have tried placing a text file in /var/www/letsencrypt/ to see if I can open it via http, but I have no clue what the URL would then be. Something like http://home.mecallie.com/well-known/acme-challenge/ ? That seems to simply redirect me to my Nextcloud home :frowning:
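
For what it’s worth, the URL-to-file mapping is mechanical: with `root /var/www/letsencrypt;` inside a `location ^~ /.well-known/acme-challenge/` block, nginx appends the full request URI (leading dot included) to the root. A sketch using a throwaway directory as a stand-in for /var/www/letsencrypt, with a hypothetical test.txt:

```shell
# Stand-in for /var/www/letsencrypt -- the real test uses that path instead.
WEBROOT="$(mktemp -d)"
URI="/.well-known/acme-challenge/test.txt"

# nginx serves $WEBROOT$URI for a request to $URI, so the file must sit in
# the .well-known/acme-challenge/ subdirectory INSIDE the webroot:
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "reachable" > "$WEBROOT$URI"
cat "$WEBROOT$URI"
```

The matching test URL would then be http://home.mecallie.com/.well-known/acme-challenge/test.txt — note the leading dot in .well-known, which the URL tried in the post above is missing; without it the request falls through to the catch-all redirect.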

Hmm. You could try using zerossl.com and doing the HTTP verification manually to isolate whether the problem is with certbot.

Had this error once with Cloudflare proxying my web server, and with my firewall at home (Sophos SG) trying to fetch its own certificate: ports 80 and 443 had to be open for the FW.

Usually, with 80 and 443 TCP NAT-ted towards the internal IP of your instance (matching the configured hostname), everything should work; mostly the error is something not directly connected to your Nextcloud.

Probably for security reasons. You need to demonstrate you have control of the server. Any random low-privilege user could trick certbot into granting a certificate to a domain using a non-standard port, so Let’s Encrypt requires the user to demonstrate control of port 80.

Nowhere. You can’t access Let’s Encrypt certificates or keys from a URL like that, you should try sticking a file in your Nextcloud path (you know, where your .php files can be found).

Thanks. As I said: firewall ports 80 and 443 are open, as they have been.
But Let’s Encrypt is giving a 403 error, which means forbidden. If the port were closed it would give a connection error, not a 403.

I have double checked to see if my firewall does ssl interception: it’s off.

I don’t know why it is not accepting the challenge. I do know why certbot wants to use ports 80 and 443.


If you were to take a look at how certbot works, you would see that it places a challenge/key in the directory that is specified when it tries to renew a certificate. That is to make sure that the DNS is still pointing to the correct web server. It works much the same as the DNS challenge, except the key is stored on your web server, not your DNS server.

When I look at my config, the .well-known/acme-challenge/ location is pointing at /var/www/letsencrypt/. That is where Let’s Encrypt is storing the temporary key to check if the config/DNS is still valid. So I SHOULD be able to place a random file there and read it from Nextcloud; otherwise I would never be able to renew my certificate.

I just need to know how to open up that directory to the public in nginx. It might even be that the permissions on the directory itself are not correct; I don’t know if nginx would give a 403 then.
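
On the permissions question: nginx can indeed return 403 when its worker user (www-data here) lacks read permission on the file or execute (traverse) permission on any parent directory. A sketch of modes that work, again in a scratch directory standing in for /var/www/letsencrypt, with a hypothetical test.txt:

```shell
# Scratch stand-in for /var/www/letsencrypt; run the same chmod and listing
# against the real path on the server.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/.well-known/acme-challenge"
echo token > "$ROOT/.well-known/acme-challenge/test.txt"

# www-data needs x on every directory in the chain and r on the file:
chmod 755 "$ROOT" "$ROOT/.well-known" "$ROOT/.well-known/acme-challenge"
chmod 644 "$ROOT/.well-known/acme-challenge/test.txt"

# List the modes of each path component ("namei -l <path>" from util-linux
# shows the same chain in one go, if installed):
ls -ld "$ROOT" "$ROOT/.well-known" "$ROOT/.well-known/acme-challenge" \
       "$ROOT/.well-known/acme-challenge/test.txt"
```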

I know how certbot works; I’ve been using it since 2016, when it was still called letsencrypt.

I know that, but you literally said you were placing a file in “/var/www/letsencrypt/”, not your Nextcloud path.

No, DNS authentication is completely different, and requires that you place a TXT record in your DNS service. Webroot authentication places a file on your webserver (in the website’s path), demonstrating you have control of the server.

DNS auth demonstrates you have control of the domain, while webroot demonstrates control of the webserver. These are similar concepts but operate completely differently in practice.

You appear to be confusing the two, and you don’t seem to understand where your Nextcloud instance resides. For example, if your Nextcloud instance is located in /var/www/nextcloud/ and your domain to access Nextcloud is https://home.mecallie.com, then placing the file “foo.jpg” in /var/www/nextcloud/ would make the file accessible via https://home.mecallie.com/foo.jpg.

Nothing placed in /var/www/letsencrypt/ will be accessible via https://home.mecallie.com, because your Nextcloud domain doesn’t use /var/www/letsencrypt/, it uses /var/www/nextcloud/. That is, unless you’ve changed all the defaults for some unexplained reason. It’s hard to know for sure, because you haven’t told us where your Nextcloud instance is or how you’ve configured nginx. I guess it’s possible that you’ve installed Nextcloud into a directory labelled letsencrypt, but it would be a very bizarre thing to do.

Again, you’re talking about the wrong config. You keep talking about Nextcloud and Let’s Encrypt configurations, when the issue is obviously with your nginx configuration.

Also, you’re using words like “key” incorrectly. The public and private keys generated by Let’s Encrypt are stored in /etc/letsencrypt. The private key is never exposed to the web, and the public key is part of the TLS handshake of HTTPS. What you’re calling a “key” is a temporary file or token containing a random string, and it’s exposed on the root domain to demonstrate certbot has admin access to the website. The certbot documentation never calls it a “key”, because that term has a very specific meaning in the context of obtaining TLS certificates.

Only if the root of your Nextcloud domain points to /var/www/letsencrypt/. Installing Nextcloud in a directory labelled “letsencrypt” is wildly inappropriate, and is a sign you’ll have future errors (which you are now experiencing).

Now secondarily, even if Let’s Encrypt is able to renew the certificate via webroot on your current configuration, that doesn’t mean your website is actually using the updated certificate. Again, you’ve provided no information on how your domain or nginx instance is configured, so it’s impossible to say what’s wrong.
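
That point is easy to check with openssl: compare the certificate file on disk under /etc/letsencrypt/live/ with the one nginx actually serves. A sketch below uses a throwaway self-signed certificate as a stand-in for fullchain.pem, purely so the commands are self-contained; on the real server you would point `-in` at the live/ path instead:

```shell
# Throwaway self-signed cert standing in for fullchain.pem (illustration only;
# on the real box use /etc/letsencrypt/live/home.mecallie.com/fullchain.pem).
CERT="$(mktemp -d)/fullchain.pem"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=home.mecallie.com" \
  -keyout /dev/null -out "$CERT" -days 90 2>/dev/null

# What's on disk:
openssl x509 -in "$CERT" -noout -subject -enddate

# What nginx is actually serving (run against the live site; needs network):
#   openssl s_client -connect home.mecallie.com:443 -servername home.mecallie.com \
#     </dev/null 2>/dev/null | openssl x509 -noout -enddate
```

If the notAfter date served over 443 is older than the one on disk, the certificate was renewed but nginx was never reloaded to pick it up.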

But I can say, it looks like you’ve changed every default location and every useful name (e.g. you’ve even separated your Nextcloud data from your Nextcloud site structure). That’s not a good idea, because it leaves you open to inconsistencies and errors (which you now have) and makes it ridiculously difficult to troubleshoot when things go wrong. You also don’t have a strong enough understanding of HTTPS, which is adding to your issue.

Then you should have started with your nginx conf and the hosts you’ve configured in it. JuergenAuer even told you to fix your nginx config in the Let’s Encrypt forum. You don’t seem to be open to the help people are trying to give you.

You’ve broken your nginx config. No one in any forum can help you if you won’t show us what you’ve done with it.

That’s at least two forums now that have told you to fix your nginx setup, or at least show us what you’ve done so we can help you fix it.

Thank you for your reply. I won’t go into all the comments separately; I don’t think that would be productive.

I agree that the problem probably lies somewhere in nginx, so I will post my setup here. I installed this Nextcloud somewhere in ’17/’18 using the Nextcloud and Let’s Encrypt setup as suggested at that time. Nothing fancy. My setup has been running since then, and I never changed any config file manually after the initial setup.


user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    server_names_hash_bucket_size 64;

    upstream php-handler {
        server unix:/run/php/php7.2-fpm.sock;
    }

    include /etc/nginx/mime.types;
    include /etc/nginx/proxy.conf;
    include /etc/nginx/ssl.conf;
    include /etc/nginx/header.conf;
    include /etc/nginx/optimization.conf;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    '"$host" sn="$server_name" '
                    'rt=$request_time '
                    'ua="$upstream_addr" us="$upstream_status" '
                    'ut="$upstream_response_time" ul="$upstream_response_length" '
                    'cs=$upstream_cache_status';

    access_log /var/log/nginx/access.log main;
    add_header X-Frame-Options "SAMEORIGIN" always;
    sendfile on;
    send_timeout 3600;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=500 inactive=10m;
    open_file_cache_errors on;
    keepalive_timeout 65;
    reset_timedout_connection on;
    server_tokens off;
    # resolver IP is your Router-IP (e.g. your FritzBox)
    resolver_timeout 10s;
    include /etc/nginx/conf.d/*.conf;
}

Files in conf.d:

server {
    if ($host = home.mecallie.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name home.mecallie.com;
    # Your DDNS address (e.g. from desec.io or no-ip.com)
    listen 80 default_server;

    location ^~ /.well-known/acme-challenge {
        proxy_set_header Host $host;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    server_name home.mecallie.com;
    listen 443 ssl http2 default_server;
    root /var/www/nextcloud/;
    access_log /var/log/nginx/nextcloud.access.log main;
    error_log /var/log/nginx/nextcloud.error.log warn;

    ssl_certificate /etc/letsencrypt/live/home.mecallie.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/home.mecallie.com/privkey.pem; # managed by Certbot

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }

    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    client_max_body_size 10240M;

    location / {
        rewrite ^ /index.php;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }

    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ \.(?:flv|mp4|mov|m4a)$ {
        mp4_buffer_size 100m;
        mp4_max_buffer_size 1024m;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

    location ~ \.(?:css|js|woff|svg|gif|png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        access_log off;
        expires 30d;
    }
}



server {
    listen default_server;
    charset utf-8;
    access_log /var/log/nginx/le.access.log main;
    error_log /var/log/nginx/le.error.log warn;

    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
        root /var/www/letsencrypt;
    }
}

There is also a default.conf in the conf.d directory, which is completely empty, along with a default.conf.bak that does include data. I have not touched these files myself, so I am not sure if they are just leftovers from some kind of update mechanism or something…

Is there no one who can help me get rid of the 403 error in nginx?

Alright, I just found out what caused the issue, although I do not understand how it worked out. The VM that was running Nextcloud was using a bridged adapter. However, there were no physical interfaces available to bridge to, since they are all in use by my firewall/router.

After connecting the Nextcloud VM to the LAN vmxnet, everything immediately started working again without errors!

The bizarre thing is that I only found out after I noticed that the network card was disconnected when I rebooted the host. I could not reconnect it; the binding failed. Before that, however, I could still reach my site and it was using the correct IP. But for some reason the certificate renewal failed.

So I am still not sure why the problems with the virtual network caused such weird issues, or why it at first kept working even though VMware told me there was no available interface after I edited the VM. But I am sure that this is what solved the issue. I am guessing some kind of cache in VMware Workstation Pro still bound the VM to the correct interface, even though that was now assigned to a different vmxnet.

Just thought I’d still mention it here since it seems so bizarre…