Internal server error when trying to login

I have the same issue. Watchtower upgraded the apache Docker image and now I cannot log in, and my folders return HTTP 503. I also cannot go back to 19.0 because my data has been migrated to 20.0 and downgrading is not supported. My Nextcloud is completely unusable now. Please advise on a fix. The Docker image was the only change.

Same situation on my Docker host and on my friend's host: automatic update via Watchtower, and now both are showing an internal server error.

bump. Any suggestions here to get Nextcloud back online after the update?

Welcome to the Nextcloud community! @redtux @Iceman123 @owenja6

bump. It’s been over a week with no replies from Nextcloud. Please help us get our environments working again. At the moment, we cannot use Nextcloud and its plugin features.

Found another issue which provides a workaround: "Could not decrypt key" upon login

I took a look at that, but I have never enabled encryption in my implementation and so the key files and directory structure do not exist:

root@0e2dbc9639c2:/var/www/html# find . -name OC_DEFAULT_MODULE
root@0e2dbc9639c2:/var/www/html#

Have you tried to reset the password for one user? Does the login work after a password reset?
I don’t know your configuration setup, but are you sure /var/www/html is the top of the tree where Nextcloud is storing its data (i.e., does datadirectory point to /var/www/html in config.php)?
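If it helps, the password reset can also be done from the command line. A minimal sketch, assuming the official Docker image and a container named "nextcloud" with an "admin" account (both are assumptions, adjust to your environment):

```shell
# Assumed container and account names -- change these to match your setup.
CONTAINER=nextcloud
NC_USER=admin
# occ must run as the web server user (www-data in the official image);
# user:resetpassword prompts interactively for the new password.
if command -v docker >/dev/null 2>&1; then
  docker exec -it --user www-data "$CONTAINER" php occ user:resetpassword "$NC_USER"
fi
```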

Thanks for responding @redtux. I am using the Docker image, so everything is under /var/www/html. My user data is a Docker volume (CIFS mount) that is mounted on /var/www/html/data in the container. All my users, except admin, are authenticated via Active Directory.

Would you mind running occ log:watch and attempting a login, to capture what NC thinks is going wrong? I’m not a member of the NC dev team, but perhaps I can help get your environment back to a working state.
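In the Docker image that would look roughly like this; the container name "nextcloud" is an assumption, adjust it to yours:

```shell
# Assumed container name -- change it to match your compose file.
CONTAINER=nextcloud
# Stream the Nextcloud log live while you reproduce the failed login
# in another window; occ must run as www-data in the official image.
if command -v docker >/dev/null 2>&1; then
  docker exec --user www-data "$CONTAINER" php occ log:watch
fi
```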

Sooo…it appears that there is an error running OCC:

$ docker exec --user www-data nextcloud php occ
An unhandled exception has been thrown:
Error: Interface 'OCA\Files_External\Lib\Config\IBackendProvider' not found in /var/www/html/custom_apps/files_external_gdrive/lib/AppInfo/Application.php:32
Stack trace:
#0 /var/www/html/lib/composer/composer/ClassLoader.php(444): include()
#1 /var/www/html/lib/composer/composer/ClassLoader.php(322): Composer\Autoload\includeFile('/var/www/html/c…')
#2 [internal function]: Composer\Autoload\ClassLoader->loadClass('OCA\Files_exter…')
#3 [internal function]: spl_autoload_call('OCA\Files_exter…')
#4 /var/www/html/lib/private/AppFramework/Bootstrap/Coordinator.php(108): class_exists('OCA\Files_exter…')
#5 /var/www/html/lib/base.php(645): OC\AppFramework\Bootstrap\Coordinator->runRegistration()
#6 /var/www/html/lib/base.php(1092): OC::init()
#7 /var/www/html/console.php(49): require_once('/var/www/html/l…')
#8 /var/www/html/occ(11): require_once('/var/www/html/c…')

That doesn’t sound good. Does the volume holding the data also contain custom_apps/files_external_gdrive/lib/AppInfo/Application.php? I would expect this to be separated from the user data. On my Arch Linux system, for example, apps are part of the /usr/share/webapps/nextcloud tree.

So, the custom_apps tree is under /var/www/html. On Docker, nothing is separated out. This is what is mounted in my Docker Compose file:

    volumes:
      - /data/nextcloud:/var/www/html
      - users:/var/www/html/data

Keep in mind that this has been working perfectly fine for a long time, and it has had an A+ score on securityheaders.com (behind a separate Apache2 reverse proxy). The only thing that changed is that the Docker image got upgraded to 20.0, which broke it. Now that the data is at 20.0, I cannot roll back.

It’s odd that it has stopped working. Perhaps the latest update triggered a security setting you weren’t aware of. I wish I could solve the problem outright, but I don’t know how yet. I’ll try to create a setup similar to yours. If possible, can you send me the Docker Compose file?

Thanks @redtux! I have sanitized the compose file, so it will need some editing before you test. I have my own build based on the official Nextcloud image.

I have a cron job that runs a script that checks for a new image for the apache tag of the official Nextcloud image. If it has changed, it triggers the build in my repository. My build only adds the file /usr/src/nextcloud/config/redis.config.php (below):

<?php
$CONFIG = array (
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => array(
    'host' => 'redis',
    'port' => 6379,
  ),
);

There is also the office container, which runs the Collabora CODE application. That integration with Nextcloud was successful, but it always threw an error when a user tried to open a file in the web UI. I have tried the stack without that container and still get the same HTTP 500 error when trying to access Nextcloud.

version: '2'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: unless-stopped
    container_name: mysql
    volumes:
      - /data/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_PASSWORD=secret
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      

  redis:
    image: redis:alpine
    restart: unless-stopped
    container_name: redis
    volumes:
      - data:/data

  app:
    image: otispresley/nextcloud:latest
    restart: unless-stopped
    container_name: nextcloud
    ports:
      - 8080:80
    volumes:
      - /data/nextcloud:/var/www/html
      - users:/var/www/html/data
    environment:
      - MYSQL_HOST=db
      - MYSQL_PASSWORD=secret
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    depends_on:
      - db
      - redis

  cron:
    image: otispresley/nextcloud:latest
    restart: unless-stopped
    container_name: cron
    volumes:
      - /data/nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
      
  office:
    image: collabora/code:latest
    restart: unless-stopped
    container_name: office
    ports:
      - 9980:9980
    environment:
      - domain=office\\.mydomain\\.com
    cap_add:
      - MKNOD
      
volumes:
    users:
      driver: local
      driver_opts:
         type: cifs
         device: //ip_address/Users
         o: username=myuser,password=mypass,file_mode=0770,dir_mode=0770,rw,uid=33,gid=33
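Since the file is sanitized, a quick syntax check before testing it can save a round trip. A sketch, assuming you save it as docker-compose.yml in the working directory:

```shell
# Assumed filename -- adjust if you save the compose file elsewhere.
COMPOSE_FILE=docker-compose.yml
# "config -q" validates the file and prints nothing on success.
if command -v docker-compose >/dev/null 2>&1 && [ -f "$COMPOSE_FILE" ]; then
  docker-compose -f "$COMPOSE_FILE" config -q
fi
```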

A clean install based on Docker works without reproducing the problem. If you inspect the //ip_address/Users/.../<user> directory outside the container, which directories are listed there? A clean install only contains

cache
files

In my NC env, the user directories do contain a files_encryption directory, most likely because I enabled encryption settings in the past.

Hi @redtux. Thanks for testing this. In the user directory for the user account I use regularly, there is cache, files, files_trashbin, files_versions, uploads. For a user that has only logged in once in the distant past, there is only cache and files.

I will do some more testing, but I did a little over the weekend. I started a new stack with a local volume instead of the CIFS mount and a local volume instead of the bind-mounted database path; that still reproduced the problem. Running the nextcloud container on its own worked, and I was able to complete the initial setup and log in.

I did some testing by creating a new stack with different volumes and mounts and was able to get through the initial setup. There are still a couple of challenges around headers in 20.0, and you have to manually create the .ocdata file in the data volume for it to work, but I can work around that. It is just very disappointing that the upgrade path is broken and that it forces me to start all over again from scratch.
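For reference, the .ocdata workaround boils down to recreating the marker file with the right ownership. A sketch, assuming the data volume is reachable on the host at /data/nextcloud/data (an assumption, point DATA_DIR at wherever your "users" volume is actually mounted):

```shell
# DATA_DIR is an assumed host path -- override it for your setup.
DATA_DIR="${DATA_DIR:-/data/nextcloud/data}"
if [ -d "$DATA_DIR" ]; then
  # Nextcloud refuses to start without this marker at the data root.
  touch "$DATA_DIR/.ocdata"
  # uid/gid 33 is www-data in the official apache image.
  chown 33:33 "$DATA_DIR/.ocdata"
fi
```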

I had a similar problem. I only got a blank page after the update, and using the network view in the browser’s developer tools I noticed I was getting internal server errors.
I came across a thread on Reddit that suggested deleting the nextcloud_data/custom_apps/files_external_gdrive/ folder. I had to do it twice: once so the update could run properly, and once more afterwards to be able to access the site. It looks to be working fine again, finally.
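For anyone else hitting the same trace from occ: the fix amounts to moving the incompatible app out of custom_apps so the autoloader stops loading it, then finishing the pending upgrade. A sketch, assuming the bind mount from the compose file earlier in this thread (/data/nextcloud on the host, container "nextcloud" -- adjust both):

```shell
# Assumed host path based on the bind mount shown earlier in the thread.
APP_DIR=/data/nextcloud/custom_apps/files_external_gdrive
# Move the broken app aside (safer than deleting it outright).
if [ -d "$APP_DIR" ]; then
  mv "$APP_DIR" "${APP_DIR}.disabled"
fi
# Then complete the pending upgrade from inside the container.
if command -v docker >/dev/null 2>&1; then
  docker exec --user www-data nextcloud php occ upgrade
fi
```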

That’s indeed unfortunate.