Can't delete group folders: "Error loading message template: Internal Server Error"

Nextcloud version: 16.0.3

3x Debian 9.3 nodes running a Kubernetes 1.14.3 cluster

Helm chart: nextcloud-1.6.2, using the nextcloud:16.0.3-apache Docker image

The issue you are facing:
I am unable to delete a group folder. It is empty and was created in the wrong place by mistake. The error message given in the browser is as described in the title: “Error loading message template: Internal Server Error”.

In the Nextcloud pod logs, I don't see any indication that the request was even received. In my nginx-ingress-controller log, I see this:
2019/07/26 14:23:06 [error] 12275#12275: *1494275 rewrite or internal redirection cycle while internally redirecting to "/index.php/core/templates/message.html", client: 192.168.1.34, server: nextcloud.apps.mydomain.com, request: "GET /core/templates/message.html HTTP/2.0"

Is this the first time you’ve seen this error? (Y/N):
Yes

Steps to replicate it:

  1. Create new group folder
  2. Click the delete button

The output of your Nextcloud log in Admin > Logging:
Nothing is logged at the time of the attempt; here's the last half hour's worth:

Debug	cron	Finished OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 253 in 3 seconds	
2019-07-26T10:30:43-0400
Debug	cron	Finished OC\Settings\BackgroundJobs\VerifyUserData job with ID 252 in 3 seconds	
2019-07-26T10:30:41-0400
Debug	cron	Run OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 253	
2019-07-26T10:30:40-0400
Debug	cron	Run OC\Settings\BackgroundJobs\VerifyUserData job with ID 252	
2019-07-26T10:30:38-0400
Debug	cron	Finished OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 251 in 3 seconds	
2019-07-26T10:30:09-0400
Debug	cron	Run OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 251	
2019-07-26T10:30:06-0400
Debug	cron	Finished OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 250 in 3 seconds	
2019-07-26T10:17:23-0400
Debug	cron	Run OCA\LookupServerConnector\BackgroundJobs\RetryJob job with ID 250	
2019-07-26T10:17:20-0400

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

root@nextcloud-76d4f689c4-ghmbh:/var/www/html/config# cat config.php
<?php
$CONFIG = array (
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'passwordsalt' => 'garlic',
  'secret' => 'the password is 12345',
  'trusted_domains' => 
  array (
    0 => 'localhost',
    1 => 'nextcloud.apps.mydomain.com',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '16.0.3.0',
  'overwrite.cli.url' => 'http://localhost',
  'dbname' => 'nextcloud',
  'installed' => true,
  'instanceid' => 'oc9z27a56dqn',
  'maintenance' => false,
  'dbhost' => 'nextcloud-mariadb',
  'dbuser' => 'nextcloud',
  'dbpassword' => 'WouldntYouLikeToKnow',
  'data-fingerprint' => 'insert hash here',
  'loglevel' => '0',
);

The output of your Apache/nginx/system log in /var/log/____:
Based on the symlinks in /var/log/apache2 pointing to /dev/stdout and /dev/stderr, I assume Apache's output goes to the Kubernetes pod log, but that log shows nothing from my attempt to delete the group folder; I just see AJAX requests for notifications.

The helm chart was set up with this values.yaml:

## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
  repository: nextcloud
  tag: 16.0.3-apache
  pullPolicy: IfNotPresent
  # pullSecrets:
  #   - myRegistrKeySecretName

nameOverride: ""
fullnameOverride: ""

# Number of replicas to be deployed
replicaCount: 1

## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: gitlab-cluster-issuer
    nginx.ingress.kubernetes.io/server-snippet: |-
      server_tokens off;
      proxy_hide_header X-Powered-By;

      rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
      rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
      rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
      location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$request_uri;
        # Optional: Don't log access to other assets
        access_log off;
      }
      location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
      }
      location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
        deny all;
      }
  tls:
     - secretName: nextcloud-tls
       hosts:
         - nextcloud.apps.mydomain.com

nextcloud:
  host: nextcloud.apps.mydomain.com
  username: admin
  password: LoveSecretSexGod (ever see the Hackers movie? :P)
  update: 0
  datadir: /var/www/html/data
  tableprefix:
  mail:
    enabled: false
    fromAddress: user
    domain: domain.com
    smtp:
      host: domain.com
      secure: ssl
      port: 465
      authtype: LOGIN
      name: user
      password: pass
  # Extra config files created in /var/www/html/config/
  # ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
  configs: {}

  # For example, to use S3 as primary storage
  # ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
  #
  #  configs:
  #    s3.config.php: |-
  #      <?php
  #      $CONFIG = array (
  #        'objectstore' => array(
  #          'class' => '\\OC\\Files\\ObjectStore\\S3',
  #          'arguments' => array(
  #            'bucket'     => 'my-bucket',
  #            'autocreate' => true,
  #            'key'        => 'xxx',
  #            'secret'     => 'xxx',
  #            'region'     => 'us-east-1',
  #            'use_ssl'    => true
  #          )
  #        )
  #      );

internalDatabase:
  enabled: false
  name: nextcloud

##
## External database configuration
##
externalDatabase:
  enabled: false

  ## Supported database engines: mysql or postgresql
  type: mysql

  ## Database host
  host:

  ## Database user
  user: nextcloud

  ## Database password
  password:

  ## Database name
  database: nextcloud

##
## MariaDB chart configuration
##
mariadb:
  ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
  enabled: true

  db:
    name: nextcloud
    user: nextcloud
    password: SomeKindaRealGoodPasswordHere


  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: 8Gi

redis:
  enabled: false
  usePassword: false

## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#cron-jobs
##
cronjob:
  enabled: true
  # Every 15 minutes
  # Note: Setting this to any other value than 15 minutes might
  #  cause issues with how nextcloud background jobs are executed
  schedule: "*/15 * * * *"
  annotations: {}
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 2

service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: nil

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## nextcloud data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  accessMode: ReadWriteOnce
  size: 30Gi

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

nodeSelector: {}

tolerations: []

affinity: {}
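One untested hypothesis about the redirect cycle: in the server-snippet above, a request for /core/templates/message.html matches the `location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$` block, and its try_files fallback rewrites the request to /index.php/core/templates/message.html. That fallback URI still ends in .html, so it re-matches the same location block, and since the snippet defines no location that actually serves index.php, nginx loops until it hits its internal-redirect limit (10) and aborts with exactly the error shown earlier. A sketch of one possible adjustment (not a verified fix, and it would need adapting to this ingress setup) would be to keep .html requests out of that block so the fallback cannot re-enter it:

```nginx
# Sketch, not a verified fix: drop "html" from the asset regex so the
# /index.php$request_uri fallback (which still ends in .html) cannot
# re-match this same location and start an internal redirect cycle.
location ~ \.(?:png|ttf|ico|jpg|jpeg)$ {
  try_files $uri /index.php$request_uri;
  access_log off;
}
```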

I provisioned the cluster with Kubespray v2.10.4, which installs Kubernetes 1.14.3. I then installed GitLab 12.1 through the cloud-native Helm chart, which comes with nginx-ingress-controller and cert-manager, and created a cluster-wide certificate issuer, "gitlab-cluster-issuer", that handles all Let's Encrypt certificate requests through the cert-manager pod that GitLab deployed.

I also tried this on a fresh installation, in case the problem was caused by the botched restore described in my previous thread, with the same result. Ordinary operations work fine: I can create files and folders, upload, and delete files and folders in my personal space.

In addition, though I didn't expect it to work given that the group folders admin page can't delete it, trying to delete the folder from the Files interface (i.e. selecting the folder, clicking Actions, then Delete) produces this error log:

Debug	webdav	Sabre\DAV\Exception\Forbidden: at apps/dav/lib/Connector/Sabre/Directory.php line 314	2019-07-26T14:41:06+00:00

  0. 3rdparty/sabre/dav/lib/DAV/Tree.php line 179
     OCA\DAV\Connector\Sabre\Directory->delete()
  1. 3rdparty/sabre/dav/lib/DAV/CorePlugin.php line 287
     Sabre\DAV\Tree->delete("files\/dennisf\/TestNewGroupFolder")
  2. <<closure>>
     Sabre\DAV\CorePlugin->httpDelete(Sabre\HTTP\Request {}, Sabre\HTTP\Response {})
  3. 3rdparty/sabre/event/lib/EventEmitterTrait.php line 105
     call_user_func_array([], [])
  4. 3rdparty/sabre/dav/lib/DAV/Server.php line 479
     Sabre\Event\EventEmitter->emit("method:DELETE", [])
  5. 3rdparty/sabre/dav/lib/DAV/Server.php line 254
     Sabre\DAV\Server->invokeMethod(Sabre\HTTP\Request {}, Sabre\HTTP\Response {})
  6. apps/dav/lib/Server.php line 316
     Sabre\DAV\Server->exec()
  7. apps/dav/appinfo/v2/remote.php line 35
     OCA\DAV\Server->exec()
  8. remote.php line 163
     require_once("\/var\/www\/html\/apps\/dav\/appinfo\/v2\/remote.php")