Nextcloud - Kubernetes - Redeploying not working

Nextcloud version (eg, 12.0.2): 16.0.4
Operating system and version (eg, Ubuntu 17.04): Kubernetes / Docker
Apache or nginx version (eg, Apache 2.4.25): Docker Image nextcloud:16.0.4-apache
PHP version (eg, 7.1): Docker Image nextcloud:16.0.4-apache

The issue you are facing:

  • We are deploying Nextcloud on Kubernetes using the Helm Chart from https://github.com/helm/charts/tree/master/stable/nextcloud .
  • We changed the Docker image to be nextcloud:16.0.4-apache
  • We use the s3.config.php option to store our files on S3.
  • We use the external database option to use a MariaDB server we already have.
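Roughly, the install is driven by something like the following (release name and values are placeholders, and the value keys are from memory, so double-check them against the chart's values.yaml for your chart version):

helm install stable/nextcloud --name nextcloud \
  --set image.repository=nextcloud \
  --set image.tag=16.0.4-apache \
  --set internalDatabase.enabled=false \
  --set externalDatabase.enabled=true \
  --set externalDatabase.host=mariadb.mariadb \
  --set externalDatabase.database=nextcloud \
  --set externalDatabase.user=DB_USER \
  --set externalDatabase.password=DB_PASSWORD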

We launch Nextcloud for the first time and it creates the database correctly, creates the first user correctly, and starts up as expected. We can log in and create / upload files.
To verify our files are secure and retrievable after a major failure, we re-deploy the Nextcloud deployment (scale to 0, scale to 1).
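Concretely, the redeploy is just a scale-down/scale-up of the Nextcloud deployment, roughly like this (deployment name is an example):

kubectl scale deployment/nextcloud --replicas=0
kubectl scale deployment/nextcloud --replicas=1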
After this, the logs show the following during startup:

Initializing nextcloud 16.0.4.1 ...
Initializing finished
New nextcloud instance
Installing with MySQL database
starting nextcloud installation
The username is already being used
retrying install...
The username is already being used
retrying install...
The username is already being used
retrying install...

This is the first issue: why does it try to re-install? The database is still there, and so is the previous user. Why does it not simply connect and re-use what is already there?

After a couple of minutes the container dies and restarts, this time without failure. But when we try to browse to Nextcloud, we are greeted with the following message:

Error
It looks like you are trying to reinstall your Nextcloud. However the file CAN_INSTALL is missing from your config directory. Please create the file CAN_INSTALL in your config folder to continue.

If I create the CAN_INSTALL file, I am prompted with the installation/setup screen and am told that the admin account I want to use already exists.
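For reference, creating that file inside the running pod is roughly this (pod name is a placeholder; in the official image the config directory is /var/www/html/config):

kubectl exec -it <nextcloud-pod> -- touch /var/www/html/config/CAN_INSTALL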

Is this the first time you’ve seen this error? (Y/N): Y

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

<?php
$CONFIG = array (
  'debug' => true,
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'objectstore' => 
  array (
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => 
    array (
      'bucket' => 'nextcloud-files',
      'autocreate' => true,
      'key' => '**************',
      'secret' => '****************',
      'region' => 'eu-west-1',
      'use_ssl' => true,
    ),
  ),
  'passwordsalt' => '********************',
  'secret' => '******************',
  'trusted_domains' => 
  array (
    0 => 'localhost',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '16.0.4.1',
  'overwrite.cli.url' => 'http://localhost',
  'dbname' => 'nextcloud',
  'dbhost' => 'mariadb.mariadb',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'oc_user129',
  'dbpassword' => '*****************',
  'instanceid' => '************',
);

Any idea on how to solve this issue?

Is anyone here using Nextcloud on Kubernetes, with an S3 backend and persistent config?

I have the same problem.

I also do.
An option to update an existing user's password to the one provided by Helm would seem useful to me here.
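As a workaround, the existing admin password can be reset manually with occ inside the running pod; a rough sketch (pod name, username and password are placeholders):

kubectl exec -it <nextcloud-pod> -- su -s /bin/sh www-data -c \
  'cd /var/www/html && OC_PASS=NEW_PASSWORD php occ user:resetpassword --password-from-env admin'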

Related note: restarting pods or attempting to upgrade the Helm release leads to a non-functional Nextcloud… https://github.com/helm/charts/issues/17093

(My database pods have a different root/replication password than the generated one sitting in my Kubernetes secret nextcloud-mariadb.)

Is there a better-supported Helm chart that allows for things like restarts and upgrades?

Just a quick follow-up. It would appear this is caused in part by Helm regenerating secrets on upgrade if the root and replication passwords are not explicitly specified.

For anyone looking to deploy a new install: make sure you set mariadb.rootUser.password and mariadb.replication.password. Both must be 32 characters or less.

If you are using values.yaml, insert this below the mariadb: -> db: section:


  rootUser:
    password: GENERATE_A_GOOD_PASSWORD_AND_PUT_IT_HERE # must be 32 characters or less
    forcePassword: true

  replication:
    password: GENERATE_ANOTHER_GOOD_PASSWORD_AND_PUT_IT_HERE # must be 32 characters or less
    forcePassword: true
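If you prefer the command line over values.yaml, the equivalent --set flags would look roughly like this (release name is an example):

helm upgrade nextcloud stable/nextcloud \
  --set mariadb.rootUser.password=GENERATE_A_GOOD_PASSWORD_AND_PUT_IT_HERE \
  --set mariadb.rootUser.forcePassword=true \
  --set mariadb.replication.password=GENERATE_ANOTHER_GOOD_PASSWORD_AND_PUT_IT_HERE \
  --set mariadb.replication.forcePassword=true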

For more detail and discussion regarding automatically generated secrets, see this Helm issue: https://github.com/helm/charts/issues/5167

Last but not least: recover an old generated root password like so:

helm get nextcloud --revision N | grep password

where N is one of the revision numbers listed in helm history nextcloud

Decode the password:

base64 -d; echo

Paste the base64-encoded password, press Enter, then Ctrl+D.
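Or, as a one-liner instead of pasting interactively:

echo 'PASTE_THE_BASE64_STRING_HERE' | base64 -d; echo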

Test the password in the nextcloud mariadb container:

kubectl exec -it nextcloud-mariadb-master-0 -- bash
mysql -uroot -p

Once the right one is found:

kubectl edit secrets nextcloud-mariadb

Paste the base64-encoded data in place, then delete the pods:

kubectl delete po nextcloud-mariadb-master-0 nextcloud-mariadb-slave-0
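If you would rather not edit the secret interactively, patching it works too; a sketch (the data key name is an assumption based on the bundled MariaDB chart, so check the secret's keys first with kubectl describe secret nextcloud-mariadb):

kubectl patch secret nextcloud-mariadb \
  -p '{"data":{"mariadb-root-password":"PASTE_BASE64_ENCODED_PASSWORD_HERE"}}'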

Apologies if this is hijacking the topic; I figured people might wind up here from a search.

Don’t forget to redeploy the chart with --force.
Otherwise, the next time you need to change anything and redeploy, Helm will show you an error because it finds differences between what is deployed and what it thinks should be deployed.

That happens because Helm is not aware of the manual change you made to the secret, and by the time you next deploy you probably won't remember what is causing that error.
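In other words, the redeploy would look roughly like this (release and chart names as used above):

helm upgrade --force nextcloud stable/nextcloud -f values.yaml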