Backing up and restoring the Kubernetes/Helm edition

I was rocking the stable/nextcloud Helm chart with the bundled MariaDB (I ran helm install with my values.yaml and was up and running within a couple of minutes <3).
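
For reference, the install was roughly this one-liner (Helm 2 syntax; on Helm 3 the release name is positional rather than a --name flag):

helm install stable/nextcloud --name nextcloud -f values.yaml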

I just had to move my applications to a new Kubernetes cluster.

Here’s how I backed up:

mkdir nextcloud
cd nextcloud
kubectl cp nextcloud-8d49f879b-bkql6:/var/www/html backup/
kubectl exec nextcloud-mariadb-master-0 -- bash -c "mysqldump --single-transaction -u\$MARIADB_USER -p\$MARIADB_PASSWORD --opt nextcloud" > nextcloud.sql
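
If I were doing this again, I'd put Nextcloud into maintenance mode around the file copy and the dump so they stay consistent with each other, and resolve the pod name instead of hardcoding it. Untested sketch; the label selector is a guess and may differ by chart version:

APP_POD=$(kubectl get pods -l app.kubernetes.io/name=nextcloud -o jsonpath='{.items[0].metadata.name}')  # selector is a guess; check your chart's labels
kubectl exec "$APP_POD" -- su -s /bin/sh -c "php /var/www/html/occ maintenance:mode --on" www-data
kubectl cp "$APP_POD":/var/www/html backup/
kubectl exec nextcloud-mariadb-master-0 -- bash -c "mysqldump --single-transaction -u\$MARIADB_USER -p\$MARIADB_PASSWORD --opt nextcloud" > nextcloud.sql
kubectl exec "$APP_POD" -- su -s /bin/sh -c "php /var/www/html/occ maintenance:mode --off" www-data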

Here’s my values.yaml:

## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
  repository: nextcloud
  tag: 16.0.3-apache
  pullPolicy: IfNotPresent
  # pullSecrets:
  #   - myRegistryKeySecretName

nameOverride: ""
fullnameOverride: ""

# Number of replicas to be deployed
replicaCount: 1

## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations:
    certmanager.k8s.io/cluster-issuer: gitlab-issuer
  #  nginx.ingress.kubernetes.io/proxy-body-size: 4G
  #  kubernetes.io/tls-acme: "true"
  #  nginx.ingress.kubernetes.io/server-snippet: |-
  #    server_tokens off;
  #    proxy_hide_header X-Powered-By;

  #    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
  #    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
  #    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
  #    location = /.well-known/carddav {
  #      return 301 $scheme://$host/remote.php/dav;
  #    }
  #    location = /.well-known/caldav {
  #      return 301 $scheme://$host/remote.php/dav;
  #    }
  #    location = /robots.txt {
  #      allow all;
  #      log_not_found off;
  #      access_log off;
  #    }
  #    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
  #      try_files $uri /index.php$request_uri;
  #      # Optional: Don't log access to other assets
  #      access_log off;
  #    }
  #    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
  #      deny all;
  #    }
  #    location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
  #      deny all;
  #    }
  tls:
    - secretName: nextcloud-tls
      hosts:
        - nextcloud.apps.mydomain.com

nextcloud:
  host: nextcloud.apps.mydomain.com
  username: admin
  password: NothingToSeeHereMoveAlong
  update: 0
  datadir: /var/www/html/data
  tableprefix:
  mail:
    enabled: false
    fromAddress: user
    domain: domain.com
    smtp:
      host: domain.com
      secure: ssl
      port: 465
      authtype: LOGIN
      name: user
      password: pass
  # Extra config files created in /var/www/html/config/
  # ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
  configs: {}

  # For example, to use S3 as primary storage
  # ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
  #
  #  configs:
  #    s3.config.php: |-
  #      <?php
  #      $CONFIG = array (
  #        'objectstore' => array(
  #          'class' => '\\OC\\Files\\ObjectStore\\S3',
  #          'arguments' => array(
  #            'bucket'     => 'my-bucket',
  #            'autocreate' => true,
  #            'key'        => 'xxx',
  #            'secret'     => 'xxx',
  #            'region'     => 'us-east-1',
  #            'use_ssl'    => true
  #          )
  #        )
  #      );

internalDatabase:
  enabled: false
  name: nextcloud

##
## External database configuration
##
externalDatabase:
  enabled: false

  ## Supported database engines: mysql or postgresql
  type: mysql

  ## Database host
  host:

  ## Database user
  user: nextcloud

  ## Database password
  password:

  ## Database name
  database: nextcloud

##
## MariaDB chart configuration
##
mariadb:
  ## Whether to deploy a MariaDB server to satisfy the application's database
  ## requirements. To use an external database, set this to false and
  ## configure the externalDatabase parameters above.
  enabled: true

  db:
    name: nextcloud
    user: nextcloud
    password: Wouldn'tYouLikeToKnow


  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: 8Gi

redis:
  enabled: false
  usePassword: false

## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#cron-jobs
##
cronjob:
  enabled: true
  # Every 15 minutes
  # Note: Setting this to any value other than 15 minutes might
  #  cause issues with how nextcloud background jobs are executed
  schedule: "*/15 * * * *"
  annotations: {}
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 2

service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: nil

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## nextcloud data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  accessMode: ReadWriteOnce
  size: 30Gi

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

nodeSelector: {}

tolerations: []

affinity: {}

When I started working on the restore, I deployed with this same values.yaml, but had a hell of a time getting the MariaDB pods to stay afloat. The root password was apparently never initialized; it was just blank. I had to run something along the lines of the following commands to get things back in shape, spread across several restarts of the container since it kept getting booted by Kubernetes. (I think I also tried the occ maintenance:install command, but that was unsuccessful.)

kubectl exec -it nextcloud-mariadb-master-0 -- bash
export | grep -i db # find the database passwords for copy/paste, since I couldn't remember how to read environment variables from the mysql shell
mysql -uroot
use mysql;
CREATE USER 'nextcloud'@'%' IDENTIFIED BY 'password';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO 'nextcloud'@'%';
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('password');
exit # back to the mariadb container shell
exit # back to the host shell
kubectl cp nextcloud.sql nextcloud-mariadb-master-0:/tmp/
kubectl exec -it nextcloud-mariadb-master-0 -- bash -c "mysql -unextcloud -p\$MARIADB_PASSWORD nextcloud < /tmp/nextcloud.sql"

Then I copied the config/ and data/ dirs over again, and after a pod restart or two it seems to be up and running. Did I miss a step when trying to restore initially, or is this just not a well-beaten path? Any tips for future restores, should they be necessary?
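
For the record, copying the dirs back was along these lines; the chown is there because kubectl cp lands the files owned by whatever user the container runs as (root, in the official apache image), and Nextcloud wants them owned by www-data. Same guessed label selector as above, so adapt to your chart:

NEW_POD=$(kubectl get pods -l app.kubernetes.io/name=nextcloud -o jsonpath='{.items[0].metadata.name}')
kubectl cp backup/config "$NEW_POD":/var/www/html/config
kubectl cp backup/data "$NEW_POD":/var/www/html/data
kubectl exec "$NEW_POD" -- chown -R www-data:www-data /var/www/html/config /var/www/html/data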

One thing I can think of: it was an internal (SQLite) database deployment initially, but I later converted it to MariaDB, if that makes a difference.
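
(For anyone curious, that conversion goes through occ's db:convert-type, roughly like this, with nextcloud-mariadb being my guess at the in-cluster service name; it prompts for the DB password:)

kubectl exec -it <nextcloud-pod> -- su -s /bin/sh -c "php /var/www/html/occ db:convert-type --all-apps mysql nextcloud nextcloud-mariadb nextcloud" www-data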