How to migrate from an existing Ubuntu 16.04 installation to Docker?




I have successfully installed Nextcloud 15 on Ubuntu server 16.04 from scratch.

It has worked fine for a few weeks, but I think Docker is better for maintenance and I would like to migrate my existing installation to Docker.

As I’m new to Docker I would like some advice before trying to achieve this. I have already created some Singularity containers to help students and I’m quite familiar with Linux administration, but I don’t know Docker…

I would like to migrate my whole installation to a docker container (Ubuntu, Apache, PHP, MySQL and Nextcloud) but I don’t know where to start for my particular config… even though I have read this:

I’ll try to give you as much information as I can about my installation, but feel free to ask if something is missing.

First you have to know that no data is physically stored on the server: everything, including the primary storage, lives in non-Amazon S3 buckets.

My Linux distro is Ubuntu Server 16.04 with the following packages:
Apache/2.4.37 (Ubuntu)
MySQL 5.7.25-0ubuntu0.16.04.2 (Ubuntu)
PHP 7.0.32-0ubuntu0.16.04.1

I use HTTP/2 (over HTTPS)
I use a Let’s Encrypt certificate for nextcloud.mydomain.tld

Here is my Nextcloud config (only the beginning, but I can post the whole file if you think it could be relevant):

    "system": {
        "objectstore": {
            "class": "OC\\Files\\ObjectStore\\S3",
            "arguments": {
                "bucket": "nextcloud-primary",
                "autocreate": true,
                "key": "xxxxx",
                "secret": "xxxxx",
                "hostname": "storage.provider.tld",
                "use_ssl": true,
                "use_path_style": true
            }
        },
        "instanceid": "***REMOVED SENSITIVE VALUE***",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": [ ... ],
        "datadirectory": "\/var\/www\/nextcloud\/data",
        "dbtype": "mysql",
        "version": "",
        "dbname": "***REMOVED SENSITIVE VALUE***",
        "dbhost": "***REMOVED SENSITIVE VALUE***",
        "dbport": "",
        "dbtableprefix": "oc_",
        "mysql.utf8mb4": true,
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "installed": true,
        "memcache.local": "\\OC\\Memcache\\APCu",
        "memcache.locking": "\\OC\\Memcache\\Redis",
        "redis": {
            "host": "***REMOVED SENSITIVE VALUE***",
            "port": 6379
        }
    }

And this is my Apache vhost:

<IfModule mod_ssl.c>
        <VirtualHost *:443>
                ServerAdmin me@mydomain.tld
                ServerName nextcloud.mydomain.tld
                DocumentRoot /var/www/nextcloud
                # HTTP2
                Protocols h2 h2c http/1.1

                Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
                Header always set Referrer-Policy "no-referrer"

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                <Directory /var/www/nextcloud/>
                        Options +FollowSymlinks
                        AllowOverride All

                        <IfModule mod_dav.c>
                                Dav off
                        </IfModule>

                        SetEnv HOME /var/www/nextcloud
                        SetEnv HTTP_HOME /var/www/nextcloud
                </Directory>

                SSLCertificateFile /etc/letsencrypt/live/nextcloud.mydomain.tld/fullchain.pem
                SSLCertificateKeyFile /etc/letsencrypt/live/nextcloud.mydomain.tld/privkey.pem

                SSLEngine on
                SSLProtocol all -TLSv1 -TLSv1.1 -SSLv2 -SSLv3
                SSLHonorCipherOrder on
                SSLCompression off
                SSLOptions +StrictRequire
        </VirtualHost>
</IfModule>

Any help would be appreciated.



um. all in one container? no. that’s possible, but not docker style.
each server/service in one container: web server, php, database and nextcloud = 4 containers. if one image (service) gets an update it will be replaced, and you don’t need to worry about the other services.
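in compose terms that service-per-container split could be sketched like this (a generic illustration, not taken from my playbook — image tags, passwords and the volume layout are placeholders):

```yaml
version: "3"

services:
  db:                          # database container
    image: mariadb:10.3
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme

  redis:                       # file-locking cache
    image: redis:alpine
    restart: always

  app:                         # nextcloud + php-fpm
    image: nextcloud:15-fpm
    restart: always
    depends_on:
      - db
      - redis
    volumes:
      - nextcloud:/var/www/html

  web:                         # web server in front of php-fpm
    image: nginx:alpine        # still needs an nginx config mounted in, omitted here
    restart: always
    ports:
      - "80:80"
    depends_on:
      - app
    volumes:
      - nextcloud:/var/www/html:ro

volumes:
  db:
  nextcloud:
```

updating one service then means replacing just that image, e.g. `docker-compose pull db && docker-compose up -d db`, without touching the others.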

docker tutorial.
and then

that will give you a running setup, and you can learn how docker works.
if you want a web-gui for docker (portainer), do the following:

git clone
cd nextcloud_on_docker
git checkout portainer
vim inventory
ansible-playbook nextdocker.yml

the branch portainer was created today. i want to test it a bit more; that’s why i didn’t merge it yet.

i would make a copy of the data to a second bucket. like a backup.
have a look at . could be helpful.
and just copy the objectstore definition to the new config.php
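the objectstore part of the json dump above translates to config.php syntax roughly like this (same values as in the first post; key/secret stay placeholders):

```php
// config.php fragment: S3-compatible bucket as primary storage
'objectstore' => array(
    'class' => 'OC\\Files\\ObjectStore\\S3',
    'arguments' => array(
        'bucket' => 'nextcloud-primary',
        'autocreate' => true,
        'key' => 'xxxxx',
        'secret' => 'xxxxx',
        'hostname' => 'storage.provider.tld',
        'use_ssl' => true,
        'use_path_style' => true,
    ),
),
```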

let me know if it’s working and i will integrate it into my playbook. :wink:

leave this behind. welcome to dockerland. it will run on centos, debian, ubuntu, … but you won’t care.

you need:
a database dump
the nextcloud data folder (for testing always a copy!)
some parts of the config.php
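collecting those three things on the old 16.04 host could look like this (db user/name and paths are the ones from this thread — adjust to your values):

```shell
mkdir -p /root/migration

# 1. database dump (will prompt for the db password)
mysqldump --single-transaction -u nextcloud -p nextcloud > /root/migration/nextcloud-db.sql

# 2. a copy of the data folder -- always work on the copy, never the original
rsync -a /var/www/nextcloud/data/ /root/migration/data/

# 3. the old config.php, to copy the objectstore/instanceid parts from later
cp /var/www/nextcloud/config/config.php /root/migration/config.php.old
```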

in my setup traefik will handle this for you. just set the nextcloud_server_fqdn variable in the inventory.
best is to set up a nexttest.yourdomain.tld machine for testing.

in the nextcloud data folder you’ll find a folder appdata_REMOVED SENSITIVE VALUE
if you copy the data files from your nc installation to the new one, make sure that the id in the folder name is the same as in the config.php

this will import your mysql dump into the container mysql db:

docker exec -i nextcloud-db mysql -unextcloud -p<password> nextcloud < data.sql

unless you changed the db name and/or user in the inventory.


you may also have a look at this page

a cool collection of docker compose files. nextcloud is just one of them.
if you want to extend your service a bit…


Thank you for all this information, I’m a little more aware of Docker usage now.

Before you answered my post I had already successfully set up a Nextcloud instance with docker-compose on a nexttest.yourdomain.tld machine for testing, using this:

What do you think about that ? What are the differences, and the pros and cons, compared to your solution ?

  • the example is using docker volumes instead of directories in the hosts filesystem.

  • my playbook installs adminer. if you need a tool to access your database.

  • my playbook installs portainer. if you need a web gui for docker.

  • my playbook uses traefik as a reverse proxy handling letsencrypt and all security settings.

  • my playbook can remove all containers (-e state=absent) and reinstall.

  • if you want another application you just copy one of the files in dockercontainer and edit the main.yml there. (imho the ansible stuff is pretty simple to understand.)

  • my playbook installs docker and sets up the os.

  • ready to log in to your nextcloud 15-20 minutes after launching the server.

  • using cloud-init from the folder cloud-stuff you can launch hundreds of nextclouds within minutes.


I have read your README and it seems very cool, I will try it. Anyway, some things remain unclear for the Docker newbie I am.

Yes, in fact that’s what I meant…

Yes, I’m already using this software and I never do anything before making a backup. Backup is life.

How to do that ? How can I modify a file inside the Nextcloud container ? And how will this modification be persistent ? See related questions at the end of the post.

You’ll be the first to know :wink:

Yes, that’s why I want to migrate to docker.

Again, how do I make a copy ? What is the command line ?

I like it when there is a command line, this step is very clear.

Well, here are the pending questions for me when using your containers :

How do I back up the containers (the containers themselves, data, database, webserver config, certificates, etc.) ?
How do I start/stop the containers ?
How do I make the containers start at server boot ?
How do I “update” the containers without losing data ?
How do I restore everything if I need to reinstall the server ?
I use php-fpm, HTTP/2, APCu and Redis. Is that the case with your containers ?

Could you explain ?

Thanks for your patience :no_mouth:


Ok. This will answer some of your questions.

Run the playbook.
Browse the folder

sudo ls -l /opt/nextcloud
sudo docker ps
sudo docker ps
ansible-playbook nextdocker.yml -e state=absent
docker ps
ls -l /opt/nextcloud
ansible-playbook nextdocker.yml
sudo docker ps
sudo vi /opt/nextcloud/config/config.php

Backup/restore will be answered later.


Thanks, that’s clearer for me now and everything works fine.

I will try to import the data directory, the database and the config file tomorrow.


the backup script would look like this:


#!/bin/bash

export RESTIC_REPOSITORY="<restic-repo>"
export RESTIC_PASSWORD="<restic-passwd>"

# abort the entire script if any command fails
set -e

# make sure maintenance mode is switched off again when we are done
trap "sudo docker exec -u www-data nextcloud php occ maintenance:mode --off" EXIT

# put nextcloud into maintenance mode
sudo docker exec -u www-data nextcloud php occ maintenance:mode --on

# backup the database (no -t here: a tty would mangle the redirected dump)
sudo docker exec nextcloud-db mysqldump --single-transaction -h localhost -u nextcloud -p{{ nc_db_passwd }} nextcloud > /opt/nextcloud/database_dump/db_mysql_nextcloud.sql

# backup the data dir
/usr/local/bin/restic backup /opt/nextcloud --exclude /opt/nextcloud/database

# turn maintenance mode off
sudo docker exec -u www-data nextcloud php occ maintenance:mode --off

# delete the trap
trap "" EXIT

# thin out old snapshots in the backup repo
/usr/local/bin/restic forget --prune --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --keep-yearly 75
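the `set -e` plus `trap … EXIT` combination at the top is what guarantees maintenance mode is switched off again even when a command in the middle fails. a tiny standalone demo of that pattern:

```shell
# demo: "cleanup" is printed even though `false` aborts the inner script
bash -c '
set -e
trap "echo cleanup" EXIT
echo working
false
echo never reached
' || true
```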

you have to edit the two restic parameters at the beginning, and the database password.

and - if you want - you can also make a backup of the s3 bucket. either your provider offers something equivalent to what aws has, or it can be done with rclone.
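with rclone a bucket-to-bucket copy can be a one-liner (the remote name “s3remote” and the backup bucket are placeholders you’d define via `rclone config` first):

```shell
# one-way mirror of the primary bucket into a second, backup bucket
rclone sync s3remote:nextcloud-primary s3remote:nextcloud-backup
```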

restoring a single file: i don’t know, i have no experience with s3 as primary storage.
look here and here

restore everything:
run the playbook.
sudo docker exec -u www-data nextcloud php occ maintenance:mode --on

restic restore latest --target / # see the restic homepage for how to pick a snapshot
import the database dump

well. you’ll do this when you migrate to docker. so you’ll know if it’s working.


Are you talking about ‘/opt/nextcloud/www/data’ or ‘/opt/nextcloud/data’ ?

This folder doesn’t exist in my Nextcloud source data folder, maybe because I’m using an S3 bucket as primary storage ?

ls -l /var/www/nextcloud/data/
total 12468
-rw-r--r-- 1 www-data www-data        0 janv. 26 21:41 index.html
-rw-r----- 1 www-data www-data 12729189 févr.  3 18:40 nextcloud.log
-rw-r--r-- 1 www-data www-data    28150 janv. 26 21:40 updater.log
drwxr-xr-x 4 www-data www-data     4096 janv. 26 21:41 updater-xxxxxxxxx

Assuming ‘/opt/nextcloud/data’ is the folder you’re talking about, I’ll try to copy the content of my Nextcloud source data folder to ‘/opt/nextcloud/data/’, but how do I keep or set the user and group to 82 after the copy ?
Because, as you can see, the content of ‘/opt/nextcloud/’ belongs to user and group 82, which seems to be the container’s equivalent of www-data, but this user does not exist on the Unix host, so how do I set 82 as owner ?

ls -l /opt/nextcloud/
total 24
drwxr-xr-x  2     82     82 4096 Feb  3 18:12 config
drwxrwx---  5     82     82 4096 Feb  3 18:15 data
drwx------  5 mysql  mysql  4096 Feb  3 18:11 database
drwx------  2 myuser myuser 4096 Feb  3 18:17 secrets
drwxr-x---  2 root   myuser 4096 Feb  3 18:10 traefik
drwxr-xr-x 15     82 root   4096 Feb  3 18:12 www

Are you talking about ‘/opt/nextcloud/www/config/config.php’ or ‘/opt/nextcloud/config/config.php’ ?
Again, how do I set the owner to 82 for this file ?

Another question: should I stop the containers before copying files ?


/opt/nextcloud/config/config.php is mapped to the container /var/www/html/config/config.php
so it is the config.php nextcloud is using.

/opt/nextcloud/www/config is empty on my machine.

chown 82 will change the ownership of the files. (it should be www-data; i have to correct this in a future version of the playbook. you can create a matching user on the host with adduser --disabled-login --no-create-home --system --uid 82 www-data as well.)
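concretely, copying in as root and then handing everything over to the container uid/gid could look like this (chown accepts numeric ids even though uid 82 has no name on the host):

```shell
# copy preserving permissions/timestamps, then chown to the container's ids
sudo cp -a /var/www/nextcloud/data/. /opt/nextcloud/data/
sudo chown -R 82:82 /opt/nextcloud/data
```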

i would stop the nextcloud container, edit the config.php, import the database dump, and restart the container. just rerun my playbook to do so, or try using portainer.
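spelled out as commands (container names as used earlier in this thread, password placeholder as before):

```shell
sudo docker stop nextcloud                        # app container only; the db keeps running
sudo vi /opt/nextcloud/config/config.php          # paste the objectstore block, check the instanceid
sudo docker exec -i nextcloud-db mysql -unextcloud -p<password> nextcloud < data.sql
sudo docker start nextcloud
```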

you’ll find the mapping of the host directories to container ones here:

and the host path definitions here:


I would use docker-compose (installed through python-pip), with an nginx-proxy setup. To me it is the easiest way to go. Create the core installation and then modify it to access your S3 storage as primary.

I have offered free live help here but no one seems to be interested, or they are scared of a scammer, go figure. But if you’d like to reach me, I am currently online (until 7 PM CST).