Migrating to a different host and updating from 20.x and MySQL 5.7 in one go

I have a 20.x instance which I can’t upgrade in place because it’s not dockerized and the server needs to keep running MySQL 5 for legacy software. So the plan is to migrate that instance to a new host with MariaDB 10 and do the upgrade there.

So once I have the new instance with a current version of NC up and running, will it automatically handle a restore of the files and a database dump, as outlined in “Migrating to a different server” in the Nextcloud Administration Manual? If not, what is the recommended procedure within my constraints?

It depends on what approach you take. If your intention is to keep the existing Nextcloud instance and upgrade it, you should get the existing instance running on the new server (not a new instance) and then step through upgrades of NC, PHP, etc. as appropriate.

DO NOT upgrade more than one major version at a time (e.g. do not jump from NC 20 straight to NC 22 or later).
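If the new instance ends up dockerized (which it sounds like it will), one round of that stepping could look roughly like this. This is only a sketch: the container name, volume and tags are illustrative, and you should check the logs and the UI before moving on to the next major.

# One "bump one major" round: repeat with 22-apache, 23-apache, ...
# once the previous upgrade has finished cleanly.
docker pull nextcloud:21-apache
docker stop nextcloud && docker rm nextcloud
docker run -d --name nextcloud \
    -v nextcloud_html:/var/www/html \
    nextcloud:21-apache
docker logs -f nextcloud    # the image upgrades the instance on start; Ctrl-C when done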

On the other hand, if your intention is to make a NEW Nextcloud server and migrate the FILES to it, then you would not use the old server’s DATABASE.
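In that case you would copy the files into each user’s files/ folder in the new data directory and have Nextcloud pick them up with a file scan, e.g. (the container name is illustrative; drop the docker exec wrapper on a bare-metal install):

# Re-index files that were copied into the data directory by hand:
docker exec -u www-data nextcloud php occ files:scan --all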

What you’re describing sounds like a combination of the two, which I’m afraid will probably just result in a broken setup.

Here’s what worked for me (almost):

  • pulled the mariadb:10.5 and nextcloud:20.0.14-apache images on the new host (matching the old host’s Nextcloud version)
  • started only the mariadb container and loaded the DB dump from the old host
  • rsynced the nextcloud dir from the old host and adjusted config.php: changed the DB parameters to the new server’s mariadb settings (as given in the environment variables in my docker-compose.yml), set datadirectory to the container default /var/www/html/data, and added the test host’s domain to the trusted_domains array
  • mounted that dir as a bind mount into the nextcloud container and started it (see the sketch after this list)
  • then ran through the version-to-version updates from the UI all the way up to 24, no issues
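For reference, here is the whole procedure condensed into plain docker commands. This is a sketch with placeholder names, passwords and paths, not my literal docker-compose.yml:

# Images matching the old host's Nextcloud major
docker pull mariadb:10.5
docker pull nextcloud:20.0.14-apache
docker network create nc-net

# Start only the database and load the dump from the old host
docker run -d --name db --network nc-net \
    -e MYSQL_ROOT_PASSWORD=rootpw -e MYSQL_DATABASE=nextcloud \
    -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=dbpw \
    mariadb:10.5
docker exec -i db mysql -u nextcloud -pdbpw nextcloud < dump.sql

# Copy the old instance over and adjust config.php:
#   dbhost => 'db', dbuser/dbpassword => the values above,
#   datadirectory => '/var/www/html/data',
#   trusted_domains[] => the new host's name
rsync -a olduser@oldhost:/var/www/nextcloud/ /srv/nextcloud/

# Start Nextcloud with the old tree bind-mounted in
docker run -d --name nextcloud --network nc-net \
    -v /srv/nextcloud:/var/www/html -p 8080:80 \
    nextcloud:20.0.14-apache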

The only problem I’m observing is that the security & setup warnings claim that my data directory is invalid, that I should check whether a .ocdata file exists, and that the data directory cannot be created. This is strange, because everything else works without any problems, in particular uploading new files.

I also checked the file permissions: apache runs as www-data inside the container and has write access to /var/www/html/data and /var/www/html/data/.ocdata, so I don’t understand where this is coming from.
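For reference, here is a rough shell-level stand-in for the two conditions the warning mentions (not Nextcloud’s actual test), run as the web server user:

# Is the data directory writable, and does .ocdata exist?
docker exec -u www-data nextcloud bash -c \
    'test -w /var/www/html/data && test -f /var/www/html/data/.ocdata \
     && echo "data dir and .ocdata look fine" || echo "check failed"'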

Good, sounds like you’re almost there.

What are the ownership and permissions of the .ocdata file and the data folder?

From a bash shell inside the NC container:

root@9813a0bac158:/var/www/html/data# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 15:43 ?        00:00:00 apache2 -DFOREGROUND
www-data      29       1  0 15:43 ?        00:00:01 apache2 -DFOREGROUND
...
root          42       0  0 15:48 pts/0    00:00:00 bash
root          56      42  0 15:51 pts/0    00:00:00 ps -ef
root@9813a0bac158:/var/www/html/data# ls -al
total 147376
drwxrwxrwx 26 www-data www-data      4096 Oct 24 15:49  .
drwxrwxrwx 14 www-data www-data      4096 Jan  4  2022  ..
-rw-rw-r--  1 www-data www-data       542 Nov 22  2021  .htaccess
-rw-rw-rw-  1 www-data www-data         0 Nov 22  2021  .ocdata
...

According to what I see, NC (or rather apache) should be able to write, shouldn’t it?

Seems that way. It may be pickier than just having or not having access. How does it compare to the permissions on the old one?

I just checked one of mine, and it has the data folder with drwxrwx--- www-data:www-data

And .ocdata is -rw-r--r-- www-data:www-data
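If you want to mirror that exactly, something along these lines inside the container should do it (the path assumes the default datadirectory):

# Align ownership and modes with the reference instance:
chown www-data:www-data /var/www/html/data /var/www/html/data/.ocdata
chmod 0770 /var/www/html/data
chmod 0644 /var/www/html/data/.ocdata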

The data folder is now

drwxrwx--- 26 www-data www-data      4096 Oct 24 17:49  .

and .ocdata

-rw-rw-r-- 1 www-data www-data 0 Oct 24 18:07 .ocdata

Same as on the old host, but the error/warning remains. I also tried your -rw-r--r-- on .ocdata, no change.

So this is, weirdly, related to the cron settings. Changing the background jobs setting from Cron to Webcron makes that (imho misleading) message vanish. This seems to be a known issue, e.g. as described here. Nevertheless, the fix seems to be more of a workaround than a solution…
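In case it helps anyone: the same switch can be made with occ instead of the admin UI (container name is illustrative):

# Set the background jobs mode to webcron (and back to cron, if needed):
docker exec -u www-data nextcloud php occ background:webcron
docker exec -u www-data nextcloud php occ background:cron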