Tutorial: How to migrate mass data to a new NextCloud server


Fab, I read through your tutorial and I was relieved to see it was possible to migrate mass data onto my Nextcloud quickly. Thank you.

I followed your tutorial and I can see that the (test) file I attempted to move was in fact relocated to the destination.
But when I ran
sudo -u www-data php console.php files:scan --all
in /var/www/nextcloud, an error was returned:

    Your data directory is invalid
    Ensure there is a file called ".ocdata" in the root of the data directory.

    Your data directory is readable by other users
    Please change the permissions to 0770 so that the directory cannot be listed by other users.

    An unhandled exception has been thrown:
    Exception: Environment not properly prepared. in /var/www/nextcloud/lib/private/Console/Application.php:148

    Stack trace:
    #0 /var/www/nextcloud/console.php(89): OC\Console\Application->loadCommands(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))

Also, I can't even access any of my user accounts. I'm getting an error that says:

    Your data directory is invalid
    Ensure there is a file called ".ocdata" in the root of the data directory.
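For what it's worth, both of those checks can usually be satisfied by hand. A minimal sketch, assuming a stock Apache/Debian layout (adjust the data path to your install):

```shell
# Recreate the hidden marker file (copy tools sometimes skip dotfiles)
# and tighten the data directory permissions as the error demands.
DATA=/var/www/nextcloud/data            # example path; adjust to your setup
touch "$DATA/.ocdata"
chown www-data:www-data "$DATA/.ocdata"
chmod 0770 "$DATA"
```

After that, the files:scan command should get past the environment check.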

In my case, I didn't realize that the files needed to be inside the user's "files" folder (/data/username/files) and I was placing them in the /data/username folder. The scan indexed the files, but they didn't show up in the web interface. Moving them to the correct subdirectory and rescanning fixed it.
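For anyone else tripping over this, a minimal sketch of the correct layout and rescan (the user name and paths are placeholders):

```shell
# Files must live under data/<user>/files, not directly under data/<user>
DATA=/var/www/nextcloud/data          # adjust to your data directory
USERNAME=alice                        # placeholder user
mkdir -p "$DATA/$USERNAME/files"
cp -a /path/to/imported/. "$DATA/$USERNAME/files/"
chown -R www-data:www-data "$DATA/$USERNAME"
sudo -u www-data php /var/www/nextcloud/occ files:scan "$USERNAME"
```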


Hi. How can I migrate when Nextcloud uses S3 as external storage? Because the directory structure doesn't exist.

For example, I have a directory structure with files (/tmp/user1, say, with 1 GB of data distributed across 100 files) and I need to upload that structure and its files into a specific user's account. But I don't know that user's password; I only know the admin username and password.

How can I do this?

My regards.
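Not a full answer, but since you can't drop files into a local data directory when S3 is the storage backend, one workaround is to upload over WebDAV and let Nextcloud write to S3 itself. This assumes you're willing to reset the user's password from the server shell (occ has a user:resetpassword command); the URL and password below are placeholders:

```shell
# Reset the target user's password as admin (server shell, not the web UI)
sudo -u www-data php /var/www/nextcloud/occ user:resetpassword user1

# Push the whole tree over WebDAV with rclone's on-the-fly webdav backend
rclone copy /tmp/user1 :webdav: \
  --webdav-url "https://cloud.example.com/remote.php/dav/files/user1" \
  --webdav-vendor nextcloud \
  --webdav-user user1 \
  --webdav-pass "$(rclone obscure 'new-password')"
```

Uploading through WebDAV also means no files:scan is needed afterwards, since Nextcloud registers the files as they arrive.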

It would be nice to do something like this from one NextCloud iocage to another NextCloud iocage. This is because my instance was set up from a third-party install and some features don't work.

My db and files folders are located outside of the iocage/jail. Theoretically, if it's safe to remap the files folder without losing data, it should be as simple as editing the files location, but I've yet to get any replies on the subject, aside from "don't know"s. There are no comments or contacts, etc., and the notes that do exist would be easy to copy over.

Thanks in advance!


I am new to the community and have a question regarding this post.
I have a one-node Ganeti cluster (Google's cluster manager) with multiple VMs; Nextcloud and the backup VMs are the only VMs with external data storage. The Ganeti master (host) and all VMs run Ubuntu 16.04.

  • For Nextcloud I have a RAID 5 array attached over NFS from the host (Ganeti) to the guest (Nextcloud)
  • I am using Samba 4 AD to manage user accounts
  • each user's folder in the data directory is named after that user's UUID

Now I want to move from Ubuntu to Debian for the Ganeti master (host) as well as for the VMs (guests). That means I have to delete everything, keep the existing RAID, and reinstall the host and guests.
So when I set up a new Samba 4 AD and create the users again, they will have different UUIDs.
Is there any way to migrate the data of the existing accounts to the new accounts with the new UUIDs?

Thanks in advance
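In case it helps: as long as the old data directory survives on the RAID, one low-tech approach is to copy each old account's files into the matching new account's directory and re-index. The UUIDs and paths below are placeholders:

```shell
# Copy the old account's files into the freshly created account, fix ownership,
# then let Nextcloud re-index them.
DATA=/var/www/nextcloud/data          # adjust to your data directory
OLD_UUID=old-user-uuid                # placeholder
NEW_UUID=new-user-uuid                # placeholder
mkdir -p "$DATA/$NEW_UUID/files"
cp -a "$DATA/$OLD_UUID/files/." "$DATA/$NEW_UUID/files/"
chown -R www-data:www-data "$DATA/$NEW_UUID"
sudo -u www-data php /var/www/nextcloud/occ files:scan "$NEW_UUID"
```

Note that this only carries the files themselves; shares, comments, and tags live in the database keyed to the old UUID and won't follow.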

After moving to Debian I decided to move from mysql-5.7 to mariadb-server 10.1.37-0+deb9u1 from the default stretch repo. So I did the following:

  1. installed all the packages from the default Debian repo which are needed for Nextcloud
  2. created my Nextcloud vhost with the new server name
  3. copied the /var/www/nextcloud/ directory to the new server
  4. made an SQL dump on the old server and copied it to the new server

The first problem is that in mariadb-server 10.1.37 innodb_large_prefix is not enabled (while it is in mysql-server 5.7), which caused me a headache with some tables while importing the mysqldump. So I did the following:

# mysql -u root -p
MariaDB [(none)]> SET GLOBAL innodb_file_format=Barracuda;
MariaDB [(none)]> SET GLOBAL innodb_file_per_table=ON;
MariaDB [(none)]> SET GLOBAL innodb_large_prefix=1;

then logged out of mysql. Next I created the nextcloud database, imported the mysqldump, and upgraded:

mysql -u root -p -e "CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci"
mysql -u root -p nextcloud < /tmp/nextcloud.sql
mysql_upgrade -u root -p --force

Then I added the certificate for the Apache SSL vhost and LDAPS, and restarted apache2 and mysql.

The last step was to add the new server's IP to the host's /etc/exports and to auto-mount the share via /etc/fstab on the new Nextcloud server.

And there you go, everything is there and works perfectly.
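One caveat on the SET GLOBAL step above: those settings are lost when MariaDB restarts. To make them permanent, they can also go into a my.cnf fragment (the file path below is an example):

```ini
# /etc/mysql/mariadb.conf.d/99-nextcloud.cnf
[mysqld]
innodb_file_format = Barracuda
innodb_file_per_table = ON
innodb_large_prefix = 1
```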


If the new Nextcloud server has a different host name, you have to do the following:

  • replace the server name in the apache vhost

  • replace the server name in /var/www/nextcloud/config/config.php

  • replace all occurrences of the old server name with the new one in the nextcloud.sql dump before importing it into the database:

    cp nextcloud.sql nextcloud-org.sql
    sed -i 's/old\.cloud\.server/new.cloud.server/g' nextcloud.sql



If you are using MariaDB from Debian buster, which is mariadb-server 1:10.3.13-1, innodb_large_prefix is re-enabled for compatibility. I haven't tested it yet, though.


Everything is working well except that the scan doesn't produce folders or files that are visible to the Nextcloud client. The output is the same as above (i.e. the small table), and the files are in the correct folder (on an external local disk), but they just cannot be seen by the Nextcloud client.

If I create a file or folder, via the Nextcloud client, in the folder the files are in, then it is seen both by the client and by cd-ing into the folder on the server.

I assume this is a permission issue? If so, what permissions should I assign?

I think it would be a case of reassigning the correct permissions:
sudo chown -R www-data:www-data /path/to/nextcloud/data/
In pure layman’s terms, the process in total seems to be: move the files, change their ownership, run the php console.
At least, that’s what worked for me!

Hey, I know this is an old thread, but I'm having an issue. I have migrated successfully, however forcing Nextcloud to search for the files is having a mental breakdown.

This is what I'm getting in my terminal, and I have no idea what to do, as the folder it states is missing is actually there.

Command:

    root@freenas:/mnt/NAS/iocage/jails/Nextcloud/root/usr/local/www/nextcloud # php console.php files:scan data

Error:

    App directory "/usr/local/www/nextcloud/apps" not found! Please put the Nextcloud apps folder in the Nextcloud folder or the folder above. You can also configure the location in the config.php file.

I'm running on FreeNAS; the previous parts of the tutorial worked like a charm, it's just this part I'm having issues with.
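A guess, judging by your prompt: you're running console.php from the FreeNAS host, where the jail's filesystem is mounted under /mnt/NAS/iocage/jails/Nextcloud/root, so the absolute path in the error (/usr/local/www/nextcloud/apps) only resolves inside the jail. Running the scan from within the jail should help; a sketch (the jail name is taken from your prompt, and the www user is an assumption from a stock FreeBSD setup):

```shell
# Enter the jail so /usr/local/www/nextcloud resolves as configured
iocage console Nextcloud
cd /usr/local/www/nextcloud
su -m www -c 'php console.php files:scan --all'   # FreeBSD's web user is "www", not www-data
```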

Great post, helps tons. I am in the middle of switching drives, but I have a slightly different setup due to NextcloudPi, I believe. I will attach a screenshot; not sure what to do with the linked directories.
Thanks in advance.


Appreciate this thread is a little old now, but just chiming in to say it was helpful, though my Docker container didn't have sudo installed. The trick was to exec into the container as the www-data user:

docker exec -it --user www-data nextcloud-app /bin/bash

then I could run the php console.php files:scan --all command and it scanned the files, no sudo needed.

It should be:

sudo -u www-data php occ files:scan --all

I am doing it right now :slight_smile:

Thanks for clear tutorial. Do you know how to do this with snap version of Nextcloud?
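Not the tutorial author, but for what it's worth: the Nextcloud snap ships occ as its own wrapped command, so the scan step becomes (run as root; no www-data user involved):

```shell
# The snap packages occ as the nextcloud.occ command
sudo nextcloud.occ files:scan --all
```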

Definitely a valuable tutorial! Thanks

It made my day, as I moved my Synology Drive data to Nextcloud with ease :slight_smile:
I just skipped the external drive and copied directly into the folder.
Don't forget to change ownership of the new folder to http:http for Synology users:

chown -R http:http <new-folder>

Hi, this is my first post here.

I followed the tutorial to the letter but cannot get Nextcloud to recognise all the files at /media/win. I can see all the millions of files in the terminal but only get 1678 files scanned when running the files:scan --all command.

I only get this:
| Folders | Files | New | Updated | Removed | Errors | Elapsed time |
| 1826 | 1678 | 0 | 0 | 0 | 0 | 00:00:09 |

Does anyone have any inputs as to why this happens? Cheers.

EDIT: I installed mine on Ubuntu Server 22.04 with LVM. The LVM setup may actually be alienating the /media/win folder where everything was downloaded to.


Hi everyone,

I believe the LVM setup doesn't allow any mass migration through rsync, so what I did was just sync the server files through the Nextcloud app. It's all good now.


Elefo TM

EDIT: rsync does work for transferring large amounts of folders/files to /media/win even with the LVM setup; the problem is getting Nextcloud to recognise the added files through sudo -u www-data php console.php files:scan --all and derivative syntaxes.


Please excuse my naivety, but I cannot deduce why LVM would impose any restriction on an applicable use of the rsync command. Would you like to enlighten us?

Welcome and nice to learn of your insights.



Apologies, I should have clarified: the rsync command worked flawlessly, as usual. It's the

sudo -u www-data php console.php files:scan --all

and derivatives which cannot read anything I have transferred under


I tried different syntaxes as well, but to no avail. It's not a problem with non-LVM setups, though. No matter the syntax, I can't get Nextcloud to rescan the added files. It's probably just one thing to note with LVM setups. Cheers.
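In case it helps a future reader: when the data sits on a separate mount, scanning an explicit path sometimes surfaces errors that --all silently swallows. A sketch, with a placeholder user and folder name (the path argument is relative to the data directory and must take the form /<user>/files/<subfolder>):

```shell
# Scan one user's subtree explicitly instead of everything
sudo -u www-data php /var/www/nextcloud/occ files:scan --path="/alice/files/win"
```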