HowTo: Change / Move data directory after installation

Hello

I am running a Raspberry Pi 4 with a USB disk which I want to replace. The disk holds only the data, not the database. I am running the newest version of NextcloudPi.
So what I did:

    rsync -rav /old-disk /new-disk
    service apache2 stop
    edited config.php 'datadirectory'
    unplug old disk    
    sudo -u www-data php /var/www/nextcloud/occ files:scan --all
    service apache2 start

Now I have a big problem:

  • If I upload a small text file, everything is OK.
  • But if I try to upload a bigger file, like a 1 MB picture, I get an error. All bigger uploads are broken: from the mobile app, from the desktop client, from the browser.
  • If I plug in the old disk, everything (upload of bigger files) works again.
  • The error from the server log is very cryptic and very long. The client says “connection closed”.

I did a file search and found the old paths in these files:
/etc/php/7.3/fpm/conf.d/10-opcache.ini
/etc/php/7.3/fpm/php.ini
/etc/php/7.3/mods-available/opcache.ini
/etc/php/7.3/cli/conf.d/10-opcache.ini
/etc/php/7.3/cli/php.ini
/etc/fail2ban/jail.local
/etc/fail2ban/jail.conf
I replaced the path in these files and restarted Apache, but no change. The old disk is still needed.

Does somebody have a tip?

Thanks very much
Gilbert

EDIT:
I solved the problem by myself, like always with Nextcloud and this forum. If somebody has a similar question/problem about how to replace the USB data disk on a Raspberry Pi, this is what I did:

date; rsync -rav /media/myCloudDrive/ncdata/ /media/myCloudDriveNew/ncdata/; date;
service apache2 stop
sudo btrfs filesystem label '/media/myCloudDrive/' 'myCloudDriveOld'
sudo reboot
service apache2 stop
sudo btrfs filesystem label '/media/myCloudDriveNew/' 'myCloudDrive'
sudo reboot
unplug the old, small USB disk

My disks are formatted with Btrfs. If you use another file system, you have to look up the command for renaming the label.
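
For example, on ext4 the label can be renamed with e2label; the device name below is just an example, check yours with lsblk -f or blkid first:

sudo e2label /dev/sda1 myCloudDrive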

Good luck!

Thanks for this guide - very helpful. For me, a symlink didn’t work no matter what I tried, on Ubuntu Server 20.04 LTS (Raspberry Pi version) and Nextcloud 20.0.5.

What did work was mount --bind, from this post: [Solved] Nextcloud, change Data folder location / Newbie Corner / Arch Linux Forums
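
For anyone else trying it, a minimal sketch of the bind mount approach (both paths are assumed examples):

# Bind-mount the real data location onto the path Nextcloud expects
sudo mount --bind /mnt/external/ncdata /var/www/nextcloud/data

To make it survive reboots, add a matching line to /etc/fstab, e.g. /mnt/external/ncdata /var/www/nextcloud/data none bind 0 0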

Took a long time to finally get there but I did.

Thanks for the feedback. Probably the file system does not support symlinks, like the FAT family or NTFS (without the ntfs-3g driver)? The bind mount is a good alternative in such a case, but it means that an additional fstab entry or mount call is required, while a symlink works on its own.

Interesting… The target is on an ext4-formatted LUKS drive, while the original/default place is on an ext4-formatted SSD on a Raspberry Pi. I honestly don’t know what was wrong. Yes, it will be an additional command each time I reboot. :expressionless:

Strange indeed then. Does the symlink creation produce any error and does it show up correctly in the file system? E.g.:

ln -s /mnt/external/ncdata /var/www/nextcloud/testlink
ls -l /var/www/nextcloud/testlink

If you use Apache2, is Options +FollowSymLinks allowed for the Nextcloud web dir? Although if not, rewrites would be broken as well.
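
For reference, a minimal sketch of how that could look in the Nextcloud vhost (the path and surrounding block are assumptions, your config may differ):

<Directory /var/www/nextcloud/>
    Options +FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>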

Just my two cents: I was struggling with the error message about the .ocdata file not being found as well.
Based on this topic and the comment by @nachoparker about checking the output of ls -la for the /media directory, I found my problem. The owner and group settings of the directory were blocking access for the www user.
So in my case, /media/ needed additional access rights, so the www user can access this directory and the subfolder where the Nextcloud data folder is placed.
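
In case it helps others, this is roughly what I checked and fixed; the paths and the www-data user name are examples from my Debian-based setup, yours may differ:

# Check who may traverse the parent directories of the data folder
ls -la /media /media/myCloudDrive
# Grant traverse rights to others, so the web server user can pass through
sudo chmod o+x /media /media/myCloudDrive
# And make the web server user the owner of the data directory itself
sudo chown -R www-data:www-data /media/myCloudDrive/ncdata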

I don’t understand steps 6 and 7:
6. mysqldump -u<dbuser> -p<dbpassword> <nextclouddb> > /path/to/dbdump/dump.sql
7. Adjust the "oc_storages" database table to reflect the new data folder location:
dbuser=$(awk -F"'" '/dbuser/{print $4;exit}' /path/to/nextcloud/config/config.php)
dbpassword=$(awk -F"'" '/dbpassword/{print $4;exit}' /path/to/nextcloud/config/config.php)
mysql -u$dbuser -p$dbpassword
-- Inside the MySQL console:
use <nextclouddb>;
update oc_storages set id='local::/new/path/to/data/' where id='local::/path/to/data/'; -- mind the trailing slash at the end of the path!
quit;
# Again outside the MySQL console:
unset dbuser dbpassword

Please help, it is hard for me to follow (newbie). I don’t know the path /path/to/dbdump/dump.sql.

You can use any path you think is good for a backup. When unsure, use your current user’s home directory:

mysqldump -uroot -p <nextclouddb> > ~/dump.sql

Step 7 is mostly copy&paste. Only replace <nextclouddb> with the database name; usually it’s simply nextcloud. Ah, this can be found in config.php as well, alongside the database user and password. And replace the old and new data paths, of course.
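
So a filled-in sketch of step 7, assuming the database name is nextcloud and using example old/new data paths (adjust all of them to your setup):

dbuser=$(awk -F"'" '/dbuser/{print $4;exit}' /var/www/nextcloud/config/config.php)
dbpassword=$(awk -F"'" '/dbpassword/{print $4;exit}' /var/www/nextcloud/config/config.php)
mysql -u$dbuser -p$dbpassword <<'SQL'
use nextcloud;
update oc_storages set id='local::/mnt/newdisk/ncdata/' where id='local::/mnt/olddisk/ncdata/';
SQL
unset dbuser dbpassword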

Is this topic still relevant?
I’m on Nextcloud 21 and found this topic after already moving the data directory by simply moving it and editing the path in config.php. It seems to work fine (not sure about shares, I don’t think I had any active ones). Also, file paths in the oc_filecache SQL table seem to be relative to something (files/…). In fact, searching the database dump for the old location gets me only one entry, which is the one in oc_storages. Besides that, the latest documentation for “Migrating to a different server” states “If you change any paths, make sure to adapt the paths in the Nextcloud config.php file”.
https://docs.nextcloud.com/server/latest/admin_manual/maintenance/migrating.html
Therefore I suppose it has changed at some point and moving the data directory is supported now? Or is that a wrong conclusion?
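
For reference, this is roughly how I checked; the dump file location and the database name nextcloud are just my examples:

# Count occurrences of the old absolute path in the dump (example path)
grep -c '/old/path/to/data' ~/dump.sql
# The single hit corresponds to this entry:
mysql -uroot -p -e "select id from nextcloud.oc_storages where id like 'local::%';"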

Please see the discussion above: it indeed works without changing the database, but a NC engineer advised not to rely on that: HowTo: Change / Move data directory after installation - #15 by MichaIng

Sad to see that the migration docs are still inconsistent with this. If someone finds time, it would be good to send a PR updating that page to include updating the database storage entry. The bonus of this is that we may get some more NC devs discussing the question :wink: .


Does anybody know if I can change the database after moving the data directory?
Last week I changed the data directory and everything seemed to be fine, but afterwards I realized that the shares are not working anymore…
My other solution would be to write a script that goes into the database and changes the new fileids to the old ones… but I don’t know if this would work…

Strange that (only?) the shares were lost. I wonder if some metadata is associated with the storage index as well. So Nextcloud adds the required new storage to the database by itself, but it is a new row, while the old one with the related index remains (invalid).

Did you make any changes to the files/shares etc. since you moved the data? Because when we change the storages table now, old shares may be recovered but new shares may be lost. The new storages entry would need to be removed to avoid a duplicate. The following query will give an overview:

select * from nextcloud.oc_storages;

Yes, I think the metadata is affected as well…

Files have been changed since then… but we have not created any new shares… and if anyone created new shares, it is not a problem if they are lost afterwards…

In oc_storages I have the old storage at the top, /nextclouddata, and three new storages at the bottom: /mnt/nextclouddata/nextclouddata & /mnt/nextclouddata/nextcouddata (typo…) & /mnt/nextclouddata

Our new data directory is /mnt/nextclouddata/nextclouddata

Okay (after enabling maintenance mode, of course!): first you need to remove the new row to avoid a duplicate, then replace the old entry:

delete from nextcloud.oc_storages where id='local::/mnt/nextclouddata/nextclouddata/';
update nextcloud.oc_storages set id='local::/mnt/nextclouddata/nextclouddata/' where id='local::/nextclouddata/';

And of course you can remove the additional two entries, if those are not used (anymore):

delete from nextcloud.oc_storages where id='local::/mnt/nextclouddata/nextcouddata/';
delete from nextcloud.oc_storages where id='local::/mnt/nextclouddata/';

Btw, every query should produce an output indicating that 1 row was affected, so you know it worked as expected.
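
For completeness, a sketch of the maintenance mode toggle around those queries (the Nextcloud install path is an assumption):

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
# ... run the delete/update queries above ...
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off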

Let’s see if that helps. It would then be a strong argument to fix the official migration documentation, or otherwise to review the code as to whether shares should be valid for a specific storage entry only or not.


Maybe I just don’t get it and this might be some newbie issue, so please correct me. But there are tons of guides on how to move the data directory after installation, and it is always mentioned that it would be a great idea to choose a suitable location during installation… But there is no reachable information about how to configure a non-default data directory during installation… I think this is the largest issue in the Nextcloud documentation.

It is one of the things you can select when you connect to the web interface for the first time, where you choose the admin account name, enter the database credentials etc. It should be hard to overlook, actually :grinning_face_with_smiling_eyes: .


Looking through the official admin guide, it doesn’t take ages to find the screenshot I referenced in this post.

Depending on your (automated) installation method you might not hit this screen - but I bet all of the (good) installation guides point you to the setting where you can change the data directory…


Thank you @MichaIng for this thread, it’s incredibly helpful.

I am currently trying to implement Solution 1, but the copying is taking forever and I wanted to ask if that’s normal.

I run NC21 in an Ubuntu Linux VM (VMware Fusion) on a powerful Mac Pro. I am trying to move my nextcloud-data directory from /var/www/nextcloud-data, which is on the limited HDD space of that VM, to a newly created share on my Synology NAS in my LAN, which I mounted at /mnt/nextcloud-data. I also edited /etc/fstab for automount after reboot.

The problem I am seeing is that when I get to step 2, cp -a /var/www/nextcloud-data/. /mnt/nextcloud-data, it takes forever to process; like many, many hours.

So I cancelled the cp process with Ctrl-C and tried copying the files via the Files app in the Ubuntu GUI instead, and noticed it also wants to take an ENORMOUS amount of time to copy and paste the files into the newly mounted folder.

On the command line, after I type the cp -a command, it doesn’t give me any idea of how much longer it will take, whereas in the Files app in Ubuntu I can see how much time is left (many, many hours). Nevertheless, I don’t know how/if I can replicate the cp -a option when copying via the Files app GUI. Is it possible to do that?

Do you know if there’s another, faster method of correctly copying the files over? Maybe tar? I wouldn’t know how to replicate the cp -a option in tar, if that’s even possible.

Any help is greatly appreciated.

Thanks,

Of course it depends on the storage read/write speeds and the amount of data, and when copying to a network drive the limiting factor most likely is the network transfer speed. So yes, I’d say it is pretty much expected. You can test whether it’s faster to copy things onto a physical drive first and from there to the NAS.

Compression most likely does not speed up the whole process, as the compression takes much time, as does the decompression at the target. If you deal with a large amount of data, things will take their time, but luckily you don’t need to keep attending the copy process; just leave it running in the background and check back once in a while.
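
If you want progress output and the ability to resume an interrupted copy, a sketch with rsync, which preserves permissions and ownership like cp -a does (your paths as examples; --info=progress2 needs rsync 3.1 or newer):

sudo rsync -aH --info=progress2 /var/www/nextcloud-data/ /mnt/nextcloud-data/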

That’s the thing that strikes me as really odd. I have Gigabit LAN, and the Mac Pro running the VM is connected via Ethernet cable to the router, which itself is connected via Ethernet cable to the Synology NAS, so I should be maxing out my transfer speed, which should be limited by the Mac Pro’s SSD read speed (fast) and the NAS’s write speed (around 120 MB/s or 960 Mbps). In real life I am able to transfer from the Ubuntu VM to the NAS at around ~45 MB/s.

Instead, the transfer speeds I am seeing are less than 1 Mbps, so in the Kbps range, which is pretty ridiculous.