[NC19] How to move user data without hiccups



Sorry, I didn’t know there are more pool options.
What I have done is this:

Does this maybe tell you a bit more?

According to your docu it should be striped. If you run

zpool list

you should see a bit more info, and the overall capacity should be around 2.5 TB. Is it?

In any case, according to the docu there is no way to extract an HDD from the pool without destroying it.


I was afraid of that. Do you know how to move the data to a newly created single disc? Could you walk me through the process step by step? Many thanks in advance!

Second question: since the new disc will of course also be a virtual disc, will I be able to shrink/enlarge the disc? Is it sort of the same as in Windows, where you can just adjust a partition size?

As I wrote, I am not familiar with ZFS. According to the docu, simply back up your data, move it somewhere else, and:

sudo zpool destroy ncdata

will destroy the old pool (you may need -f to force).

sudo zpool export ncdata

will disconnect the pool.

Yes, but I think not in a constellation like the one you have. In your current case you can only expand the “last” drive, i.e. the one at the end of the FS.

Why do you have 2 drives? You could simply replace them with 1 virtual drive. Personally I use LVM more (which you already have) and then add a physical drive to it (e.g. a new sdd):

pvcreate /dev/sdd

then create a volume group, e.g. nextloud-data (you could use the existing one, nextcloud-vg, but this way you can keep them separate):

vgcreate nextloud-data /dev/sdd

Then simply create a new logical volume:

lvcreate -n lv-nextcloud-data -L1T nextloud-data

This will create lv-nextcloud-data with a size of 1 TB (-L1T) in the nextloud-data volume group.

Then simply make a FS there:

mkfs.ext4 /dev/nextloud-data/lv-nextcloud-data #Or any FS that you like

Mount it somewhere

mount /dev/mapper/nextloud-data-lv-nextcloud-data /mnt/nextcloud-data/

and move the data. Then destroy the old disks.

You can expand your virtual HDD and run lvextend to use the extra space on it.
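A rough sketch of that expand flow, assuming the VG/LV names from the example above, an ext4 filesystem, and that the virtual disk has already been enlarged in the hypervisor (the +100G is just an illustration):

```shell
pvresize /dev/sdd                                   # let LVM see the enlarged disk
lvextend -L +100G nextloud-data/lv-nextcloud-data   # grow the logical volume by 100 GB
resize2fs /dev/nextloud-data/lv-nextcloud-data      # grow the ext4 FS to match
```

lvextend also accepts -r to resize the filesystem in the same step.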


I will have to add another virtual disc first, otherwise where could I move the data to, right?
Can I use the same name “nc-data”?

I will try to find out how to move the data first.

Two drives: one is for the Nextcloud files (config) and the other one is for user data. This is how the creator of the ESX image file which I used set it up, as far as I know.

Exactly. Here is an example of my config with RAID 1 and RAID 0 for different purposes, LVM also being used there: Optimal hard disk setup on Ubuntu 2 x SSD (Raid1) + 2 x HDDs (Raid1)


I have cloned my Nextcloud installation and changed the hostname and IP. I deleted the biggest users, so there is less data to move during testing.

I have added a new (4th) 600GB (virtual) hard disc.
I am going to try to figure out how exactly to move the data. After moving the data, shouldn’t I also change the references in the config files, which would otherwise point to the wrong (old) location of the data?

A perfect post about how to move the data folder is here: HowTo: Change / Move data directory after installation

That is only if you use a new location; alternatively you can stop the server, move the data to the new drive and mount it under the old location. In this case no configuration change is needed.
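One caveat with mounting it manually: for the mount to survive a reboot it also needs an /etc/fstab entry, roughly like this (device path and mount point here are just examples based on the names used in this thread):

```
/dev/vg-next-data/lv-next-data  /mnt/ncdata  ext4  defaults  0  2
```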

If you do not follow the instructions from the link above, you could run into this nice bug: Files amount after moving of data directory is wrong (much bigger)


Thanks again for all the help.
With regards to the bug, i already have a sort of similair problem. Like i told you, i cloned the nextcloud setup and removed all the users with much data. Ieft a couple accounts in total about 55GB. But when i look at the Nextxloud GUI it tells me 500GB is used.

I assumed that the nextcloud config wouldn’t take about 450+ GB. So do you have any idea why my Nextcloud is so big?

With regards to the moving: yes, I would like to mount the new drive under the old location. That was how I thought about it in my head :slight_smile:

But what do you mean by “stop the server”?

Try to rescan the data folder:

When I use

```
sudo -u www-data php occ files:files:scan --all
```

it tells me:

There are no commands defined in the files:files namespace.
Did you mean one of these?

Should I just remove “:files”?

You’re right, I’ll fix this typo.
Docu about the scan you can find here: https://docs.nextcloud.com/server/19/admin_manual/configuration_server/occ_command.html#file-operations
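For reference, the command with the typo removed, as confirmed above (adjust the path to occ for your install):

```shell
sudo -u www-data php occ files:scan --all
```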


Just so you know, I tried to remove the 1.5TB disc on an extra cloned installation, and this indeed broke the pool.

So I deleted this clone and will go back to trying to move the data from the ZFS pool to disc4 on the first clone.

The problem of not showing the correct amount of data used was solved by the commands you gave me. So on to the next problem :wink:


Sorry, I am really a Linux n00b. I have a new disc (sdd). But before I can move anything, I will have to create a partition “sdd1” and format the partition as LVM, right? I have tried this:

From this website:

  • sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
  • sudo parted /dev/sdd
  • mklabel gpt
  • mkpart primary 0 214GB
    When I did this I got “the resulting partition is not properly aligned for best performance, ignore/cancel”. Is this correct?

Is what I did correct?

And I am not sure what to do next and how to do it. I have created a partition now, right? But how do I format it as LVM? See the picture of my new disc + partition.

(screenshot: root@cloud terminal showing the new disc and partition, 2020-09-06)

@gas85, are you on holiday or maybe not receiving the email updates? Of course I am not in a big hurry; I am very grateful for your help so far and certainly don’t want to rush you :slight_smile:

Does somebody else maybe have a suggestion? @eehmke maybe? You helped me out before in a great way!

Sorry, I have a lot to do at work, and a wedding anniversary is also taking a lot of time :slight_smile:

First of all you need to decide what you want to achieve: do you want to expand the size of your HDDs? Or do you want something like RAID 1, where data is mirrored in case one of the HDDs gets broken?

Why do you need sdd? To move the data folder to it? If yes, then it looks a bit too small: according to your screenshot above you have around 1.4 TB of data, and it will not fit into a 200 GB sdd.

Could you please check how you are using your drives now:

df -h

Then check how big your data directory is. Please replace /var/www/nextcloud with your path to Nextcloud:

du -sh $(grep datadirectory /var/www/nextcloud/config/config.php | cut -d "'" -f4)
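As a side note, the pipeline above just pulls the path out of the quoted config line. A self-contained sketch with a made-up config file and path (both are illustrations, not your real values):

```shell
# Write a sample line shaped like the datadirectory entry in Nextcloud's config.php
printf "  'datadirectory' => '/mnt/ncdata',\n" > /tmp/sample-config.php

# Splitting on single quotes, field 4 is the path itself
grep datadirectory /tmp/sample-config.php | cut -d "'" -f4   # prints /mnt/ncdata
```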

Basically LVM looks like this: (diagram: lvm_scheme_full1)

How LVM works, in 4 simple steps:

  1. You initialize any drive or partition for LVM with pvcreate. You do not even need to create a partition; you can simply add the whole drive.
  2. You create a new volume group on the selected partition with vgcreate.
  3. You create a new logical volume in the selected volume group with lvcreate.
  4. You create a FS on the logical volume with e.g. mkfs.ext4, or zfs if you need it.

Now you can move the data to the new volume and add new drives to expand the space. This is the short story… A long one is for example here, but it is in Russian; or here is one in English.


First of all, happy anniversary! I have removed a lot of user data from my test Nextcloud, so it is easier to try/test.

I simply want to add a new (virtual) hard disc (which I think I did, the “sdd”). So I would like to move the data FROM the ZFS pool to sdd. After that, delete the ZFS pool and mount the data on sdd exactly the same as it was (I think that is /mnt/ncdata). According to df -h, the sdd drive should be big enough, right? sdd is 200GB and ncdata is 60GB, right?

(screenshot: df -h output on root@cloud, 2020-09-16)

So now I just type in “pvcreate”? I do not have to change my console to “sdd” first (like in Windows, going from the C: to the D: partition)?

There are a lot of LVM howtos on the internet. Just so as not to leave you without anything, I already posted an example above for reference.

You can either reuse your existing volume group nextcloud-vg (in this case skip the vgcreate step and replace vg-next-data with nextcloud-vg), or add a new one. This example will add a new one.

  1. Please MAKE A BACKUP OF DB AND DATA and other relevant folders BEFORE YOU START.

  2. You need to create an LVM physical volume on the partition:

pvcreate /dev/sdd1
  3. Then you need to create the volume group (e.g. name “vg-next-data”):
vgcreate vg-next-data /dev/sdd1
  4. Create the logical volume that LVM will use (e.g. name “lv-next-data”):
lvcreate -L 200G -n lv-next-data vg-next-data

The -L option sets the size of the logical volume, in this case 200 GB, and the -n option names the volume. vg-next-data is referenced so that the lvcreate command knows which volume group to get the space from.

  5. Format and mount the logical volume:
mkfs.ext4 /dev/vg-next-data/lv-next-data
mkdir /mnt/ncdata_new
mount /dev/vg-next-data/lv-next-data /mnt/ncdata_new
  6. Stop your Nextcloud/webserver and stop your DB.
  7. Move the data from /mnt/ncdata to /mnt/ncdata_new.
  8. Unmount /mnt/ncdata and /mnt/ncdata_new.
  9. Mount your LVM volume at the old location:
mount /dev/vg-next-data/lv-next-data /mnt/ncdata
  10. Ensure that ZFS is not mounted anymore.
  11. Start your server and check that everything is OK and working.
  12. Play with your ZFS.

P.S. After you have finished, you can add your sdb to the LVM as a physical disk:

vgextend vg-next-data /dev/sdb1

and then simply move the data from sdd to sdb:

pvmove /dev/sdd1 /dev/sdb1

and remove sdd from the LVM:

vgreduce vg-next-data /dev/sdd1

No mounts, no issues…


I have been away for a while (some personal business, sick family members :frowning: ), but I am back :slight_smile:
I hope you are still happily married :stuck_out_tongue_closed_eyes:

This is what i did:

pvcreate /dev/sdd1
vgcreate vg-next-data /dev/sdd1
lvcreate -L 190G -n lv-next-data vg-next-data

mkfs.ext4 /dev/vg-next-data/lv-next-data
mkdir /mnt/ncdata_new
mount /dev/vg-next-data/lv-next-data /mnt/ncdata_new

  1. sudo systemctl stop apache2
  2. sudo systemctl stop postgresql
  3. sudo rsync -avP /mnt/ncdata /mnt/ncdata_new/
  4. umount /mnt/ncdata
  5. umount /mnt/ncdata_new
  6. mount /dev/vg-next-data/lv-next-data /mnt/ncdata

This is what lsblk shows now:
(screenshot: lsblk output on root@cloud, 2020-12-17)

So I think I did it :slight_smile: I do not have to do some sort of SAVE ALL, right? Not that after a reboot the mount point points to the previous partition again :stuck_out_tongue:

Is there a way to rename the "vg–next–data-lv–next… ? Just because it looks a bit ugly :stuck_out_tongue:

Do I still have to do the “P.S.” part? I am not sure why I should add sdb to the LVM as a physical disk. And why should I move the data back from sdd to sdb? Isn’t the next thing to do just to delete/remove sdb and sdc? And after that remove the virtual drives?

I am talking about this part:

P.S. After you have finished, you can add your sdb to the LVM as a physical disk:

vgextend vg-next-data /dev/sdb1

and then simply move the data from sdd to sdb:

pvmove /dev/sdd1 /dev/sdb1

and remove sdd from the LVM:

vgreduce vg-next-data /dev/sdd1

No mounts, no issues…

To me it seems a bit like it is done after step 11. Of course I am not saying you are wrong, just curious :slight_smile:

Again, many thanks so far! After these final steps I am going to clone my Nextcloud again and do this process again in one evening. If that goes well, I will do this to my “production” Nextcloud installation. (Of course not without a proper extra backup :slight_smile: )



I have tried to skip the “P.S.” part and reboot the VM, but then I am not able to log in to my user account, and during the reboot I see all kinds of ZFS errors. So apparently I still have to remove any links to the ZFS pool?
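For what it’s worth, the zpool export ncdata mentioned earlier in the thread is the usual way to detach the pool so it is not imported again at boot; a sketch, assuming the pool name from above:

```shell
zpool export ncdata   # detach the pool so it is not auto-imported at boot
zpool list            # ncdata should no longer be listed
```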