[NC19] How to move user data without hiccups


I have installed NC with the help of the VM file (sized at 1TB) created by Daniel a while ago.

Since then I have changed a couple of things: I have upgraded PHP to a recent version and added an extra 1 TB disk. The PHP update didn’t go without problems, but with the help of @eehmke it worked out fine :slight_smile:

This time I would like to move the user data, because I am not 100% happy with how this works at the moment.

Current situation:
My Nextcloud runs as a VM on an ESXi 7 server (a cluster of 2 with shared storage). It is running on 8x 1.92 TB Micron enterprise SSDs at 10 Gbit.

Because of the VM file bought from Daniel I had 2 virtual hard disk files; later I tried to connect a third. These disks are in a pool, which is how Daniel designed it. The problem with this is that I cannot remove the extra 1 TB disk I added myself. This extra storage never worked (the extra space never showed up in NC, even after I made it a member of the pool), but because of the pool I cannot simply remove the disk, as far as I know. After adding the third disk it turned out I didn’t need the extra space, so I left it the way it was (with the third disk).

I would like to simply move all the user data to a new virtual hard disk (thin provisioned).
But when I look at existing topics about moving data, they are often full of issues and problems. Since my PHP upgrade topic worked out great (again mostly thanks to eehmke :slight_smile: ) I was hoping to approach this move in a similar way.

So what is the best way to do this, step by step? @eehmke, it would be much appreciated if you could help out again.

I have learned quite a lot from the PHP upgrade topic; I even upgraded two more Nextcloud installations just for fun, to see if I could do it :stuck_out_tongue:

How did you add it? Is this LVM, or RAID, or something else?

@gas85, thanks for your quick reply.

I am a pretty big novice when it comes to Linux. It is a virtual hard disk located on my shared storage. If this were Windows I would say “just a normal SATA drive formatted as NTFS”. So I guess LVM?

Not a problem, do you have shell access? Could you please execute

lsblk -f

@gas85,

Yes I have, and yes I can do that :slight_smile:
See attachment:

It seems you have a mix of LVM and ZFS pools. To check your LVM configuration you have to run a few commands:

pvdisplay
vgdisplay
lvdisplay
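
If the full display output is too verbose, the short-form variants print a compact one-line summary per object (standard LVM tools, mentioned here just as an aside):

sudo pvs
sudo vgs
sudo lvs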

Read more about it e.g. here: https://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/

For ZFS you can read e.g. here: https://www.techrepublic.com/article/how-to-manage-zfs-pools-in-ubuntu-19-10/ I think the first thing you need to do is check its status:

zpool status
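
The output has roughly this shape (values are illustrative only; the pool name here is the one that comes up later in this thread):

  pool: ncdata
 state: ONLINE
config:
        NAME        STATE
        ncdata      ONLINE
          sdb       ONLINE
          sdc       ONLINE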

What kind of OS do you have there?

@gas85,

I will have a look at the websites. I kind of know what ZFS is (compared to RAID), but not in detail, and I certainly do not know how to handle my current question :rofl: :rofl: :rofl:

I am running Ubuntu 18.04 LTS, PHP 7.4.8, and
Type: pgsql
Version: PostgreSQL 10.12 (Ubuntu 10.12-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit

I have added the output of the commands.
[screenshot: output of pvdisplay, vgdisplay and lvdisplay]

So, where is the 1 TB HDD that you would like to remove? You can check them e.g. via (askubuntu):

lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

That would show the following:

NAME   FSTYPE   SIZE MOUNTPOINT LABEL
sda           111.8G            
├─sda1 swap     121M [SWAP]     
└─sda2 ext4   111.7G /          
sdb             2.7T            
└─sdb1 ext4     2.7T            xtreme
sdc             3.7T            
└─sdc1 ext4     3.7T            titan

@gas85,

The HDD I want to remove is 1.5 TB (not 1 TB), so it is easy to see which one it is:

[screenshot: lsblk output on root@cloud showing the 1.5 TB disk]

@gas85,

Since I would like to learn as much as possible from the topics I open, could you also tell me why the 1.5 TB isn’t usable within NC? Like I said, it is not that I need it, but I don’t get why it is not available as user space.

If not, no problem; the first priority is still getting rid of the 1.5 TB and probably the ZFS pool. Without the pool I should be able to shrink/enlarge the disk, because it is a single disk instead of part of a pool (and of course a virtual disk), right?

It is very hard to read from those screenshots :slight_smile:

I’m not familiar with ZFS, but it seems you created a ZFS mirror pool from the sdb and sdc HDDs. It should be similar to RAID 1: in this case writes go to both disks in parallel to avoid data loss in case one of the HDDs dies. BUT you can also only use the capacity of the smallest HDD, in your case 1 TB.
If you want to expand your sdb with sdc, you need to rebuild your pool from mirrored to a simple pool; then you will be able to use 1 + 1.5 TB.

@gas85,

I am typing on my phone, so I am able to zoom in on the picture, but I will upload new ones.

As far as I know a ZFS pool is not like RAID 1; it is more like RAID 0. So removing one will break the pool (/array) then, right?

There are at least 2 types of them: one is like RAID 0 and is the simple ZFS striped pool; another is like RAID 1 and is called a mirror pool (the different commands to create them are listed here: Setup a ZFS storage pool | Ubuntu).
Basically, as per this article (How to create a ZFS mirror pool | TechRepublic), it looks more like you have a mirrored one, but I have no idea how to check it.
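
For illustration, the creation commands from those tutorials differ only in the mirror keyword (pool name and devices here are just examples):

sudo zpool create tank /dev/sdb /dev/sdc          # striped pool: capacities add up, no redundancy
sudo zpool create tank mirror /dev/sdb /dev/sdc   # mirror pool: size of the smallest disk, survives one disk failure

As far as I know, zpool status also shows the difference: in a mirror the disks are listed under a mirror-0 vdev, while in a striped pool they sit directly under the pool name.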

Also, it is mentioned here that in the case of a striped pool:

Once you add a virtual device to a pool, ZFS starts using it, so you can never remove it. You’ll need to back up your data and recreate the pool.

Theoretically, if you have a mirrored pool, you should be able to remove e.g. sdc from it and create another pool with only sdc, then move the data to sdc and destroy the old pool. After this, add sdb to the new pool.
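
A rough sketch of those steps, assuming the pool really is mirrored, is named ncdata, and is mounted at /ncdata (all of that should be verified first):

sudo zpool detach ncdata /dev/sdc       # drop sdc out of the mirror
sudo zpool create ncdata2 /dev/sdc      # new single-disk pool on sdc (may need -f), mounts at /ncdata2
sudo rsync -a /ncdata/ /ncdata2/        # copy the data over
sudo zpool destroy ncdata               # remove the old pool
sudo zpool add ncdata2 /dev/sdb         # stripe sdb into the new pool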

@gas85,

Sorry, I didn’t know there are more pool options.
What I have done is this:

Does this maybe tell you a bit more?

According to your docs it should be striped. If you run

zpool list

you should see a bit more info and overall capacity should be around 2.5 TB. Is it?

In any case, according to the docs there is no way to extract an HDD from the pool without destroying it.

@gas85,

I was afraid of that. Do you know how to move the data to a newly created single disk? Could you walk me through the process step by step? Many thanks in advance!

Second question: since the new disk will of course also be a virtual disk, will I be able to shrink/enlarge it? Is it sort of the same as in Windows, where you can just adjust a partition size?

As I wrote, I’m not familiar with ZFS. According to the docs, simply back up the data, move it somewhere else, and:

sudo zpool destroy ncdata

will destroy the old pool (you may need -f to force).

sudo zpool export ncdata

will disconnect the pool.
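
The backup/move step before the destroy could look like this (just a sketch; it assumes the pool is mounted at /ncdata and a disk big enough for the data is mounted at /mnt/backup):

sudo rsync -a /ncdata/ /mnt/backup/    # preserve permissions and timestamps while copying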

Yes, but I think not in a constellation like the one you have. In your current case you can only expand the “last” drive, i.e. the one at the end of the FS.

Why do you have 2 drives? You can simply replace them with 1 virtual disk. Personally I use LVM more (which you already have) and would then add the physical drive to it (e.g. a new sdd):

pvcreate /dev/sdd

then create a Volume Group, e.g. nextloud-data (you can use the existing one, nextcloud-vg, but you can also keep them separate)

vgcreate nextloud-data /dev/sdd

Then simply create a new Logical Volume:

lvcreate -n lv-nextcloud-data -L1T nextloud-data

This will create lv-nextcloud-data with a size of 1 TB (-L1T) on the nextloud-data Volume Group.

Then simply make a FS there:

mkfs.ext4 /dev/nextloud-data/lv-nextcloud-data #Or any FS that you like

Mount it somewhere

mount /dev/mapper/nextloud-data-lv-nextcloud-data /mnt/nextcloud-data/

and move the data. Then destroy the old disk.
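
As a side note, to make that mount survive a reboot you would add it to /etc/fstab, something like this (same device and mount point as above):

/dev/mapper/nextloud-data-lv-nextcloud-data  /mnt/nextcloud-data  ext4  defaults  0  2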

You can expand your virtual HDD and run lvextend to use the extra space on it.
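
For example (a sketch; the device name and size are placeholders):

sudo pvresize /dev/sdd                                            # let LVM see the enlarged virtual disk
sudo lvextend -r -L +500G /dev/nextloud-data/lv-nextcloud-data    # grow the LV and resize ext4 (-r) in one step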

@gas85,

I will have to add another virtual disk first, otherwise where can I move the data to, right?
Can I use the same name “nc-data”?

I will try to find out how to move the data first.

Two drives: one is for Nextcloud files (config) and the other one is for user data. This is how the creator of the ESX image file I used made it, as far as I know.

Exactly. Here is an example of my config with RAID 1 and RAID 0 for different purposes; LVM is also used there: Optimal hard disk setup on Ubuntu 2 x SSD (Raid1) + 2 x HDDs (Raid1)

@gas85,

I have cloned my Nextcloud installation and changed the hostname and IP. I deleted the biggest users, so there is less data to move during testing.

I have added a new (4th) 600 GB (virtual) hard disk.
I am going to try to figure out exactly how to move the data. After moving the data, shouldn’t I also change the references in the config files, which would otherwise point to the old location of the data?
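
My working plan for that part, pieced together so far (a sketch only; it assumes Nextcloud lives in /var/www/nextcloud, the old pool is mounted at /ncdata, and the new disk at /mnt/nextcloud-data, all of which may differ on my VM):

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on    # stop clients writing during the move
sudo rsync -a /ncdata/ /mnt/nextcloud-data/                          # copy the user data
# then update 'datadirectory' in /var/www/nextcloud/config/config.php to /mnt/nextcloud-data
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off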