SOLVED: Found out ZFS doesn't work well on top of hardware RAID

How can I move my data to another VirtualBox VDI formatted with something more RAID-friendly, like ext4? I have almost 1 TB to move.

Not sure how (or where) to mount a new VDI, or how best to migrate the data. Any and all help is appreciated. I'm setting this up for friends and family.

SOLVED

I basically followed the directions in these two links, except that since the VM runs PostgreSQL I skipped the database part and just did an occ files:scan.

Basic steps taken:

  1. I had been doing an rsync backup to a second array that is attached to the VM as a shared folder. When I do that, I put Nextcloud in maintenance mode.

  2. Shut down the VM and create another VDI of the correct size. I like to then boot these up on a simple VM with GParted installed; it could also be done with a live CD. I boot up and use GParted to make a single ext4 partition spanning the whole disk. Shut down.

  3. Connect this drive to the cloud VM. Boot up, go into maintenance mode, and rsync the files to the other array (or an external HDD); you really don't want your backup on the same disk/array as the data if it's large. Since I had previous backups, mine went quickly.

  4. Mount the new VDI you formatted in step 2, then rsync the data from your backup to it. INCLUDE hidden files or you will miss .ocdata and .htaccess in the data root. Mine took about an hour and a half to move 1 TB.

  5. sudo chown -R www-data:www-data /new/path/to/data

  6. nano /path/to/nextcloud/config/config.php
    'datadirectory' => '/new/path/to/data',

  7. Turn maintenance mode off: sudo -u www-data php /path/to/nextcloud/occ maintenance:mode --off

  8. This is where I had to skip the DB process in link one and use occ files:scan from link two: sudo -u www-data php console.php files:scan --all

  9. Let that run. I only had two users set up, so mine didn't take long; plus most of my files are large videos.

  10. Reboot and check everything. So far mine is working fine. I’ll update the thread if I see anything weird pop up.

I found I also had to stop ZFS from trying to mount ncdata at boot. Disabling the ZFS services alone did not work; deleting the ncdata file in the ZFS mount cache did the trick:

/etc/zfs/zfs-list.cache/ncdata
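For context: the files in /etc/zfs/zfs-list.cache/ are read by zfs-mount-generator, which creates systemd mount units at boot, so a stale pool file there keeps regenerating the old mount. The sketch below demonstrates the fix against a throwaway stand-in directory so it can run anywhere; on the real system it's a single sudo rm.

```shell
CACHE=$(mktemp -d)     # stands in for /etc/zfs/zfs-list.cache
touch "$CACHE/ncdata"  # the stale cache entry for the old pool
rm "$CACHE/ncdata"     # real system: sudo rm /etc/zfs/zfs-list.cache/ncdata
ls -A "$CACHE"         # now empty, so no mount unit gets generated
```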

Actually, IMHO it's better to use the built-in functions in ZFS to create a "RAID". You then present two separate disks to the system and create a mirrored pool:
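Something like this, assuming ZFS is installed and two blank disks are available (the pool name "tank" and the device names are placeholders, not from this thread):

```shell
# Two-disk mirror — ZFS's equivalent of RAID 1. ZFS sees the raw disks
# directly, which is what it wants, instead of sitting on a hardware array.
sudo zpool create tank mirror /dev/sdb /dev/sdc
zpool status tank   # both disks should show ONLINE
```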

Glad you solved it anyway! :slight_smile:

Thank you. My issue, though, was that I have an 8-disk hardware RAID 50 array and I was having performance and other problems. I traced that back to running ZFS on top of hardware RAID. My controller won't operate in HBA mode, which is what ZFS likes to see. I've read up on ZFS and it looks really neat, but it's not the best solution for the hardware I have. Since migrating to an ext4 setup I've had flawless performance from the VM: I'm running at the full speed of the array when moving files or backing up, and the user experience is better, as I'm even able to stream audio and video from it. I went the hardware RAID route because, one, the hardware was free (a decommissioned server from work), and two, I am very familiar with hardware RAID, having used it since the UW SCSI days. Given that I now have over 20 years of family photos and videos in my cloud, I feel more comfortable with a setup I'm used to than learning ZFS best practices and usage. I'll save that for a home fileserver or other test bed.


To add more disks to a ZFS pool, you simply list them in the create command.
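For example (device names are hypothetical placeholders): stripe across two mirrored pairs, use a parity vdev, or grow an existing pool later with zpool add.

```shell
# Two mirrored pairs striped together — roughly RAID 10:
sudo zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
# Or single-parity raidz across four disks — roughly RAID 5:
#   sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
# An existing pool can also be extended with another vdev:
sudo zpool add tank mirror /dev/sdf /dev/sdg
```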

Here’s another random blog: