Tutorial: NC VM Data Directory NFS Integration


TL;DR: Use the Hansson VM image with the main data directory on an NFS share.

Introduction: I got started with Nextcloud about two years ago (Ubuntu 18/NC 14) when I wanted to part ways with Drive/Dropbox for storing and sharing with friends/family (homelab). I already had an ESXi box serving other duties, so a VM instance seemed like a good fit.

This solution worked well enough but had caveats. Veeam backups kept growing larger and larger: 10 GB of actual storage was turning into 80+ GB backup images. As I understand it, this is a byproduct of using ZFS; changes to the file system still consume storage space even if a file is marked as erased. My existing backup strategy was to create a VM image every 3 months, which is not the best strategy in case of datastore failure. That particular VM was based on Ubuntu 18, and in-place updates to 20.x resulted in some things breaking, so it made sense to back up the NC data and start over.

Hansson has a current VM out based on the most recent NC and Ubuntu releases, with a wonderful installation script. In fact the whole instance is very well thought out and implemented. However, the same caveats as above would apply.

Time to try my hand at a home NAS to simplify backups from various devices. The TrueNAS instance lives on the same ESXi box. I know there's the option to run NC in a sandbox on the NAS, but given NC is exposed to the internet I'm not comfortable with that idea. Instead, NC resides in its own VM on a vSwitch that has no external access; only NC and the firewall software have access to this vSwitch.

NC has an option for external storage. One thought was to keep the data directory in the NC VM but put the actual user content (pics, videos, etc.) on the NAS. Reading up on this, there seem to be issues with keeping the database in sync with file changes. In my case that probably wouldn't be a problem, since the NC interface is the only thing used to interact with that share on the NAS, but it still leaves the headache of keeping the NC data backed up regularly.

Why not just put the whole NC data directory on the NAS? The guide that follows describes how. I'm far from an expert in Ubuntu/Linux; maybe a seasoned novice, if that.

Needed:

  1. Hansson NC VM deployment OVA file: Nextcloud VM – T&M Hansson IT AB
  2. A configured NFS share on the file server. For the sake of consistency I named it .../ncdata and assigned user/group www-data with uid/gid 33/33 (see the sketch below).
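
For reference, a minimal sketch of the NAS-side ownership, assuming shell access to the NAS and the dataset path used in the fstab entry further down (on TrueNAS you can do the same through the dataset permissions UI):

# on the NAS: make sure the export belongs to uid/gid 33 (www-data on the Ubuntu VM)
mkdir -p /mnt/fatcow/ncdata
chown 33:33 /mnt/fatcow/ncdata
chmod 770 /mnt/fatcow/ncdata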

Process:

  1. Import the OVA into ESXi. Once imported, adjust memory/CPU to your preference. Mine is configured with 2 vCPUs and 4 GB RAM.
  2. While still in the VM editor, delete hard disk 2. The NAS mount will be used in its place.

2.5) For testing purposes, generate a snapshot at this point. It will save time redeploying the OVA if something goes wrong.
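
If you'd rather do that from the ESXi shell than the web UI, something along these lines should work (the Vmid 12 is just an example; use whatever getallvms reports for your VM):

vim-cmd vmsvc/getallvms | grep -i nextcloud          # note the Vmid of the NC VM
vim-cmd vmsvc/snapshot.create 12 "pre-nfs" "before NFS datadir changes"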

The following steps are best done using an SSH client. I prefer MobaXterm, but PuTTY will work too.

  3. Power on the VM and allow it to boot, then connect to it using SSH.

  4. Log in as ncadmin/nextcloud. At the first prompt hit Ctrl-C to exit the startup script. Type sudo -i, use the same password, and hit Ctrl-C again. You should now be at a # prompt.

  5. Add the following to the /etc/multipath.conf file. Ref: syslog flooded with multipath errors every 5 seconds · Issue #1847 · nextcloud/vm · GitHub

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^sd[a-z]?[0-9]*"
}

Restart multipathd: service multipathd restart
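
A quick way to confirm the blacklist took effect is to check that the multipath messages stop flooding syslog:

multipath -ll                                  # should no longer list the local sd* devices
tail -f /var/log/syslog | grep -i multipath    # output should now stay quiet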

  6. Remove ZFS (I think this should do it):
systemctl stop zfs.target
systemctl stop zfs-import-cache
systemctl stop zfs-mount
systemctl stop zfs-import.target

systemctl disable zfs.target
systemctl disable zfs-import-cache
systemctl disable zfs-mount
systemctl disable zfs-import.target
rm -r /etc/systemd/system/zfs*
rm /etc/systemd/system/zed.service

apt remove zfsutils-linux

rm /etc/cron.d/zfs-auto-snapshot
rm /etc/cron.d/zfsutils-linux

systemctl daemon-reload
systemctl reset-failed
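
A quick sanity check that ZFS is really gone:

systemctl list-units --all | grep -i zfs   # should return nothing
zfs list                                   # should fail with 'command not found' after the apt remove
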
  7. Install NFS components:

apt install nfs-common
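
Before editing fstab, it's worth confirming the VM can actually see the export ({ip of nas} is a placeholder, same as in the fstab line below):

showmount -e {ip of nas}    # the ncdata export should show up in the list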

  8. Edit /etc/idmapd.conf to reflect your LAN domain name:
    Domain = lan.domain (or whatever yours is called)

  9. Clear the NFS idmapping cache:
    nfsidmap -c

  10. Edit /etc/fstab to reflect the NFS mount:
    {ip of nas}:/mnt/fatcow/ncdata /mnt/ncdata nfs4 defaults 0 0

10.5) Mount the share
mount -a
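
To verify the mount and the uid/gid mapping came through correctly:

df -h /mnt/ncdata     # should show the NAS export rather than a local disk
ls -ldn /mnt/ncdata   # owner/group should come back as 33 33 (www-data)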

  11. On the NAS I assigned the maproot user/group as root/wheel under Sharing > NFS > path. This is needed so the root functions of the install script can adjust permissions on the data folder. Open to suggestions on how to do this differently; a quick check is shown below.
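
A simple way to confirm the maproot mapping is working: as root on the VM, try creating and chowning a file on the mount. If root squash were still in effect, the chown would fail with "Operation not permitted".

touch /mnt/ncdata/maproot-test
chown www-data:www-data /mnt/ncdata/maproot-test
rm /mnt/ncdata/maproot-test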

  12. A .ocdata file needs to exist in the data folder for the install script to work:
    touch /mnt/ncdata/.ocdata

  13. Trigger the installation script by logging out entirely:

exit
exit
  14. Log back in as ncadmin/nextcloud. You'll be prompted for the password again (nextcloud).

  15. Allow the installation script to run. Install whatever modules you want, and make sure to set the timezone.

Assuming everything above completed successfully, the server should now be fully functional with the data directory residing on the NAS. In my case I still needed to transfer data from the old NC instance.
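
A couple of occ commands make a handy sanity check at this point (run from the VM):

sudo -u www-data php /var/www/nextcloud/occ status                           # installed: true, no errors
sudo -u www-data php /var/www/nextcloud/occ config:system:get datadirectory  # should return /mnt/ncdata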

There are quite a few backup/restore methods. I chose @Bernie_O's ncupgrade script (BernieO/ncupgrade: Bash script to perform backups and manual upgrades of a local Nextcloud installation based on the Nextcloud documentation - ncupgrade - Codeberg.org).

The script handles both backup and restore, supports all three database engines (Postgres support fully working in my case), and allows hook scripts to run before the database restore.

I used the following command line to generate a backup on the old instance. Three archive files are created: database, NC www folder (/var/www/nextcloud), and user data folder (/mnt/ncdata).

./ncupgrade /var/www/nextcloud -w apache2 -ob -bd {destination path}

The destination path can be local or a network share. The bulk of my files were in the user data folder, ~7 GB worth; the database and NC archives were relatively small. You will need to either transfer these files to the new instance or mount a path so the archive files are accessible (see the example below).
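
If you don't want to mount a share on both ends, a plain scp from the old instance to the new one works too; the paths and IP below are just examples:

scp -r {destination path} ncadmin@{ip of new vm}:/home/ncadmin/ncbackup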

Command line to use on new deployment after completing steps 1-15 above.

./ncupgrade /var/www/nextcloud/ -w apache2 -rb -bd {source path} -hs ./hook.sh

The hook script is needed to adjust the database and Redis passwords; the database restore will fail otherwise. config.php is also updated to add the IP of the new instance to the trusted_domains array. Recall the new instance has newly generated passwords (from the install script).

My hook.sh script. Credit to @Bernie_O for the trusted_domains code.

#!/usr/bin/env bash

#Restore psql database password from config.php  
dbname="$(getvalue_from_configphp "dbname")"
dbuser="$(getvalue_from_configphp "dbuser")"
dbpassword="$(getvalue_from_configphp "dbpassword")"
printf '%s\n' "+  Restoring database password to '${dbpassword}'"

sudo -u  postgres psql > /dev/null  2>&1  <<END
ALTER ROLE $dbuser WITH PASSWORD '${dbpassword}';
END

# Correct redis password  
REDISPASS="$(getvalue_from_configphp "password")"
REDISCFG="/etc/redis/redis.conf"
REDISPASS2=$(grep "requirepass " $REDISCFG  | awk -F" " '{print $2}')

if [ "$REDISPASS" != "$REDISPASS2" ]; then
        sed -i "s/requirepass .*$/requirepass ${REDISPASS}/" "$REDISCFG"
        printf '%s\n' '' "+  Redis password updated in ${REDISCFG} to ${REDISPASS}"

else
        printf '%s\n' '' "+  Redis password __NOT__ updated,"
        printf '%s\n' '' "+  ${configphp} password is ${REDISPASS},"
        printf '%s\n' '' "+  ${REDISCFG} password is ${REDISPASS2}"
fi

# Flush redis  
redis-cli -a ${REDISPASS2} --no-auth-warning -s /var/run/redis/redis-server.sock -c FLUSHALL  > /dev/null  2>&1
service redis-server restart

#update config.php with new ip
# path to 'occ' in the Nextcloud directory:
occ="/var/www/nextcloud/occ"

# read trusted_domains from config.php into array (array index might not be incremental):
while read -r line; do
  trusted_domains+=(${line})
done <<<"$(sudo -u www-data php "${occ}" config:system:get trusted_domains)"

# replace array trusted_domains with a variable (to delete the array from config.php):
sudo -u www-data php "${occ}" config:system:set trusted_domains

# replace variable again with an array containing the original trusted_domains (array index is now incremental, starting at 0):
for (( i=0; i<${#trusted_domains[@]}; i++ )); do
  sudo -u www-data php "${occ}" config:system:set trusted_domains ${i} --value="${trusted_domains[${i}]}"
done

# add an additional trusted domain (index ${i} now equals the number of elements and doesn't have to be incremented, since we started at 0):
sudo -u www-data php "${occ}" config:system:set trusted_domains ${i} --value="$(hostname -I | awk -F" " '{print $1}')"

Congrats if you've made it this far! The new VM deployment's data folder, containing your old NC data, now resides on the NAS. As mentioned earlier, I'm open to suggestions on improving or optimizing the process. These steps were successfully tested multiple times.


Things are working relatively well post-migration to NFS. On the local gigabit network, upload speeds are around 600-700 Mbps and downloads saturate at 940 Mbps. The VM probably needs tuning for better local upload speeds?
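
One thing that might be worth experimenting with (not tested here) is the NFS mount options in fstab, for example noatime and larger read/write sizes:

{ip of nas}:/mnt/fatcow/ncdata /mnt/ncdata nfs4 rw,noatime,rsize=1048576,wsize=1048576 0 0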

Updating from 20.0.7 to 20.0.8 was slower than pre-NFS, specifically during the backup task. My guess is this is a function of the NFS (TrueNAS/ZFS) pool arrangement: the pool consists of 2 striped mirrors (2 drives per mirror), 4 drives total, with the default 128 KB recordsize.


Changing the recordsize to something smaller would probably help, given most of NC's own files are tiny. However, much of the user data (media, documents, etc.) is much larger. I don't update daily, so current speeds are fully acceptable.
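
For anyone who wants to test it, recordsize can be changed per dataset on the TrueNAS side; it only applies to newly written blocks, and the dataset name below is just an example:

zfs set recordsize=16K fatcow/ncdata
zfs get recordsize fatcow/ncdata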

Since I'm in the same boat but with less Linux experience, I've got a question: why not let TrueNAS present an iSCSI share to an Ubuntu VM and install NC on top of it?

Is iSCSI really needed for NC operation?

What would be the benefits?

It’s been over a year since I implemented the above. Everything is still working well without any issues.

FreeNAS does its daily/weekly/monthly snapshots. In addition, I do periodic images of the NC VM, although that hasn't changed a whole lot.

Thanks for the great feedback and kind words! :heart_eyes:

I have an NFS share mounted via external storage as a local drive.
How do I move the NC data directory from its current /var/www/nextcloud?

I'm reviving this thread since I found the detailed instructions super helpful (big thank you!), except that I encountered an error and found a solution for it. In case someone else is interested in using the Nextcloud VM mounted on an NFS share without ZFS inside the VM (in my case, NFS is supplied by TrueNAS) and runs into the same problem:

Issue: step 12, i.e. creating an empty .ocdata file and then triggering the installation script, led to a non-functional "Apps" section in the Nextcloud web GUI. I repeated the procedure a few times and the problem persisted.

Solution: back up all content of /mnt/ncdata/ after step 4. Then at step 12, instead of just creating .ocdata, copy all the files including the .ocdata file back to the same directory (a rough sketch follows).
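
In shell terms, what's described above comes down to something like this (the backup location is arbitrary):

# after step 4: stash a copy of the stock data folder, hidden files included
mkdir -p /root/ncdata-stock
cp -a /mnt/ncdata/. /root/ncdata-stock/

# at step 12: put everything back on the NFS mount instead of only touching .ocdata
cp -a /root/ncdata-stock/. /mnt/ncdata/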