Installation method with ZFS on Proxmox

Hey,

I’m using Proxmox with a ZFS data tank. It is a RAIDZ1 with 5x HDDs. I want to use the tank for my data in Nextcloud, and therefore I read a little bit. It seems that the best option is to use an LXC and mount the user folder to the ZFS, but…

  1. It seems to be a better idea to use a VM for security reasons. But I can’t find any solution to use ZFS for the storage. Is it possible to use Nextcloud in a VM and ZFS as storage?
  2. If both are possible, is the VM version the better option? I know I wrote it in the first point, but maybe an LXC is better due to compatibility or something else.
  3. Which installation method is good when I want simple maintenance and a secure installation for home use with 4 people? I saw an LXC with a TurnKey installation, but on the other hand a lot of people do not recommend TurnKey. There is also the AIO, but I think this doesn’t work with ZFS. So, TurnKey, snap or clean install?

I’m wondering why there is no recommended installation for Proxmox with ZFS, or did I just not find it?

@n-3 welcome back,

if you ask 5 people what they think is best… you’ll get 5000 recommendations with if’s and when’s :rofl:
go with what you prefer, plan your setup, plan your backups and make sure you know what you’re doing, what you want and your requirements. use ZFS, VFS, BTRFS or EXT4 whatever tickles your fancy.

LXC is great! personally I prefer the LXD management console to Proxmox because of the CLI. check out my system specs for 5+ family users. the cold-standby backup server is overkill, but really cool. my choice is Nextcloud snap. again, that’s personal preference and capabilities, so do your thing!

1 Like

There is no “perfect” care-free setup for any complex application. Maybe you want to take a look at 101: Self-hosting information for beginners.

Many technologies like Snap, AIO, NCP or VMs exist with the aim of helping users overcome the complexity, but there is no free beer - each of these projects takes away complexity in one place and adds complexity in another. AIO, snap and Docker have a high abstraction level, so dependencies like the PHP version are easier to manage than on bare metal, but you need to learn and understand “another” technology, which is more or less of a problem depending on your previous knowledge and your goals. E.g. if you know Docker or snap already - take that road.

IMO you focus too hard on specific hardware. I’m into Docker, so I can only speak for this technology for sure, but I assume it applies to VMs, LXC and snap as well - it is possible to connect host storage located on ZFS as a “bind mount” into the container. I do this in my installation only for data and config (application and config live inside the container, stored on ext4). In the end you should make each decision for a reason - ZFS “itself” doesn’t give you any advantage or disadvantage. If you plan to use specific ZFS features like snapshots or replication, analyze and TEST how they work in the whole application context - e.g. whether snapshots cover the full application (configs and database), or whether there are issues like the database file being captured in a broken state. Each technology should exist for some reason - e.g. to prevent data loss you can run replication or backups. Each has advantages and drawbacks, and you should understand both and use the one which fits your needs. Another recommendation: 101: backup what and why (not how) - it’s not exactly about your question, but I think it helps you find the right path.
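
a minimal sketch of such a bind mount with Docker - the host path and image tag below are just placeholders, not my real setup:

# bind-mount a ZFS-backed host directory into the official nextcloud image
# (/tank/nextcloud-data is an assumed dataset mountpoint - adjust to yours)
docker run -d --name nextcloud \
  -v /tank/nextcloud-data:/var/www/html/data \
  -p 8080:80 \
  nextcloud:stable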

1 Like

Sure, this is possible. A VM with a raw disk stores its disk on a ZVOL.
This is the default when you install Proxmox on ZFS.

But watch out:
There is a huge difference between LXC and VMs when it comes to storage!
LXCs are on datasets. Datasets have a max recordsize setting (default 128k).
VMs are on zvols. Zvols have a static volblocksize (default 16k).

Why does it matter? With your pool geometry, the stripes don’t match up perfectly and your storage efficiency suffers.
You expect 80% usable space like from a traditional RAID. In reality you get about 66%.
The mean part: you won’t see this in the GUI, you will just notice that a 1TB VM disk uses more than 1TB in your pool. And that is ignoring that you also lose out on compression and metadata performance.
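
If you want to check this on your own pool, compare the logical and allocated space of a VM disk - the zvol name below is just an example:

# volblocksize of the zvol backing a VM disk
zfs get volblocksize rpool/data/vm-100-disk-0
# allocated vs. logical space shows the padding overhead
zfs list -o name,volsize,used,logicalused rpool/data/vm-100-disk-0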

If you are interested in the technical details.

IMHO, separating data and VMs into two pools is huge.
My VMs run on fast NVMe mirrors, while data is on RAIDZ2 HDDs.
I don’t know if the Docker AIO supports this (it probably does).
If you want to run it “bare metal” inside a VM, maybe this would work for you.

Since you don’t seem that familiar with Proxmox, I would recommend taking it slowly and not putting real data on it, but doing some testing first. Proxmox, ZFS and Nextcloud at the same time is a lot if you want to do it right.

1 Like

True, true, but in the docs there is a list of possible operating systems and a recommendation for Ubuntu. I would prefer Debian, because I use it for all my servers. But a recommendation helps if you start from scratch.

Thanks for the hint. I’ll read it to get a better overview.

@wwe thanks for your information. I’m using a normal 1TB M.2 SSD for my containers and VMs. The ZFS storage is only for the external storage for Nextcloud. Actually I also have 2 VM drives there, but I can move them to the SSD.

Yeah, I read a lot about it when I set up the system. I want to change the block size to 32k to avoid this. In your link they use 64k, so I have to check if 32k is also good. This also has disadvantages, but I want to use it as a data tank for media in Nextcloud, and therefore it should be OK, right?

Like mine, but I use RAIDZ1. How do you install Nextcloud? Actually I’m thinking about snap or the “bare metal” install inside a VM, but I think snap would be more user friendly for me.

:+1: no mistake there… but read the docs first: Nextcloud snap wiki and don’t be shy to ask if you’re unsure.

Tip! You can also speak German with me…

1 Like

32k would work without padding, so no storage loss there.
Having a bigger volblocksize results in more read/write amplification and potential fragmentation if you have writes smaller than 32k.
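
Keep in mind that volblocksize is fixed at creation time, so it only applies to newly created VM disks. A hedged example - the storage and zvol names are placeholders:

# set the default block size for a Proxmox ZFS storage in /etc/pve/storage.cfg
# zfspool: data_tank
#         pool data_tank
#         blocksize 32k
# or create a zvol by hand with the desired volblocksize
zfs create -V 100G -o volblocksize=32k data_tank/vm-101-disk-0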

I have Nextcloud on a VM on Proxmox that uses NVMe mirrors. The storage part points to a RAIDZ2 TrueNAS. Again, I can’t overstate how much performance you gain, how much pain you avoid, and how much simpler it is to separate block storage from data.
They have different needs, and separating them instead of trying to get a jack of all trades is so much simpler. BTW the same applies to TrueNAS and Proxmox IMHO. TrueNAS is a great NAS and not so great a hypervisor. Proxmox is a great hypervisor and not so great a NAS. Both have different hardware requirements. I like beer and I like wine, but not mixed.

I love ZFS on both of them. Snapshots, ARC, snapshot replication tasks, special vdevs and the software RAID in general are IMHO a godsend, and I don’t even believe in god.

I did the “bare metal” installation with this tutorial I wrote. It is basically the official docs, but in a better order and with some additional information.

1 Like

Just like you, I’m running Proxmox at home. Personally, I prefer using a VM with Ubuntu where I have Nextcloud AIO running. In a second VM, I’ve got NGINX Proxy Manager, since I self-host around 15 different apps, including websites, all in Docker containers.

It really depends on what you want to use your Nextcloud setup for.

Here’s a great reference you might find useful: my own test setup combining Proxmox + Nextcloud AIO + NGINX Proxy Manager. It’s all laid out in detail here: https://help.nextcloud.com/t/testing-large-file-synchronization-with-nextcloud-aio-and-nginx-proxy-june-2025-update/226681?u=vawaver


Here’s what you can find in that thread:

  • A large file sync test (~20 GB) using Nextcloud AIO behind an NGINX proxy on Proxmox, with excellent stability thanks to:

    • XFS filesystem
    • 12 GB RAM + ballooning enabled
  • My recommended setup:

    • Run NGINX Proxy Manager (NPM) in Docker with its nice GUI
    • Forward ports 80 & 443 from your router to the NPM host
    • Let NPM handle Let’s Encrypt SSL certificates automatically

Those are the best-practice elements if you’re planning to host Nextcloud and other services locally and want things to run smoothly.
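
Just for orientation, a minimal way to start NPM in Docker could look like this (container name and volume paths are assumptions, not my exact setup):

# NGINX Proxy Manager with the official jc21 image; port 81 is the admin GUI (keep it LAN-only)
docker run -d --name npm --restart unless-stopped \
  -p 80:80 -p 443:443 -p 81:81 \
  -v ./npm/data:/data \
  -v ./npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest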


TL;DR setup summary:

  1. VM #1: Ubuntu VM in Proxmox, running Nextcloud AIO.
  2. VM #2: Another Ubuntu VM running NGINX Proxy Manager (Docker) + other apps.
  3. Router forwards HTTP/HTTPS to NPM.
  4. NPM routes traffic to your Nextcloud instance (and other apps), with automatic SSL.
1 Like

First, thank you very much for the qualified discussion and the help! All answers are really helpful.

This will be today’s task, along with the “bare metal” tutorial from @saettel.beifuss0

What do you mean by separating block storage from data? I think I will do this, or do you mean something else?

I will create datasets in Proxmox like:

zfs create data_tank/nextcloud
zfs create data_tank/nextcloud/admin
zfs create data_tank/n3

After installing Nextcloud I would link the user data location:

nano /etc/pve/lxc/lxc_id.conf

mp0:/data_tank/nextcloud/admin,mp=/var/www/nextcloud-data/admin/files
mp1:/data_tank/nextcloud/n3,mp=/var/www/nextcloud-data/n3/files
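
Alternatively I could add the same mount points with pct instead of editing the config file by hand (the container ID 101 is just a placeholder):

pct set 101 -mp0 /data_tank/nextcloud/admin,mp=/var/www/nextcloud-data/admin/files
pct set 101 -mp1 /data_tank/nextcloud/n3,mp=/var/www/nextcloud-data/n3/files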

Hm, my thought was that Proxmox is my hypervisor and Nextcloud is like my NAS. There was also the idea to use TrueNAS, but if I remember correctly I read that it is not necessary, because I can use the ZFS pool in my Proxmox. So instead of Nextcloud using TrueNAS, which stores the data on the ZFS pool, Nextcloud stores the data directly in the ZFS pool.
The VM/LXC is not stored in the zpool. The zpool is only for the user data of Nextcloud.

I want to store all the personal data from the family, including a sync of all photos from the mobile phones. Actually I have a lot of RAW images on an extra drive and most documents in the cloud. I want to move them to my storage. The main data will be on the server, with a synced copy on my laptop etc., like a regular cloud.

Interesting. I actually planned to use a VPN on my phone and laptop to access Nextcloud and other services when I’m not at home. As far as I know, I can connect to the VPN automatically when my home WiFi is not available. Another option is to use Cloudflare with a domain, but your option to use NGINX is also very nice. Actually I have a website for my photos and maybe I will get a second one for work. So I could host them on my server.
I have a fiber-to-the-home connection with 1 Gbit up- and download and an OPNsense.
First I will get Nextcloud running and then I will see how to use NGINX etc.

Personally, I don’t have good experiences with LXC containers – sooner or later I always ran into issues, and you’ll find plenty of similar reports here on the forum. That’s why I’ve been using only VMs with the configuration I mentioned above. In this setup, Nextcloud AIO runs stably and without any limitations.

I moved away from Cloudflare a long time ago, mainly because large file uploads didn’t work properly through it. You’ll find a lot of reports about that here as well.
Since you have such a strong internet connection (1 Gbit symmetrical), I would definitely recommend using a public IP address and handling all domains/subdomains via NGINX Proxy Manager. For me, this is the simplest and most reliable solution – SSL certificate management and renewal is fully automated and I don’t need to intervene at all.

I like simple and functional solutions, and NPM has always delivered that for me without unnecessary complications.

Regarding VPN – for many years I used self-hosted Wireguard servers on each Proxmox server (inside a VM). But recently I switched to Netbird, which allows me centralized management and much easier access to all servers. In practice, this has proven to be a better choice.

1 Like

I read a lot today, from the bare metal installation to AIO, snap vs. Docker, LXC vs. VM. For me low maintenance is very important, therefore I will use snap. AIO is also a good package, but with snap I get auto-updates and it is more “install and forget”. The harder decision is LXC or VM. LXC is more flexible and I like my LXC containers. There are users with and without problems. My bigger installations like Home Assistant and OPNsense are in VMs. Maybe an LXC would work, but I think the VM will be less risky.

My OPNsense gets an IPv4 and an IPv6 address, and currently I’m using the IPv4. Maybe I will switch to IPv6 or a mix; I’m not really into this topic yet. My current plan is to set up Nextcloud and then have a closer look at IPv6, NGINX and maybe Netbird - or is it better to set up NGINX first?

Your thinking makes sense – I also prefer VMs for setups that need to run reliably over the long term. An LXC might work, but a VM will always be the less risky choice, especially when you’re planning to use it as the family’s main storage cloud. Personally, I don’t have good experiences with LXC containers – over time I always ran into issues (and you’ll find many similar reports here on the forum). With VMs, my setup has been stable and problem-free.

Regarding Snap vs AIO:

  • Snap looks attractive because of the “install and forget” approach with auto-updates.
  • But just to note: Snap is not maintained directly by the Nextcloud company (if I am not mistaken), but by an external maintainer. This sometimes results in updates or fixes being delayed.
  • You should also be aware that Snap may run into problems if you want to use other services like Talk to the fullest. For example, setting up a High Availability server requires additional external configuration and more advanced knowledge, compared to AIO where this functionality is already included and ready to use out of the box.
  • On the forum you can also find users reporting third party app limitations…
  • Snap updates also don’t always arrive immediately, because it depends on the external maintainer to push them.
  • AIO, on the other hand, is much more robust. Troubleshooting is rare, and overall maintenance is very simple, even though it is a more “heavyweight” package.

I don’t mean to say Snap is “bad” – if your top priority is minimal maintenance and you don’t need advanced features, it can be a valid option. But if you want flexibility and reliability in the long run, AIO inside a VM will give you the better experience.

As for the network setup, I strongly recommend using NGINX Proxy Manager right from the start. It gives you a central point for all domains and subdomains, SSL and routing. Certificate management is fully automated, so you don’t have to worry about renewals.

Nextcloud snap on LXC up and running 24/7 since Sep. 2019 –

  • 4 LTS OS host upgrades without issues,
  • 28 LXC container OS upgrades without issues,
  • +40 Nextcloud-snap auto-updates without issues!

you would never… right? :star_struck:

you forgot to mention the important security features of the snap:

Most importantly snaps are designed to be secure, sandboxed, containerized applications isolated from the underlying system and from other applications.

A prevalent cause of auto-update issues is incompatible third party apps due to snap confinement. Any apps requiring access to executable binaries will fail, since these are not included in the snap. Disabling the misbehaving apps resolves the auto-update issue. Sometimes apps can be re-installed or re-enabled after a successful manual refresh (update)… but the issue may recur during the next auto-update. You have full control to manage if and when your Nextcloud snap is updated via the snapd daemon! See managing updates.
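
for example, assuming your snap is simply named nextcloud and a reasonably recent snapd:

snap refresh --list                        # show pending updates
sudo snap refresh nextcloud                # trigger a manual refresh
sudo snap refresh --hold=72h nextcloud     # postpone auto-updates (snapd 2.58+)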

that’s hearsay… we’re on schedule and in sync with official maintenance

this is largely true. we’re a small friendly volunteer team

agree 100%

1 Like

Oh man, I finally wanted to set up Nextcloud, but then I’ll have to deal with the reverse proxy and IPv6.

@scubamuc how do you add ZFS to your setup? Like I mentioned above, or via NFS?

In general, how do you add ZFS to a Nextcloud VM? NFS using the External Storage app?

sorry ol’ chap… my containers are on ZFS (the LXC default) and I connect the Nextcloud snap to the NAS using local SSHFS, so I’m not much help with NFS :thinking: can’t know everything, right?
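
a rough sketch of such an SSHFS mount on the container host - host name and paths are placeholders, not my actual setup:

sudo apt install sshfs
sudo sshfs -o allow_other,reconnect nas@nas.lan:/tank/nextcloud /mnt/nas-nextcloud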

see my personal notes for LXD/LXC: https://github.com/scubamuc/wiki-md/blob/scubamuc-wiki/LXD-LXC.Docker_in_LXC.md and wiki-md/LXD-LXC.Wiki.md at scubamuc-wiki · scubamuc/wiki-md · GitHub

also in German if required…

the below personal entry might be important for you, but i’m unsure if Proxmox handles Docker containers differently → Proxmox vs. LXD

LXD – Run Docker inside LXC container

Be aware that this setup is basically running a container inside a container. While this has some advantages (i.e. LXC snapshots etc), it requires careful configuration. See https://ubuntu.com/tutorials/how-to-run-docker-inside-lxd-containers. The default volume format for LXC is ZFS and Docker natively uses BTRFS, thus it will be necessary to create a BTRFS volume in LXC for Docker containers. In addition security nesting must be enabled to allow Docker to “run as root” on the LXC host.

ZFS vs. BTRFS

the default volume format for LXC containers is ZFS

⚠️ Docker will not run well with the default zfs file system

Running Docker inside an LXC on a ZFS volume will prohibit persistent data storage. Thus a BTRFS volume is required for persistent Docker storage on LXC.

Create a new btrfs storage pool

lxc storage create DCKRPOOL btrfs

Security nesting

the LXC container hosting a Docker container must have security nesting enabled so that the Docker container can “run as root” on the LXC host.

security.nesting: "true"
security.syscalls.intercept.mknod: "true"
security.syscalls.intercept.setxattr: "true"

these options may be set per container if required:

lxc config set <CONTAINERNAME> security.syscalls.intercept.mknod=true security.syscalls.intercept.setxattr=true


Profiles

The easiest way to do this is to copy the default profile to create a default-docker profile with these options defined and simply assign the profile to LXC containers running Docker. See How to use profiles - LXD documentation

copy profile:

lxc profile copy 'default' 'default-docker'

edit profile:

lxc profile edit 'default-docker'

profile example

name: default-docker
description: Default Docker profile
config:
  boot.autostart: "true"
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: DCKRPOOL
    type: disk

assign/apply profile to instance

lxc profile add <instance_name> 'default-docker'

delete profile from instance

lxc profile remove <instance_name> 'default-docker'

Issue upgrading LXD host to 24.04 breaks LXC with Docker

due to some SOLVED AppArmor issues in 24.04, Docker may not start inside LXC. As a workaround, remove the file /etc/apparmor.d/runc in the container and on the host.

sudo rm /etc/apparmor.d/runc

finally reinstall apparmor

sudo apt install --reinstall apparmor

restart the container

2 Likes

Proxmox doesn’t “handle” Docker containers, and as far as I know LXD doesn’t either. Both are management tools for LXC containers and KVM virtual machines. Running Docker inside an LXC container on either of these platforms is more of a workaround than a supported feature — and at least on Proxmox, it’s definitely not officially supported. :wink:

That being said, it can be done, and many homelabbers are running Docker this way successfully.

@n-3: If you decide to go down this route, I strongly recommend following a Proxmox-specific guide, because while both LXD and Proxmox rely on the same underlying technology (LXC, short for Linux Containers), they use very different management layers — the way you configure storage, networking, and nesting differs quite a bit.

1 Like

agree 100%, thanks for pointing that out :+1:

1 Like

…and VMs. And at least with Proxmox, the recommended method to run Docker is in a VM :wink:

1 Like

same goes for LXD, the recommended method is to run Docker in a VM… but as you see, it is possible using a BTRFS volume → thus all LXC advantages can be utilised, and the storage can be resized at will using lxc storage resize commands → lxc storage set <volume> size=<new_size> - which is why LXC is more flexible than a VM, aside from all the other LXC advantages.

If you put all Nextcloud data into the Nextcloud VM, that has a raw disk on ZFS, all your data is on a zvol with static 16k blocksize.

If you do the same but change the Nextcloud Data folder to some nfs share, only Nextcloud itself is on a zvol with static 16k blocksize, not your files. You separated application data from user data.
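
A hedged sketch of what that separation could look like inside the Nextcloud VM - server name and paths are assumptions:

# /etc/fstab entry mounting the NAS export inside the VM
truenas.lan:/mnt/tank/nextcloud-data  /mnt/ncdata  nfs  defaults,_netdev  0  0
# then point Nextcloud's 'datadirectory' in config.php at /mnt/ncdata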

I am not deep into LXC, but I think LXC containers are on datasets. So there you can have all your data in the Nextcloud container, and it will sit on a dataset with 128k recordsize.
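
You can verify that on the host - the dataset name is just an example:

zfs get recordsize data_tank/nextcloud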

Maybe I should not have mixed TrueNAS and Proxmox into the discussion, because it creates confusion. The Proxmox vs. TrueNAS discussion is not directly connected with the mantra “separate user data from application data”.

I like how easy TrueNAS offers me stuff like Cloud Replication Tasks as my Nextcloud data backup. So for me, having a Nextcloud VM running on Proxmox, while Nextcloud uses a NFS share on TrueNAS for its files, is the best of both worlds.

Since you want a fire-and-forget solution, AIO is probably the safest bet. Maybe even on a plain ext4 Debian? Others may chime in; I am not too familiar with Docker.