Congratulations on choosing LXD as a means of housing your Nextcloud instances. I think that decision alone helps protect your data from remote attacks: anyone who breaks out of an LXD container still needs an exploit against LXD itself to get root on a real machine. That’s not likely and certainly requires determination.
Depending on how much tin foil you want, there are many measures you can take to further protect your data (via obscurity, at least). I used LXD to run the Nextcloud instance that supported a small consulting business with global remote access. To the best of my knowledge, the installation performed flawlessly and, as far as I could ever tell, without remote penetration.
The one thing I do that’s likely 90% different from most setups is that I also use Cryptomator to end-to-end encrypt my data, and it lets me sleep at night. Consider looking into it. If your users are on Windows machines, they can use a tool called Mountain Duck to seamlessly (and, if they want, transparently) access Cryptomator-encrypted files on a remote e.g. Nextcloud instance. The Linux integration is not as polished, but honestly you don’t need to be a Linux expert to use the simple app they provide for that platform too (I use both Windows, for work, and Linux for personal use). It doesn’t get much better than that for security (I find Nextcloud’s own end-to-end encryption implementation a bit too weird for my liking).
If you want, you could indeed install Apache in one container, MySQL in another, your Nextcloud instance in a third, and your Nextcloud data in yet another. Personally, that was too much tin foil for me, so I run all of those in the same container, but that’s a personal decision. The point is, you can do that (and if you do all that, run haproxy in yet another container to direct all your web services traffic - I do employ that service in one of my LXD containers since I run web servers as well as Nextcloud).
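If you did want to go that route, a sketch of the split might look like the following (container names and image are just examples, and the proxy device forwards the host's web ports into the haproxy container; adjust everything to your own setup):

```shell
# One container per service (names are hypothetical)
lxc launch ubuntu:22.04 web        # Apache
lxc launch ubuntu:22.04 db         # MySQL/MariaDB
lxc launch ubuntu:22.04 nextcloud  # Nextcloud itself
lxc launch ubuntu:22.04 proxy      # haproxy front-end

# Forward ports 80/443 from the host into the haproxy container;
# haproxy then routes to the other containers by hostname.
lxc config device add proxy http  proxy listen=tcp:0.0.0.0:80  connect=tcp:127.0.0.1:80
lxc config device add proxy https proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443
```

These are LXD admin commands that need a running LXD host, so treat them as a sketch rather than a script to paste in.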
The huge advantage of running these services (especially Nextcloud) in a container is the simplicity of backups. You will read many posts on here about how to build redundant server backup capability, and it is not easy to retain all your customization/links/shares if you try to, e.g., back up MySQL directly. Many struggle with this (data backup is of course easy, but user data is one thing; server settings, profiles, accounts and the personalization thereof are time-consuming to recreate). In LXD, I run this command once a night via cron:
lxc copy NC DATA:NC-backup --refresh --target-project backups
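For reference, the cron entry itself might look something like this (the 2AM time matches what I run; NC, DATA and the backups project are my names - DATA: is a remote previously registered with `lxc remote add`):

```shell
# crontab -e (as the user that owns the LXD instance)
# At 02:00 every night, refresh the NC container onto the remote DATA server.
0 2 * * * lxc copy NC DATA:NC-backup --refresh --target-project backups
```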
It refreshes (very fast) an entire LXD instance of Nextcloud to a remote server (at one time, the two servers were halfway around the world from each other, but I don’t need that now). If my live server goes down (update crash, loss of power, theft, fire or asteroid impact), I can spin up an EXACT replica with just minor router-settings changes. And I mean exact: just a different MAC address and IP address. The former I care nothing about; the latter needs a router (or haproxy) setting change. Maybe three minutes of effort (from when I first notice it!). Boom - it all works. All my links, files, shares, apps, settings, all my customization - everything is just as it was when I last refreshed (2AM or so, as I recall :-)). It is just so reassuring. You can likely automate that too, but I don’t need 99.99+% up-time for my needs.
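Spinning up the replica is about as simple as it sounds - a sketch, assuming the same container and project names as my backup command:

```shell
# On the backup server: start the replica in place.
lxc start NC-backup --project backups

# Then repoint DNS / router port-forwarding (or haproxy) at this
# server's IP - that's the "minor router settings change".
```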
Once you try LXD, you likely won’t go back.
A word of caution: use the EXACT SAME names for your LXD storage pools (I go for zfs) on the local and remote servers. If you don’t, the backup copy may fail. I have ssd-pool and hdd-pool as the names for my LXD zfs pools. I learned the hard way (ssd-pool, SSD-Pool, SSD-pool…) that they need to be exactly the same name for my system to work flawlessly, as it now does (touch wood!).
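In other words, run the equivalent of this on BOTH hosts - the pool name (the first argument) must match character-for-character, even though the underlying disks can differ (device paths below are just examples):

```shell
# Identical pool names on local and remote servers; case matters.
lxc storage create ssd-pool zfs source=/dev/nvme0n1   # example device
lxc storage create hdd-pool zfs source=/dev/sda       # example device
```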
I am no expert, but if I can help out feel free to post or message.