Nextcloud with Docker and ZFS

Hello, I just finished a Docker + Nextcloud setup. It all works when testing with dummy data and folders. I am able to create documents and folders… So hurray!

However, I have a few questions…

First, I'll share my setup just in case:

  • PC installed with Ubuntu Server 21.04, headless, SSH
  • Docker
  • 2 mirrored data drives in ZFS (connected to the same mobo as the server)
  • separate OS SSD
  • only local network (for now)

I installed Nextcloud with Docker to avoid PHP/database/other version issues. As I mentioned, everything works fine. I mounted my dummy data folder inside Docker and added it to Nextcloud as external storage.

  1. Is there any harm in binding a ZFS pool directly to Docker? Or what is the suggested way?
    [EDITED QUESTION 1 TO BE MORE ACCURATE]

  2. I am planning/maybe/thinking of adding more clients (like a Calibre container) accessing the same data as Nextcloud. Will I run into data corruption or other issues? (Tips to avoid them?)

[EDIT FOR QUESTION 2: it will probably cause some mayhem…]

Sorry for being a little bit out of Nextcloud scope, but I hope someone here has done something similar…

I don't think you can bind the zpool directly to Docker, but you can bind individual datasets under the zpool. I.e. you create some dataset that you want to use for Nextcloud under your zpool base dataset, give it a mountpoint, and then you make a bind mount in your Docker Nextcloud config to that location.
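Concretely, that could look like the sketch below. The pool name `tank`, the dataset name and the port are made-up placeholders, and the commands are only printed for review rather than run, so adjust before using:

```shell
# Sketch: "tank/nextcloud-data" is a hypothetical dataset name, and the
# commands are echoed for review instead of being executed.
DATASET="tank/nextcloud-data"
MOUNTPOINT="/mnt/${DATASET}"

# Create a dedicated dataset under the pool, with its own mountpoint:
CREATE_CMD="zfs create -o mountpoint=${MOUNTPOINT} ${DATASET}"

# Bind that mountpoint into the Nextcloud container as the data directory:
RUN_CMD="docker run -d --name nextcloud -p 8080:80 -v ${MOUNTPOINT}:/var/www/html/data nextcloud"

echo "${CREATE_CMD}"
echo "${RUN_CMD}"
```

The point is that Docker never sees the pool itself: the bind mount targets the dataset's mountpoint, which to Docker is just a normal directory.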

As to your second question, I am wondering the same myself. If you only allow write access to the dataset for one of your services, I guess it should be OK, but if several need write access to the same dataset there might be trouble. I do not know the best practices to handle this. Maybe someone else here has the answer.

For my understanding: "don't do this" is the answer, as I understood the question.

Both Nextcloud and Calibre are "apps" that store their data in a "special way". That is to say, there are additional folders for trash, keys and metadata in Nextcloud. Calibre stores an internal database somewhere. Both apps are designed to hide this complexity from the user.

And you want to expose it to another app?

If you want to go ahead: in your Calibre docker-compose file just add:

[screenshot of the docker-compose volumes section]

where you replace <path to calibre library> with the host path where you want to store your ebooks.

and just add a line - <path to calibre library>:/data/<username>/ebooks

[screenshot]

These screenshots apply to the linuxserver.io images. If you use other images, that may vary.
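Since the screenshots did not survive here: the relevant part of a linuxserver.io-style compose file would look roughly like below. This is a sketch I have not verified against the current linuxserver/calibre image, and the `<path to calibre library>` / `<username>` placeholders are kept from the description above:

```shell
# Write a sketch of the compose service to a scratch file for comparison
# with your own docker-compose.yml; paths and PUID/PGID are placeholders.
cat > /tmp/calibre-compose-sketch.yml <<'EOF'
services:
  calibre:
    image: lscr.io/linuxserver/calibre
    environment:
      - PUID=1000   # match the owner of the shared data on the host
      - PGID=1000
    volumes:
      - <path to calibre config>:/config
      # the extra line described above, exposing the shared library:
      - <path to calibre library>:/data/<username>/ebooks
EOF
cat /tmp/calibre-compose-sketch.yml
```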

Is that to say you cannot share the Nextcloud data with any other apps, or that you need to take special consideration with what data is shared? If nothing can be shared with other applications, that kind of limits its usage, doesn't it? If I want (or allow other users) to use Nextcloud to remotely upload files, media, etc. to be used by other applications, or to be available for internal/local sharing protocols like Samba, that would not be possible?

I have Joplin and Keepass2Android as two apps using Nextcloud as a data backend, through the WebDAV interface. Of course that's working.

That is possible but - imho - a pain in the …


I'd recommend using the external storage app in Nextcloud to mount the local directories that shall be shared between containers. It should handle access and external changes well enough.
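If you prefer the command line over the web UI, the same mount can be created with occ. A sketch, where the container name `nextcloud` and the paths are assumptions, and the commands are printed rather than executed:

```shell
# Sketch: "nextcloud" is the assumed container name and /mnt/shared the
# path that was bind-mounted into it; commands are echoed for review.
CONTAINER="nextcloud"
INSIDE_PATH="/mnt/shared"
OCC="docker exec -u www-data ${CONTAINER} php occ"

echo "${OCC} app:enable files_external"
# "local" storage backend with no extra auth, rooted at the bind mount:
echo "${OCC} files_external:create /shared local null::null -c datadir=${INSIDE_PATH}"
```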

I did get the datasets to work - well, not fully.

There seems to be some permission issue. I can read old data that already existed inside the pool and datasets. However, when I create a new folder in the Nextcloud web UI, it works fine and gets permissions 750… The problem is creating new files… If I create a file, it gets created in the dataset with permissions 070, and I cannot open, read, write or do anything with it from the Nextcloud UI. From the server terminal I can do anything with them, so the files and folders exist. Owner and group seem to be right.

When I chmod the file to something like 750 or 777, it still does not work from the browser UI.

I have to remove everything from Docker: Nextcloud, database and volume. Then it works, and the same thing repeats: I create a file, it gets weird permissions, and to modify it I have to remove everything from Docker again…

Any thoughts on where this permission 070 would come from?
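For reference, mode 070 means rwx for the group and nothing at all for the owner, which is exactly why the web server user cannot touch a file it owns. A quick way to check the numeric mode (runnable as-is on a scratch file; the `docker exec` line at the end is a sketch with an assumed container name):

```shell
# Reproduce mode 070 on a scratch file and read it back numerically.
TMP=$(mktemp)
chmod 070 "${TMP}"
MODE=$(stat -c '%a' "${TMP}")   # GNU stat; prints "70" for mode 070
echo "mode is ${MODE}"
rm -f "${TMP}"

# On the real data, compare the host view with the container view, e.g.:
echo "docker exec -u www-data nextcloud stat -c '%a %u %g' /var/www/html/data/<somefile>"
```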

Okay, I performed a test mount to another folder on the machine… The folder did get permission 755 and the file 644 when I created them from the Nextcloud web UI… So I guess there is definitely something wrong with the permissions in the ZFS datasets… I think the Docker and Nextcloud configuration works fine…

My Docker NC instance has stored data on a ZFS dataset for 2-3 months and so far I've had no issues. The only problem is that you need to change the Docker storage backend, and you lose all the Docker content you had before.

cat /etc/docker/daemon.json
{
  "storage-driver": "zfs"
}

I don't see permission issues either…

Regarding sharing of the storage directory: don't do this. Nextcloud is designed to control the storage itself. Files have their metadata, like permissions, comments and shares, stored in the NC database, and the application can only keep the DB in sync if changes happen through the application. If changes to files happen from outside of Nextcloud, the admin needs to run the occ files:scan command to detect these changes, which is definitely not the recommended approach to keeping track of regularly changed files. If the other software can use WebDAV as a storage, you definitely can use Nextcloud as the storage back-end; this way Nextcloud can track all changes and the database remains in sync with the files (and you get version control and a recycle bin).
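For completeness, with Nextcloud in Docker that rescan looks like this (a sketch - the container name `nextcloud` is an assumption, and the command is printed rather than executed):

```shell
# Sketch: run occ files:scan as the web server user inside the container.
CONTAINER="nextcloud"
SCAN_CMD="docker exec -u www-data ${CONTAINER} php occ files:scan --all"
echo "${SCAN_CMD}"
```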

cat /etc/docker/daemon.json
{
  "storage-driver": "zfs"
}

Isn't this only required when Docker itself is installed on a ZFS filesystem? Mine is installed on ext4… I'm of the belief that bind mounting doesn't care about the filesystem… Correct me if I'm wrong.

I have made some further observations… My Gitea container accesses the same pool (different dataset) and there are no issues with that; it just works…

Inspired by this, I created a new dataset in the pool and mounted that to the Nextcloud container… No issues, everything works.

I guess I have two options:

  1. Try to figure out what is wrong with the old datasets… (the pool was imported from FreeNAS, and I was quite a noob back when it was created, so I have no doubt I messed something up back then… :grinning_face_with_smiling_eyes:)
  2. Create new datasets, transfer the data over, and destroy the old datasets when done…
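For option 2, ZFS can do the heavy lifting itself via send/receive. A sketch with made-up dataset names (`tank/old-data`, `tank/new-data`); the commands are printed, not executed:

```shell
# Sketch: snapshot the old dataset and replicate it into a new one.
OLD="tank/old-data"; NEW="tank/new-data"
echo "zfs snapshot ${OLD}@migrate"
echo "zfs send ${OLD}@migrate | zfs receive ${NEW}"
# only after verifying the copy (and its permissions):
echo "zfs destroy -r ${OLD}"
```

Note that a plain `zfs send` (without `-p`) does not carry the old dataset's properties over, which in this case is a feature, since the old settings are the suspect.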

And about that multi-app data access:
As already pointed out, you should not try to bypass Nextcloud; it will mess everything up… Nextcloud offers APIs to access data. Docs: Clients and Client APIs - Nextcloud Developer Manual. So in short, everything should go through Nextcloud. In my case this means I have to make API calls from container X to the Nextcloud container. Calibre also has documentation on using the WebDAV API with Nextcloud and other cloud systems.

I personally don't agree with this, since the external storage app is there for exactly this purpose: being able to modify files externally. But okay.


There might be a little confusion here; perhaps my terms are bad.

I already use external storage.

I have a zpool called A that is mounted somewhere in my host system, let's say at folder B. To access B from Nextcloud running inside a Docker container, I have to create a volume that allows me to bind/share this folder B for the container to use. I now have access to B inside the container. B is available inside the container at some folder, let's say C. I have defined that C as external storage in Nextcloud.

However, say I grant access to folder B on my host to other apps and users. Let's say there are 5 different apps (all able to read and write) and many different users. The external storage app does not help with anything here. You have to have cron jobs running just to update Nextcloud about changes in this external storage. If I do this through the Nextcloud API, Nextcloud is the only one able to modify the data files. And how would you define Unix permissions for all these apps and users to be able to read, write and execute that folder B and its content…
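Such a cron job would look something like this - a sketch, with the container name and the scan path being placeholders:

```shell
# Sketch of a crontab entry rescanning the external storage every 15 min;
# "nextcloud" and the path are placeholders. Written to a scratch file.
cat > /tmp/nc-scan.cron <<'EOF'
*/15 * * * * docker exec -u www-data nextcloud php occ files:scan --path="/someuser/files/external"
EOF
cat /tmp/nc-scan.cron
```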

That's why I came to the conclusion that it is better to use the Nextcloud API, WebDAV, shares or whatever options there are, and access the data through Nextcloud.

If I'm terribly stupid and wrong, please correct me; I'm here to learn :+1:.

Maybe there is something incorrectly configured.
You need to make sure that the following setting is set for each external storage mount:
[screenshot of the external storage "Check for changes" setting]
You can make doubly sure that this is the case by changing it for each mount once to "Never" and then switching it back to "Once every direct access".

Okay, I figured out my problem. It was the old ZFS filesystem settings. Those did not work anymore now that I'm on Ubuntu Server and in a different user environment. The pool was transferred from FreeNAS…

I won't put my settings here because they are user- and setup-specific. It depends on how everything is configured… The best advice I have is to google ZFS and try to understand what everything does…
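One generic tip I can give: compare the ZFS properties of a misbehaving dataset with a freshly created one. A sketch with placeholder dataset names (printed, not executed); whether these exact properties are the culprit will vary by setup:

```shell
# Sketch: list ACL/permission-related properties side by side for an old
# (FreeNAS-created) dataset and a fresh one; names are placeholders.
OLD="tank/old-data"; NEW="tank/new-data"
PROPS="acltype,aclinherit,xattr"
echo "zfs get -o name,property,value ${PROPS} ${OLD} ${NEW}"
```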

So I guess this thread is now solved… Here are my two cents on the original questions I asked…

  1. I guess ZFS datasets are okay (bound as volumes). Although some web threads raise issues with Docker on ZFS. I have no experience with the following: Use the ZFS storage driver | Docker Documentation

  2. Data corruption - Wikipedia ← read and decide the path to take… A common cause of corruption is a failed save/write operation, but it can also occur with read operations.

Finally, thanks to everybody for participating :+1:
