Trust self-signed certificate for FTPS

Hi, I’m running Nextcloud on a Docker nginx server. I am currently trying to add external storage coming from my host machine.
Option one was to mount Samba shares, but in external storage I get the message “smbclient” is not installed.
Option two was to mount my shares in Docker as volumes, but no matter how hard I tried, I did not succeed. Maybe folder permission issues, or I don’t know.
Option three: fire up an FTP server on the host machine and add external storage as FTP on Nextcloud. I managed to get the FTPS server going with self-signed certificates, tested it with FileZilla, and it works fine.
On Nextcloud, however, I get: SSL operation failed with code 1. OpenSSL Error messages: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed at
When trying to verify with the openssl command on the host: error 18 at 0 depth lookup: self signed certificate.
It’s not really an error, but how do I get Nextcloud to trust the self-signed cert?

You have to import the cert as a “trusted root CA” into the certificate store of the Docker container. It’s not a security issue and there’s nothing bad about this approach - it’s just a rather odd workaround for a strange solution to your problem… I bet a solution with locally mounted volumes would be much easier and more straightforward…
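For the record, a minimal sketch of what that could look like, assuming the Debian-based official Nextcloud image, a container named nextcloud and a certificate file called ftps-server.crt (both names are placeholders):

# copy the self-signed cert into the container's local CA directory
docker cp ftps-server.crt nextcloud:/usr/local/share/ca-certificates/ftps-server.crt

# rebuild the CA bundle inside the container so OpenSSL/PHP pick it up
docker exec nextcloud update-ca-certificates

Keep in mind this is lost when the container is recreated, so you would have to bake the cert into a custom image or mount it into /usr/local/share/ca-certificates instead.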

What exactly are you trying to do?

That’s the worst option of all three and the performance will be terrible. Apart from that, FTP is an ancient protocol and the SSL tacked on is more of a band-aid than a real solution that anyone should be using. Just my opinion on the subject. Instead of FTP/FTPS one should use SFTP nowadays. SFTP stands for SSH File Transfer Protocol; it runs over SSH connections and is therefore secure by default, and you don’t have to fiddle around with self-signed certificates or any certificates at all. But all this is just a side note. :wink:

In your case Docker volumes are the way to go! I am sure @Reiner_Nippes will be able to help you with this, if you explain in a little more detail what you have already tried in this regard… :slight_smile:

I also think FTPS is bad and SFTP is better. But I also think SFTP is not the best solution. I tested SFTP (only on localhost) a few months ago and it was terrible on my system.

As described above, you should somehow be able to include the directory within the Nextcloud Docker container and then use Local (External Storage). You must set the correct rights for the Nextcloud user (usually www-data:www-data) to access the files.
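For illustration, a minimal docker-compose sketch of what that could look like - the service name, image tag and host path are placeholders, not taken from your setup:

# docker-compose.yml (sketch)
services:
  nextcloud:
    image: nextcloud:22.1.0
    volumes:
      - /path/on/host/todos:/mnt/todos:rw   # folder to add later as "Local" external storage

Afterwards the folder has to be owned by the uid/gid that www-data has inside the container (see the chown example further down in the thread).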

Thank you for all your answers. I was considering FTPS as an alternative to Local Storage, although I wasn’t aware of the low performance and low security of this alternative…
About Local Storage though,
I run Nextcloud alongside mailcow; they share the same nginx Docker container.

This is the line in the docker-compose.yml:

- /mnt/md0/todos:/mnt/todos/:rw

The permissions of the folder on the host machine look like this:

drwxr-x---+ 4 www-data www-data  4096 Oct 13 20:51 todos/

Inside the nginx container though, the permissions look like this:

drwxr-x---    4 xfs      xfs           4096 Oct 13 20:51 todos

I feel like this might be what’s causing the issues, but… how do I fix it? (If this is the problem; and if not, what else could it be?)

You are on the right track… check the permissions inside of the Nextcloud container - once you see the folder inside of the NC container and the rights are 750 www-data:www-data, it must be possible to add the folder as “local” external storage as mentioned by @devnull. Maybe start with a new folder first - without your data - and play a little until it works. Once you figure out the right settings, it’s often easy to adapt for production data…

Side note: it sounds like you map the same folder into different containers… in general you should not do so; separate the volumes used by different containers (e.g. to avoid permission clashes, as different services might expect different user rights). In particular, you should not share your user data with the nginx container, which is the entry point and therefore the most exposed service - for this reason it should have as little access as possible…
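Very roughly, the separation could look like this (service names, image tags and paths are just examples, not your actual setup):

# sketch: the personal-data bind mount goes only into the Nextcloud app container
services:
  nextcloud-app:
    image: nextcloud:22.1.0-fpm
    volumes:
      - /path/on/host/todos:/mnt/todos:rw

  nginx-proxy:
    image: nginx:alpine
    # no bind mount of your personal data here - it only proxies requests to nextcloud-app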

You have to get the uid of the user www-data inside your container, and then use the numerical id for the chown command on your host. And don’t worry that on your host the folders then have “fancy” owners.

Run docker exec -u www-data nextcloud id to get the value of uid/gid inside the container, assuming that “nextcloud” is the name of your Nextcloud container. Or use the name of the nginx container.
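Put together, a small sketch - the 33:33 output is just an example for the Debian-based image, use whatever numbers your container actually reports:

# get the numeric uid/gid of www-data inside the container
docker exec -u www-data nextcloud id
# example output: uid=33(www-data) gid=33(www-data) groups=33(www-data)

# then apply those numbers to the folder on the host
sudo chown -R 33:33 /mnt/md0/todos
sudo chmod -R 750 /mnt/md0/todos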

Interestingly, when running docker exec -u www-data nextcloud id, I’m told there’s no user www-data.

$ docker exec -u www-data mailcowdockerized_nginx-mailcow_1 sh
unable to find user www-data: no matching entries in passwd file

However, the files inside the Nextcloud data folder inside the nginx container are owned by 82:www-data.

drwxr-xr-x    4 82       www-data      4096 Oct 14 11:59 admin

I tried to test it like @wwe said, with an empty folder.
Inside the nginx container, the mounted folder displays these permissions:

/mnt # ls -l
total 4
drwxr-x---    2 82       www-data      4096 Oct 21 20:09 test

In Nextcloud, I can mount /mnt, but not /mnt/test.

PS: I had to follow these instructions for the permissions to be right
Docker volume mount and permissions: www-data on host (33) becomes xfs (33) in Alpine Linux

I’m using the official nextcloud:22.1.0 image and I see that my data mounted at /var/www/html/data inside of the container is owned by www-data:www-data, which is numeric 33:33.

In general, don’t worry if the name of the owner:group on your host system doesn’t match (or even doesn’t exist) - the only important association is inside of the container. This “issue” exists with Docker because Linux filesystems only care about numerical UIDs/GIDs, which might map to different names when two systems access the data in parallel (which is what the article you referenced describes).
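If you want to see this for yourself, compare the numeric owners - a quick check, assuming the folder is also mounted into your Nextcloud container at /mnt/todos and the container is named nextcloud:

ls -ln /mnt/md0/todos                      # on the host - shows numeric uid/gid
docker exec nextcloud ls -ln /mnt/todos    # inside the Nextcloud container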

I don’t really get why you check the access rights from the mailcow nginx container.

You must be using the Nextcloud container (I myself use the Apache version, but even with the nginx flavor there is a dedicated NC container).
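So a quick check could look like this, assuming your Nextcloud container is simply named nextcloud and the test folder is mounted at /mnt/test there as well:

docker exec nextcloud ls -ln /mnt/test
docker exec -u www-data nextcloud ls /mnt/test   # should work once the numeric uid/gid match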