Used Docker, but which storage is it using, and can I get a drive letter?

Hello all, I have used CentOS 7.6 (1810) to run the Docker image of Nextcloud as a test run.
I have a few questions about it:

Everything went smoothly with the defaults. On the server:
docker pull nextcloud
docker run -d -p 8080:80 nextcloud

I downloaded and ran the Android client, uploaded a few files, and could see them syncing immediately on the Windows client.

Question 1. When Nextcloud runs as a Docker image, where does it store the files on the server? In the image itself or somewhere else? From the documentation, I see it uses SQLite by default:
https://hub.docker.com/_/nextcloud/
…
…

Using an external database

By default this container uses SQLite for data storage, but the Nextcloud setup wizard (appears on first run) allows connecting to an existing MySQL/MariaDB or PostgreSQL database. You can also link a database container, e.g. --link my-mysql:mysql, and then use mysql as the database host on setup. More info is in the docker-compose section…
…
…

Now I have no clue about SQLite. I launched an SQLite instance on my host server and ran .tables (the only SQLite command I know at the time of this writing), which turns up nothing:
[root@template-centos76-1810-dockerized nextcloud]# sqlite3
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
sqlite>

Knowing this will be necessary as my storage starts to grow and I need to add additional storage.

Question 2. I see the Windows client adds a Nextcloud pane in Explorer, which is convenient, but is there a way to assign a drive letter to it?

Question 3. Last one: I have opened port 8080 on my home network router so that I can log in from an external, public network, but when I do so it says "access through untrusted domain. Please see documentation for further info." How do I fix this when I am running in Docker? Uploading and logging in from the local subnet work perfectly.

Thanks.

A small Docker 101: you pulled the image from Docker Hub with docker pull nextcloud.
With docker run you used that image to start a container.
If you didn't specify any volumes (volumes: in docker-compose or -v on the command line), every change to files in that container stays in that container. This is bad, because if you ever want to update Nextcloud you have to remove the container, pull a new image, and start a new container, which starts out empty: your data is gone.
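To see the distinction on your own system (a quick sketch; the name shown by docker ps will be a randomly generated one, since you didn't pass --name):

docker images      # the images you pulled, e.g. nextcloud
docker ps          # containers currently running from those images
docker ps -a       # also shows stopped containers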

If you want to keep any of your data and config, you have to read the chapter about persistent data on the same Docker Hub page you linked.
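A minimal sketch of what that looks like, assuming a named volume called nextcloud_data (the volume and container names are just examples; /var/www/html is where the image keeps its data and config):

docker volume create nextcloud_data
docker run -d -p 8080:80 --name nextcloud -v nextcloud_data:/var/www/html nextcloud

With that in place you can remove and recreate the container (e.g. for an update) and the volume keeps your files and database.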

Any process running inside a container shows up in the host's ps -ef, but it is not using the host's filesystem. So if you run sqlite3 directly on your host's CLI, that is another SQLite instance looking at the host's (empty) files. (See also below.)
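If you want to see where the data actually lives, look inside the container instead of on the host. A sketch, assuming the default data directory (with SQLite, the database file sits in there too):

docker exec -it <container-name> ls -lh /var/www/html/data

Replace <container-name> with the name or ID from docker ps.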

Don't do this (exposing port 8080 directly to the internet). It's insecure, since the connection is not encrypted; it's only OK for testing.

With your current setup (without volumes) you would have to log in to the container and edit /var/www/html/config/config.php; there is a variable for trusted domains.
docker exec -it <container-name> /bin/sh should log you into your container,
or
docker exec -it <container-name> vi /var/www/html/config/config.php
(use docker ps to find the container name or ID, since you started the container without --name).
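Alternatively, instead of editing the file by hand, the occ tool can set the value. A sketch, assuming your.public.example.com stands in for your real public hostname and that index 0 of trusted_domains already holds your local address:

docker exec -u www-data <container-name> php occ config:system:set trusted_domains 1 --value=your.public.example.com

Afterwards config.php will contain the new entry in the trusted_domains array.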

But I would strongly advise setting up a container with volumes first. :wink:

P.S.: you'll find /var/www/html/config/config.php only inside the container, after you've logged in. It's the same with the SQLite files. So vi /var/www/html/config/config.php on the host's CLI starts editing a new, empty file, whereas the same command inside the container opens the real config.