Access files from another docker container
Apologies if this has already been asked/answered, I’ve been searching for a few hours with no fix in sight.
The Issue: Unable to access files stored in another docker container from Nextcloud
The Setup: The same path on the docker host is shared for both containers, however the files are owned by root on the host.
Containers:
- Nextcloud, with access to the folder where the other containers' files are held
- Agent Zero
- Techdox
Techdox can see the files (read-only), so I know the files are visible from a container, but Nextcloud doesn't see them at all.
The Basics
- Nextcloud Server version (e.g., 29.x.x):
- Operating system and version (e.g., Ubuntu 24.04):
- Operating system: Linux 6.8.0-107-generic #107-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 13 19:51:50 UTC 2026 x86_64
- Web server and version (e.g., Apache 2.4.25):
- Apache/2.4.66 (Debian) (apache2handler)
- PHP version (e.g., 8.3):
- Is this the first time you’ve seen this error? (Yes / No):
- When did this problem seem to first start?
- Installation method (e.g., AIO, NCP, Bare Metal/Archive, etc.):
If Nextcloud did not add the files to the data directory itself, they will not be shown. NC maintains a database of file entries, and changes made to the data folder outside of NC are not recognized. There is an occ command to scan the files in a data folder; it needs to be run whenever the data folder is changed externally of NC.
I had found an old post stating the same, but the command didn't work, and running occ didn't list a scan command. Do you know what it would be for the latest version?
Yes, the command is still there in current Nextcloud versions. The relevant one is:
php occ files:scan --all
or for a specific user:
php occ files:scan <user_id>
and if you only want to rescan one folder/path:
php occ files:scan --path="/<user_id>/files/<folder>"
The current Nextcloud occ documentation still lists files:scan with --all, user-specific scans, and the --path option. (Nextcloud)
If you are running Nextcloud in Docker Compose, the usual pattern is to run it inside the Nextcloud container, for example something like:
docker compose exec -u www-data app php occ files:scan --all
or whatever your Nextcloud app container is named.
One important detail: this only works if the files are actually visible inside the Nextcloud container at the mounted path and readable by the web server user. If the files are only present in another container, or the bind mount / permissions are not correct, files:scan will not magically find them. Nextcloud only indexes what its own container can access. That matches the issue described in the original post, where the files are on a shared host path but not appearing in Nextcloud. (Nextcloud community)
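A quick way to test that visibility from inside the container; a sketch, where the service name `app` and the path are assumptions to substitute with your own:

```shell
# Run *inside* the Nextcloud container, e.g. via:
#   docker compose exec -u www-data app sh
# Reports whether a directory is both readable and traversable, which is
# what files:scan needs. The path at the bottom is an example mount point.
check_readable() {
  if [ -d "$1" ] && [ -r "$1" ] && [ -x "$1" ]; then
    echo "OK: $1"
  else
    echo "FAIL: $1 (missing, or not readable by $(id -un))"
  fi
}

check_readable /var/www/html/data
```

If this prints FAIL for the shared path, fixing the mount or the permissions has to come before any rescan.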
I also put together a community Docker-based Raspberry Pi / Nextcloud stack here in case it helps as a reference for structuring the services more cleanly:
https://github.com/iamjavadali/nextcloudpi
It separates the services into their own directories and Compose-based setup, which may make it easier to troubleshoot mounts and service boundaries if you are comparing setups. It is a community project, not an official Nextcloud / NextcloudPi release. (Nextcloud)
Ah, it's a php command; I was running occ directly from /usr/src/nextcloud/occ.
So this ran, but it did throw an error:
Error during scan: opendir(/home/DockerFiles/**redacted**/data/default): Failed to open directory: Permission denied
But I can see other containers' files are now showing.
Is there a cron that runs this already or does it need setting up?
Good sign. That means the scan command itself is working and the remaining problem is mostly permissions on that path.
The Permission denied message means Nextcloud can see that location exists, but the user running the Nextcloud process does not have permission to open part of that directory tree. In Docker setups that usually means the mounted host path permissions or ownership do not line up with the web server user inside the container. The files:scan command is still the correct tool for this part. (docs.nextcloud.com, help.nextcloud.com)
On the cron question: Nextcloud does have background jobs, and the recommended production mode is Cron, but that is mainly for Nextcloud's regular background tasks. It is not the same thing as automatically doing a full files:scan --all of arbitrary local files you add outside of Nextcloud. For external storage, the docs say Cron/Webcron helps detect changes, and they also note you may need to run occ files:scan periodically yourself if you want regular rescans. (docs.nextcloud.com)
So I’d treat it like this:
- if these files are being added outside Nextcloud and you want them indexed regularly, set up your own host cron job to run docker compose exec -u www-data <nextcloud-service-name> php occ files:scan ...
- if this is supposed to behave more like external storage sync/detection, make sure Nextcloud background jobs are set to Cron in admin settings
- fix the permissions on /home/DockerFiles/.../data/default first, otherwise scans will continue to skip or fail on that part of the tree
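For the host cron job route, a sketch of a crontab entry; the compose directory /home/user/nextcloud, the service name app, the log path, and the schedule are all assumptions:

```
# Host crontab (edit with `crontab -e`): nightly scan at 03:15.
# -T is needed because cron has no TTY for `docker compose exec`.
15 3 * * * cd /home/user/nextcloud && docker compose exec -T -u www-data app php occ files:scan --all >> /var/log/nextcloud-scan.log 2>&1
```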
For example, many people narrow the scan instead of using --all, such as scanning just the affected user or path, which is less heavy on the system:
docker compose exec -u www-data app php occ files:scan <user_id>
or
docker compose exec -u www-data app php occ files:scan --path="/<user_id>/files/<folder>"
If the files are on a bind mount from the host, I would next check:
- ownership on the host path
- directory execute/read permissions
- which UID/GID the Nextcloud container is effectively running as
- whether every parent directory in that path is traversable by that user
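For that last point, `namei -l <path>` on the host prints the whole chain at once. Where namei is not installed, a rough stand-in (assumes GNU stat, standard on Linux hosts; the path is a placeholder):

```shell
# Print mode, owner and group for a path and every parent directory, so the
# component that blocks traversal stands out. Rough equivalent of `namei -l`.
walk_perms() {
  p="$1"
  while [ -n "$p" ] && [ "$p" != "/" ]; do
    stat -c '%A %u:%g %n' "$p" 2>/dev/null || echo "missing: $p"
    p=$(dirname "$p")
  done
  stat -c '%A %u:%g %n' /
}

walk_perms /path/to/shared/data   # substitute your real host path
```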
That permission error is the main blocker now.
The core issue is UID mismatch between the two containers. Nextcloud’s Docker image runs as www-data (UID 33 typically), but the files from the other container are owned by root (UID 0). Even though both containers mount the same host path, the Linux kernel enforces permissions based on the numeric UID, not the username.
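That numeric rule is easy to see on any Linux box:

```shell
# Each inode stores only a numeric owner UID; the username `ls -l` shows is
# just an /etc/passwd lookup done at display time. Permission checks compare
# the numbers, so "www-data" in one container and "root" in another only
# matter as UID 33 vs UID 0.
f=$(mktemp)
echo "numeric owner: $(stat -c %u "$f")"
echo "display name:  $(stat -c %U "$f")"
rm -f "$f"
```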
Three approaches depending on your setup:
Option 1 — Match UIDs across containers: Configure the other container to write files as UID 33 (or whatever UID Nextcloud uses). Check what Nextcloud expects:
docker exec nextcloud id www-data
Then configure the other container to run as that same UID, or add a group that both UIDs share.
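In Compose that is one line on the writing service; a sketch, where the service name, image, and host path are placeholders, and this only works if the image tolerates running as a non-root user:

```yaml
services:
  techdox:
    image: techdox/example:latest   # placeholder image
    user: "33:33"                   # write files as Nextcloud's www-data UID
    volumes:
      - /path/to/shared:/data
```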
Option 2 — Use a shared group: On the host, create a shared group and add both container UIDs to it. Then set the shared directory to group-readable:
groupadd -g 1000 sharedfiles
chown -R :sharedfiles /path/to/shared
chmod -R g+rX /path/to/shared
Mount with the same GID in both containers using Docker’s --group-add flag.
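The capital X in that chmod is deliberate: it adds execute to directories (so they stay traversable) and to files that are already executable, but leaves plain data files non-executable. A scratch-directory check of that behaviour:

```shell
# Demonstrate `g+rX` semantics: after the recursive chmod, the directory
# gains group read+traverse while the plain file gains group read only.
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/file"
chmod -R g-rwx "$d"   # start from zero group access
chmod -R g+rX "$d"    # the Option 2 command applied to the shared tree
stat -c '%A %n' "$d/sub" "$d/file"
rm -rf "$d"
```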
Option 3 — Bind mount with Nextcloud external storage: Instead of trying to make Nextcloud's internal scanner index the files as part of its data directory, mount the shared host path at a separate location inside the container and add it via the External Storage app (storage type "Local"). External storage is checked for changes when it is accessed, so files added by the other container appear without manual rescans; the web server user still needs read access to the path, though.
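A sketch of that route with occ, assuming the shared host path is bind-mounted at /mnt/shared inside the Nextcloud container and the service is named app (mount name and paths are placeholders):

```shell
# Enable the app once, then register the mounted path as a Local external
# storage named /SharedFiles. Run on the docker host.
docker compose exec -u www-data app php occ app:enable files_external
docker compose exec -u www-data app php occ files_external:create /SharedFiles local null::null -c datadir=/mnt/shared
docker compose exec -u www-data app php occ files_external:list   # verify the mount
```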
After fixing permissions, trigger a rescan:
docker exec -u www-data nextcloud php occ files:scan --all