Nextcloud Docker AIO Initial Setup Fails with NAS Mount - Working Solution

Issue Description

When trying to set up Nextcloud AIO with an external NAS mount (/mnt/nas/nextcloud) as the data directory, the initial installation fails repeatedly with:

flo@homeserver:~/docker-projects/nextcloud$ docker logs nextcloud-aio-nextcloud
Connection to nextcloud-aio-database (172.21.0.3) 5432 port [tcp/postgresql] succeeded!
+ '[' -f /dev-dri-group-was-added ']'
++ find /dev -maxdepth 1 -mindepth 1 -name dri
+ '[' -n '' ']'
+ set +x
Enabling Imagick...
WARNING: opening from cache https://dl-cdn.alpinelinux.org/alpine/v3.20/main: No such file or directory
WARNING: opening from cache https://dl-cdn.alpinelinux.org/alpine/v3.20/community: No such file or directory
Connection to nextcloud-aio-redis (172.21.0.4) 6379 port [tcp/redis] succeeded!
The initial Nextcloud installation failed.
Please reset AIO properly and try again. For further clues what went wrong, check the logs above.
See https://github.com/nextcloud/all-in-one#how-to-properly-reset-the-instance

The container goes into a restart loop when trying to use:

environment:
  - NEXTCLOUD_DATADIR=/mnt/ncdata
  - NEXTCLOUD_MOUNT=/mnt
volumes:
  - /mnt/nas/nextcloud:/mnt/ncdata

even after adjusting ownership and permissions on the NAS directory:

sudo chown -R www-data:www-data /mnt/nas/nextcloud
sudo chmod -R 750 /mnt/nas/nextcloud
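
For what it's worth, NFS exports with root_squash or all_squash can silently undo that chown, so here is a quick check (a diagnostic sketch only; I'm not certain it applies here) of whether UID 33, which www-data maps to inside the AIO containers per the AIO readme, can actually write to the mount:

# Test write access as UID 33; if this fails, the NFS export options
# (root_squash/all_squash, anonuid/anongid) need adjusting on the NAS side
sudo -u '#33' touch /mnt/nas/nextcloud/.write-test && echo writable
sudo -u '#33' rm -f /mnt/nas/nextcloud/.write-test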

Working Solution

The installation works when using the default configuration without external mounts:

services:
  nextcloud-aio:
    image: nextcloud/all-in-one:latest
    container_name: nextcloud-aio-mastercontainer
    restart: always
    ports:
      - "127.0.0.1:8080:8080"
    environment:
      - APACHE_PORT=11000
      - APACHE_IP_BINDING=0.0.0.0
      #- NEXTCLOUD_DATADIR=/mnt/ncdata
      #- NEXTCLOUD_MOUNT=/mnt
      - PHP_MEMORY_LIMIT=4G
      - PHP_UPLOAD_LIMIT=16G
      - SKIP_DOMAIN_VALIDATION=true
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
      #- /mnt/nas/nextcloud:/mnt/ncdata
    networks:
      - proxy-network

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer

networks:
  proxy-network:
    external: true
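
With that compose file, bringing the stack up works as expected (standard Docker Compose usage, shown only for completeness):

docker compose up -d
# The AIO setup interface is then reachable over self-signed HTTPS
# on the bound port: https://127.0.0.1:8080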

Steps Taken for Clean Reset

  1. Stopped all containers:
docker stop $(docker ps -a | grep 'nextcloud-aio' | awk '{print $1}')
  2. Removed all Nextcloud containers:
docker rm $(docker ps -a | grep 'nextcloud-aio' | awk '{print $1}')
  3. Removed all Nextcloud volumes:
docker volume rm $(docker volume ls -q | grep nextcloud)
  4. Removed the network:
docker network rm nextcloud-aio
  5. Cleaned up NAS directory:
sudo rm -rf /mnt/nas/nextcloud
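
For reference, the same reset can be written with Docker's built-in name filters instead of grep/awk (equivalent result, less quoting to get wrong):

docker stop $(docker ps -aq --filter name=nextcloud-aio)
docker rm $(docker ps -aq --filter name=nextcloud-aio)
docker volume rm $(docker volume ls -q --filter name=nextcloud)
docker network rm nextcloud-aio
sudo rm -rf /mnt/nas/nextcloud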

Question

Is there a recommended way to set up Nextcloud AIO with an external NAS mount? The documentation suggests it should be possible, but the initial setup fails when attempting to use NEXTCLOUD_DATADIR and NEXTCLOUD_MOUNT.

System Details

  • Docker version 27.3.1, build ce12230
  • OS: Ubuntu 24.04
  • Nextcloud AIO version: latest
  • Mount type: NFS mount at /mnt/nas/nextcloud
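
Since the NFS export options are probably relevant here, this is how the effective mount options can be inspected on the host (findmnt ships with util-linux on Ubuntu):

findmnt -T /mnt/nas/nextcloud
# or just the options column:
findmnt -no OPTIONS -T /mnt/nas/nextcloud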

Any help or guidance would be greatly appreciated!


I don’t know if it will help or not, but I think there’s some confusion around the data folder and how this AIO instance is using it. I had a folder structure from a previous OwnCloud install that looked something like:

/mnt/tank/cloud

Under that folder were all of the user folders, e.g.,

/mnt/tank/cloud/user1
/mnt/tank/cloud/user2

In my mind, the configuration would look like this:

NEXTCLOUD_DATADIR=/mnt/tank/cloud
NEXTCLOUD_MOUNT=/mnt/tank

However, I could not get that to work at all, since Nextcloud for some reason wanted write access to /mnt/tank. My assumption was that if I gave it a data folder, it would only write to that folder and its subfolders, not the parent. So you might check the permissions on /mnt, or try a layout that is one folder deeper, like:

NEXTCLOUD_DATADIR=/mnt/ncdata/data
NEXTCLOUD_MOUNT=/mnt/ncdata

and see if that works. That solved my problem initially, but on the upgrade last night everything is broken again so I’m back to the drawing board.
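
For concreteness, a minimal sketch of how that deeper layout would slot into the mastercontainer's environment block from the compose file above (the paths are illustrative, not tested on your setup):

environment:
  # Data directory one level below the mount root, so Nextcloud never
  # needs write access to the parent of the mount itself
  - NEXTCLOUD_DATADIR=/mnt/ncdata/data
  - NEXTCLOUD_MOUNT=/mnt/ncdata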
