Nextcloud version: 15.0 (via docker `nextcloud:latest`, which should be 15.0.0-apache at the time of this writing)
Operating system and version: Ubuntu 18.04
Scenario:
Files (photos, documents) are on my server; changes may happen through Nextcloud or another mechanism (rsync from other computers).
What's the preferred way to include my personal data files in the Docker container?
- mount my data into the Docker container and then add it via the "External storage" app as "local storage", or
- directly bind-mount my data into the data/{myuser}/files/ folder created by Nextcloud
Will there be differences between these two, apart from having an extra "folder" in the UI when using the "External storage" app?
Will it, for example, make a difference if I run `sudo -u www-data php occ files:scan --all`?
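For what it's worth, the two options differ mainly in where the bind mount points. A minimal docker-compose sketch of both (host paths and the user name `myuser` are placeholders, not from a real setup):

```yaml
# Sketch only: service name and host paths are hypothetical.
version: '3'
services:
  nextcloud:
    image: nextcloud:latest
    volumes:
      # Option 1: mount somewhere neutral inside the container, then
      # point the "External storage" app (local storage) at /mnt/mydata
      - /srv/mydata:/mnt/mydata
      # Option 2: bind-mount straight into the user's files folder;
      # changes made outside Nextcloud are invisible until files:scan runs
      - /srv/mydata:/var/www/html/data/myuser/files/mydata
```

With option 2, anything written to the host path bypasses Nextcloud's file cache, so a `files:scan` is needed before the files show up; with option 1, the External storage app can be set to check the mount for external changes.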
I know it's been a while since you wrote your post on the Nextcloud forum, but I have the same question about my own Nextcloud-on-Docker setup.
Which way did you finally go? I happen to have some issues with external_storage and Docker volumes.
I share the same question, OP.
External Storage pointing to a mount point outside of /…/nc/www/data//files seems the neater way…?
NOTE: FreeNAS 11.3 with Nextcloud 17 in a Jail/VM
e.g. I have /mnt/nextcloud in the jail, mounted from the FreeNAS main pool.
Currently I'm mounting directly to the main user/admin account inside the data directory and sharing this with other users via the NC GUI.
The questions I have are:
Which is more efficient from a file access/storage point of view (CPU/RAM)?
Does the `occ files:scan --all` function cycle through each user as if the external storage were part of their own files?
My situation is ~1.6TB of data that needs syncing (file sizes ranging from <1MB to 500MB). The files:scan has not finished yet (4 hours in), and if it has to run once per user that would not be good.
It seems the external storage option is neater and easier to manage, but if the overhead is too high for larger storage pools then a direct mount looks like the more efficient option.
occ files:scan --all will indeed go through all users one at a time.
Make sure you use good caching, like Redis. This is often enabled for php-fpm, but not for php-cli (the command-line client).
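For reference, a hedged sketch of what that caching setup looks like in config/config.php (host and port are assumptions; adjust to your install). The same config is read by php-cli, but the redis and APCu PHP extensions must also be enabled for the CLI SAPI (for APCu that typically means `apc.enable_cli=1`):

```php
// config/config.php fragment (sketch; values are assumptions)
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'memcache.distributed' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'localhost', // or a unix socket path
  'port' => 6379,
],
```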
Where is your SQL server stored? What filesystem do you use for it? occ files:scan causes a lot of SQL queries, so you may need to optimise the SQL server.
Thank you for the reply.
So if I have 3 users and they share an "external storage" directory, will occ files:scan --all scan each user's "external storage" separately, i.e. 3 times, once per user? That would kill that option.
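If a full `--all` scan is too slow, occ can also scan a single user or a single path, which may avoid repeatedly rescanning a shared mount; a sketch (the user name and subdirectory are placeholders):

```shell
# Scan only one user's files
sudo -u www-data php occ files:scan myuser

# Scan only a specific path (format: /<user>/files/<subdir>)
sudo -u www-data php occ files:scan --path="/myuser/files/mydata"
```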
Also, just ran some more tests: the Windows desktop client cannot sync the external storage dir:
"There are folders that were not synchronized because they are too big or external storages: Local - Nextcloud"
NOTE: This directory is 120MB and has a 100MB file in it, plus others… It shouldn't be a size issue?
NOTE: I mounted a directory /mnt/nextcloud instead of /usr/local/www/nextcloud/data/[user]/files and used that [/mnt/nextcloud] as the external storage (local option)
To answer your question: MySQL runs on the same Jail/VM. Not sure about caching, as I'm using the standard install from Plugins (FreeNAS). First time with Nextcloud.
Of course, if you run Redis in a jail, the socket file must be accessible from the php and php-fpm instances too. Otherwise you need to use an IP address instead of sockets.
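For the socket case, the Nextcloud config would look roughly like this (the socket path is an assumption; it must exist and be readable from inside the jail running php/php-fpm):

```php
// config/config.php fragment (sketch; socket path is hypothetical)
'redis' => [
  'host' => '/var/run/redis/redis.sock', // unix socket path
  'port' => 0,                           // port 0 tells Nextcloud to use the socket
],
```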