External storage app vs. bind mount

Nextcloud version: 15.0 (via Docker “nextcloud:latest”, which should be 15.0.0-apache at the time of writing)
Operating system and version: Ubuntu 18.04

Scenario:

  • files (photos, documents) are on my server; changes may happen through Nextcloud or another mechanism (rsync from other computers).

What’s the preferred way to include my personal data files in the Docker container?

  • mount my data into the Docker container and then add it via the External storage app as “local storage”, or
  • directly bind mount my data into the data/{myuser}/files/ folder created by Nextcloud
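For concreteness, the two options might look roughly like this (volume names and host paths are just examples, not a recommendation):

```shell
# Option 1: mount the data to a neutral path inside the container...
docker run -d --name nextcloud \
  -v nextcloud_html:/var/www/html \
  -v /srv/mydata:/mnt/mydata \
  nextcloud:latest

# ...then register it as "Local" external storage (the files_external
# app must be enabled; the mount point name "/mydata" is arbitrary):
docker exec -u www-data nextcloud php occ files_external:create \
  "/mydata" local null::null -c datadir="/mnt/mydata"

# Option 2: bind mount straight into the user's files folder:
docker run -d --name nextcloud \
  -v nextcloud_html:/var/www/html \
  -v /srv/mydata:/var/www/html/data/myuser/files/mydata \
  nextcloud:latest

# With option 2, changes made outside Nextcloud (e.g. rsync) only show
# up after a rescan:
docker exec -u www-data nextcloud php occ files:scan myuser
```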

Will there be any differences between these two, apart from having an extra “folder” in the UI when using the External storage app?

Will it, for example, make a difference if I run “sudo -u www-data php occ files:scan --all”?

Thanks!
Cyber1000

Hello Cyber1000,

I know it’s been a while since you wrote your post on the Nextcloud forum. I am facing the same question with regard to my own Nextcloud-on-Docker setup.

Which way did you finally go? I happen to have some issues with the External storage app and Docker volumes.

Many thanks,
Michel

Is Docker really necessary on a home server? After all, Nextcloud is only a PHP application in a folder.

I share the same question as the OP.
External storage pointing to the mount point outside of /…/nc/www/data//files seems like the neater way…?

NOTE: FreeNAS 11.3 with Nextcloud 17 in a Jail/VM

e.g. I have /mnt/nextcloud in the jail, mounted from the FreeNAS main pool.
Currently I’m mounting directly into the main user/admin account inside the data directory and sharing it with other users via the NC GUI.
The questions I have are:

  1. Which is more efficient from a file access/storage point of view (CPU/RAM)?
  2. Does occ files:scan --all cycle through each user as if the external storage were part of their system?

My situation is ~1.6 TB of data to sync (file sizes ranging from <1 MB to 500 MB). The files:scan run has not finished yet (4 hours), and if it has to run once per user, that would not be good.

It seems the external storage option is neater and easier to manage, but if the overhead is too high for larger storage pools, then a direct mount looks like the more efficient option.

Any comments, thoughts on this are appreciated.

occ files:scan --all will indeed go through all users, one at a time.
Make sure you use good caching, such as Redis. Caching is often enabled for php-fpm but not for php-cli (the command-line version of PHP).
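A quick way to check whether Redis is actually visible to php-cli (run these inside the container/jail; paths may differ on your setup):

```shell
# Is the phpredis extension loaded for the CLI SAPI?
php -m | grep -i redis

# Which memcache backends does Nextcloud think it is using?
sudo -u www-data php occ config:system:get memcache.local
sudo -u www-data php occ config:system:get memcache.locking
```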

Where is your SQL server stored? What filesystem do you use for it? occ files:scan causes a lot of SQL queries, so you may need to optimise the SQL server.
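To see where MySQL/MariaDB keeps its data and how much memory the InnoDB buffer pool gets (a common tuning knob for scan-heavy workloads), something like:

```shell
mysql -e "SELECT @@datadir, @@innodb_buffer_pool_size;"

# A larger buffer pool often helps; set it in my.cnf, for example
# (the value below is only an example, size it to your RAM):
#   [mysqld]
#   innodb_buffer_pool_size = 1G
```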

Also, check whether OPcache is enabled: https://graspingtech.com/speed-up-wordpress-by-enabling-zend-opcache/
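You can inspect OPcache from the command line; the ini values below are common suggestions, not authoritative:

```shell
# Show OPcache settings/status as the CLI SAPI sees them:
php --ri "Zend OPcache"

# Typical php.ini settings (example values):
#   opcache.enable=1
#   opcache.enable_cli=1
#   opcache.memory_consumption=128
#   opcache.interned_strings_buffer=16
#   opcache.max_accelerated_files=10000
```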

Thank you for the reply.
So if I have 3 users and they share an ‘external storage’ directory, will occ files:scan --all scan each user’s ‘external storage’ separately, i.e. 3 times, once for each user? That would kill that option.
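If a full per-user rescan is the worry, occ can also scan a narrower scope; the user name and mount path below are hypothetical:

```shell
# Scan a single user's files only:
sudo -u www-data php occ files:scan admin

# Or just one path under that user (the mount as it appears in Nextcloud):
sudo -u www-data php occ files:scan --path="/admin/files/Nextcloud"
```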
Also, while running some more tests: the Windows desktop client cannot sync the external storage directory…?
“There are folders that were not synchronized because they are too big or external storages: Local - Nextcloud”

NOTE: This directory is 120 MB and has a 100 MB file in it, plus others… It shouldn’t be a size issue, should it?
NOTE: I mounted a directory /mnt/nextcloud instead of /usr/local/www/nextcloud/data/[user]/files and used that [/mnt/nextcloud] as the external storage (local option)

To answer your question: MySQL runs on the same Jail/VM. Not sure about caching, as I’m using the standard install from Plugins (FreeNAS). First time with Nextcloud.

Here is an occ files:scan --all run for my setup:

+---------+--------+--------------+
| Folders | Files  | Elapsed time |
+---------+--------+--------------+
| 17400   | 306514 | 00:04:02     |
+---------+--------+--------------+

MySQL will be very I/O-bound, especially on spinning hard disks. You should really go with a Redis cache.

This is my redis config for nextcloud:

  'memcache.local' => '\\OC\\Memcache\\Redis',
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' =>
  array (
    'host' => '/tmp/redis.sock',
    'port' => 0,
    'timeout' => 0,
    'dbindex' => 0,
  ),

Of course, if you run Redis in a jail, the socket file must be accessible from the php and php-fpm instances too. Otherwise you need to use an IP address instead of a socket.
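A quick sanity check for the socket setup (assuming redis-cli and the phpredis extension are installed where you run this):

```shell
# Can we reach Redis over the socket at all?
redis-cli -s /tmp/redis.sock ping    # a reachable server answers PONG

# Can PHP reach it the same way?
php -r '$r = new Redis(); var_dump($r->connect("/tmp/redis.sock"));'
```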

Thank you. I’ll take another look at the scan and setup.