Hello,
I read this blog https://autoize.com/using-s3fs-as-primary-storage-for-owncloud-or-nextcloud/
I would like to do the same. I tried to put the data folder on an s3fs mount, but it doesn’t work.
Any ideas?
Did you do it during installation, before you set up Nextcloud, or did you change a running system?
I did it on a fresh install. I set up s3fs and then, during the installation of Nextcloud, I pointed the data dir to this directory. Of course, I set the right permissions first.
OK, I found a way to do it, and I’ll share it because it is a really good solution: you can buy a cheap VPS and put your data on S3 without knowing your storage needs in advance.
And why not use the built-in S3 backend? I see two reasons:
1 - I haven’t succeeded in making it work: big file uploads always failed because of chunking.
2 - Files are stored under names that you cannot understand.
This method is only doable on an existing installation; you’ll understand why later. If you want to do it on a fresh install, just install locally, then use this trick afterwards.
Let’s get into it.
First, you need to mount the S3 storage, using s3fs or rclone. I used rclone; I found it faster. Mind the permissions on this folder so that Nextcloud can use it. Here is the service file I use for the rclone mount; you can see all the options.
[Unit]
Description=rclone
After=network-online.target
[Service]
Type=simple
Environment=MOUNT_DIR=/mnt/Nextcloud
ExecStart=/usr/bin/rclone mount \
--config=/root/.config/rclone/rclone.conf \
--uid 33 \
--gid 33 \
--allow-other \
--umask 0007 \
--cache-workers=8 \
--cache-writes \
--no-modtime \
--drive-use-trash \
--stats=0 \
--checkers=16 \
--attr-timeout=24h \
--dir-cache-time=24h \
--poll-interval=30s \
--cache-info-age=60m \
--vfs-cache-max-age=1h0m0s \
--vfs-cache-max-size="10G" \
--vfs-cache-mode="full" \
--vfs-cache-poll-interval="1m0s" \
--vfs-read-chunk-size="128M" \
--vfs-read-chunk-size-limit="10G" \
S3:nextcloud "${MOUNT_DIR}"
ExecStop=/bin/fusermount -u "${MOUNT_DIR}"
# Restart info
Restart=always
RestartSec=10
User=root
Group=root
[Install]
WantedBy=default.target
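For reference, the unit above expects a remote named S3 in /root/.config/rclone/rclone.conf. A minimal remote definition might look like the following sketch (provider, region, and credentials are placeholders, not taken from the original setup). Note also that the --allow-other flag requires user_allow_other to be enabled in /etc/fuse.conf.

```ini
# /root/.config/rclone/rclone.conf — hypothetical example remote
[S3]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = eu-west-3
# endpoint = https://s3.example.com   # for non-AWS, S3-compatible providers
```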
Then, you have to bind mount some folders. My Nextcloud installation uses docker and stores its data in /var/www/html/data. Inside this folder, you can see a folder named appdata_xxxxxxxxxxxx, where xxxxxxxxxxxx is the instanceid from the config.php file. It’s really important for speed that this folder stays on the server filesystem.
Then you have to:
- move the appdata_xxxxxxxxxxxx folder elsewhere on the server
- mount the S3 storage on /var/www/html/data (or wherever you put your data in the config.php file)
- bind mount the saved appdata_xxxxxxxxxxxx to /var/www/html/data/appdata_xxxxxxxxxxxx
This is why you cannot use this trick on a fresh install: appdata_xxxxxxxxxxxx doesn’t exist yet.
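Outside of docker, the same layout can be achieved with a plain bind mount. A sketch, assuming the appdata folder was moved to /srv/nextcloud-appdata (a path I made up for illustration; the S3 mount itself is handled by the systemd unit above):

```ini
# /etc/fstab — hypothetical entry for the appdata bind mount
/srv/nextcloud-appdata  /var/www/html/data/appdata_xxxxxxxxxxxx  none  bind  0  0
```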
For docker, here is my docker-compose:
version: "3.8"
services:
nextcloud-db:
image: mariadb:latest
container_name: nextcloud-db
hostname: nextcloud-db
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --skip-innodb-read-only-compressed
restart: unless-stopped
volumes:
- db:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
networks:
- nextcloud
environment:
- MYSQL_ROOT_PASSWORD=$NEXTCLOUD_MYSQL_ROOT_PASSWORD # Mot de passe de l'utilisateur root de mariadb
- MYSQL_DATABASE=$NEXTCLOUD_MYSQL_DATABASE # Nom de la base de données à créer à l'initialisation du conteneur
- MYSQL_USER=$NEXTCLOUD_MYSQL_USER # Nom de l'utilisateur de la base de données créée
- MYSQL_PASSWORD=$NEXTCLOUD_MYSQL_PASSWORD # Mot de passe de l'utilisateur créé
nextcloud-redis:
image: redis:latest
container_name: nextcloud-redis
hostname: nextcloud-redis
restart: unless-stopped
volumes:
- redis:/data
- /etc/localtime:/etc/localtime:ro
networks:
- nextcloud
nextcloud-app:
image: nextcloud:latest
container_name: nextcloud-app
hostname: nextcloud-app
restart: unless-stopped
depends_on:
- nextcloud-db
- nextcloud-redis
volumes:
- nextcloud:/var/www/html
- apps:/var/www/html/apps
- config:/var/www/html/config
- /mnt/Nextcloud:/var/www/html/data
- appdata:/var/www/html/data/appdata_ocg6nllwh7il
# - data:/var/www/html/data
- temp:/var/www/temp
- /etc/localtime:/etc/localtime:ro
networks:
- nextcloud
- dockernet
environment:
- MYSQL_ROOT_PASSWORD=$NEXTCLOUD_MYSQL_ROOT_PASSWORD # Mot de passe de l'utilisateur root de mariadb
- MYSQL_DATABASE=$NEXTCLOUD_MYSQL_DATABASE # Nom de la base de données à créer à l'initialisation du conteneur
- MYSQL_USER=$NEXTCLOUD_MYSQL_USER # Nom de l'utilisateur de la base de données créée
- MYSQL_PASSWORD=$NEXTCLOUD_MYSQL_PASSWORD # Mot de passe de l'utilisateur créé
- REDIS_HOST=$nextcloud_redis
- TRUSTED_PROXIES=nginx-www
- OVERWRITEPROTOCOL=https
networks:
nextcloud:
driver: bridge
dockernet:
external: true
name: dockernet
volumes:
db: {}
redis: {}
nextcloud: {}
apps: {}
config: {}
appdata: {}
data: {}
temp: {}
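The $NEXTCLOUD_MYSQL_* variables are interpolated by docker compose from the environment, typically a .env file next to the compose file. A hypothetical example (the variable names come from the compose file above; the values are made up and should be changed):

```ini
# .env — hypothetical values, change them
NEXTCLOUD_MYSQL_ROOT_PASSWORD=change-me-root
NEXTCLOUD_MYSQL_DATABASE=nextcloud
NEXTCLOUD_MYSQL_USER=nextcloud
NEXTCLOUD_MYSQL_PASSWORD=change-me
```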
As you can see, I have
- /mnt/Nextcloud:/var/www/html/data
- appdata:/var/www/html/data/appdata_ocg6nllwh7il
# - data:/var/www/html/data
This is because I first need to deploy docker with the data volume, so that the appdata_xxxxxxxxxxxx folder gets created, and then copy the content of this volume to the S3-mounted folder and to the appdata docker volume.
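The copy step boils down to a recursive copy that preserves ownership and permissions (which matters, since Nextcloud runs as www-data). A minimal sketch using throwaway temp directories standing in for the real data and appdata volumes (paths and the instanceid are made up for illustration):

```shell
# Demo of the copy step; "src" plays the role of the original data volume,
# "dst" the appdata volume. Replace with your real paths/volumes.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)

# Fake appdata tree, as Nextcloud would have created it on first deploy
mkdir -p "$src/appdata_demo1234/preview"
echo "thumbnail" > "$src/appdata_demo1234/preview/p1.png"

# cp -a copies recursively and preserves ownership, permissions, and
# timestamps; the trailing /. copies the *contents* of appdata into the target
cp -a "$src/appdata_demo1234/." "$dst/"

ls "$dst/preview"
```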
To adjust the PHP settings, I use a .htaccess file in the root folder of the Nextcloud installation (the nextcloud volume for me), and I put these settings:
php_value upload_max_filesize 100G
php_value post_max_size 100G
php_value max_input_time 9999
php_value max_execution_time 9999
php_value output_buffering 0
php_value request_terminate_timeout 9999
php_value set_time_limit 9999
php_value upload_tmp_dir /var/www/temp
I made sure that the timeout directive is set for redis in config.php:
'redis' =>
array (
  'host' => 'nextcloud-redis',
  'password' => '',
  'port' => 6379,
  'timeout' => 0.0,
),
The last thing I did to get rid of chunking is:
sudo -u www-data php occ config:app:set files max_chunk_size --value 0
Or, for docker:
docker exec -u www-data nextcloud-app /bin/bash -c "cd /var/www/html && php occ config:app:set files max_chunk_size --value 0"
Hope this helps someone. At the very least, it will help me the next time I want to set up something like this.
Thanks ulysse132, great post. I have a couple of terabytes to deal with, and I was wondering: did you have to change the file owner:group on your mounted bucket to match your web server’s user?