Amazon Cloud Drive, Joomla, Nextcloud: how to?

Hi to all,

1) I did not find any guide: how can Amazon Cloud Drive be connected to Nextcloud?

2) How can, for example, a Joomla component display in batch 100 categories of photos with about 200 pictures in each? Of course, the photos are stored on a Nextcloud external drive, let’s say Google Drive, encrypted.

After you put the images on Nextcloud, right away you can share them and get the access links. In Joomla there is a component, hwdMediaShare, that lets you link the pictures remotely from Nextcloud. That is fine if you have 100 or 500 pictures, but if you have 20,000 pictures, how can you display them in the Joomla component?

3) I am using CentOS 6.x with cPanel on my server. Is there any kind of configuration that can be done on the server so that the Nextcloud space appears like a drive in Windows, and so that Joomla is able to batch the photos from Nextcloud into that component? hwdMediaShare has batch capability, but it can only batch from the server, and the files are then added on the server. The files need to stay on the cloud; I do not wish to store them on the server.

Any advice or guidance is appreciated, or any other good idea on how to do this.



Amazon Cloud Drive is “special” … you can’t connect it like an S3 bucket … So at the moment I think it is not possible to connect it to Nextcloud.


@Carl_Sepp Perhaps you can split your post into three separate posts, one for each question?
This would make it easier for everyone to follow the discussion.



What do you mean by “it’s special”?

As long as Google Drive and Dropbox can be connected to Nextcloud, what is the reason Amazon Cloud Drive can’t be used?

Is it a commercial deal between Amazon (S3) and Nextcloud? If that is the case, it is good to know.

I won’t post a new topic, because I wrote my questions very clearly as 1, 2, 3, and for anyone who can help and can read English, I don’t think it is a problem to follow them in a single post. Moreover, they are related to each other, as from my point of view these things should work together: being able to batch-upload pictures to Nextcloud does not help at all if Nextcloud cannot then show your images in a Joomla component to your family and friends on a CMS website, since Nextcloud is not a CMS and can’t act as one.

In theory it is possible; I would guess that nobody has implemented it yet. Someone already tried to write an app for it, but the current status is not clear:

I think it is “special” because it’s not accessible via S3, WebDAV or … ???

And now I’m out of this topic.

How To Ask Questions The Smart Way

Can you mount an Amazon Cloud Drive directly into your filesystem? Then you could include this as external storage in your Nextcloud.
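If you do get such a filesystem mount working, here is a minimal sketch of registering it in Nextcloud as “local” external storage via occ. The mountpoint path, the mount name “/acd” and the Nextcloud install path are assumptions; adjust them to your setup.

```shell
# Sketch: attach an existing FUSE mountpoint to Nextcloud as "local"
# external storage. Paths and the "/acd" mount name are assumptions.

# Pure helper: is a path present as a mountpoint in a mount table?
is_mounted() {
    # $1 = path, $2 = mount table file (defaults to /proc/mounts)
    grep -qs " $1 " "${2:-/proc/mounts}"
}

MOUNTPOINT=/var/clouddrive/decrypted
if is_mounted "$MOUNTPOINT"; then
    # Register the mounted directory as external storage in Nextcloud.
    sudo -u www-data php /var/www/nextcloud/occ \
        files_external:create "/acd" local null::null \
        -c "datadir=$MOUNTPOINT"
else
    echo "$MOUNTPOINT is not mounted; skipping occ registration" >&2
fi
```

The guard matters: if the FUSE mount is down, Nextcloud would otherwise happily index an empty stub directory.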

@guddl In here, there are apps like NetDrive or odrive that advertise: “Direct access to cloud storage from your desktop. Manage FTP, WebDAV and NAS servers as virtual drives. Connect Google Drive, Dropbox and more.”

Including Amazon Cloud Drive.
In odrive:

“Unified Storage: Cloud storage is better when it’s unified. odrive aggregates all your accounts into one system. One password, one application. See all the storage you can link to odrive.”, including Amazon Cloud Drive.

So I think, if these people did it in those apps, it should be possible to add it to Nextcloud as well, right?

@tflidd I checked out that GitHub link, and it seems it dates back to ownCloud. That guy storycrafter, who did try to do something, was not helped at all by any of the official ownCloud members; he got stuck and did not get any more answers. At least that is what I see following the post link you shared.

Maybe someone, or you if you have an account there, can inform him about Nextcloud and do this for Nextcloud, since in the future apps for ownCloud may not work in Nextcloud or vice versa. For me it is not yet clear whether ownCloud and Nextcloud 11 will move and work together, as there will certainly be changes. Maybe someone can confirm officially whether ownCloud will still be compatible with Nextcloud, whether apps and all kinds of things will still work from Nextcloud 11 on, or whether they will be backported from ownCloud.

I don’t think the developer (username storycrafter) knows about these moves, and he seems to be stuck, with nobody helping him.

As I see, they talk about acd_cli, rclone, rsync and Linux stuff. A Linux admin, Christian, gave me a guide, and I will share it here. Maybe you can point storycrafter to this discussion; maybe what I will write in my next reply will help him build that thing correctly. If Nextcloud also supports this, I think it will be easier… if not, I still believe it can be done…

@tflidd How do you mount Amazon Cloud Drive into your filesystem?! Any guide?

I will make a new reply with the information from the Linux server admin.

A guide for point 3, from a Linux server administrator. Maybe someone finds this useful and includes all this in an Amazon Cloud Drive app, or something to be able to connect Amazon Cloud Drive to Nextcloud.

What we want is the ability to mount the CloudDrive in Linux and encrypt everything before uploading.

Let’s get cracking.

Part I: acd_cli
yadayada wrote acd_cli, hosted on github.
This jewel is a Python application that connects and mounts(!) the
CloudDrive via FUSE on Linux. Enough words, more installing! For CentOS 7:
yum install epel-release
yum install python34-setuptools w3m
easy_install-3.4 pip
pip3 install --upgrade git+
First we install the EPEL repository that is needed for Python 3.4
and pip (3.4 edition), which we install with commands 2 and 3 (w3m
is a text-based browser). Command 4 installs a fork(!) of acd_cli that
includes some needed workarounds, but more on that later on.
Try launching the command:
[arrakis ~]# acd_cli
usage: acd_cli [-h] [-v | -d] [-nl] [-c {never,auto,always}]
[-i {full,quick,none}] [-u]

acd_cli: error: the following arguments are required: action
Now you need to authorize acd_cli with your amazon account, launch:
[arrakis ~]# acd_cli init
For the one-time authentication a browser (tab) will be opened at
Please accept the request and save the plain-text response data into a file called “oauth_data” in the directory “/root/.cache/acd_cli”.
Press a key to open a browser.
Hit a key, and (I am assuming you are running a headless server) w3m
will open. Navigate in there, enter your password, and click OK all the way
until you see something like this:
{
  "access_token": "4c8lwGmDqF2WZbcYDHI9MZ3040ATLKPU0fQkKTMNYYtx4gqC5bqAlUVvKyZ",
  "exp_time": 1472910000.2017406,
  "expires_in": 3600,
  "refresh_token": "nDQVHrRvgU1MI1QoLuUAPHtz7CsLp3RIANxyQ2KyMf5EvoxyyEAUeLNarax",
  "token_type": "bearer"
}
Copy that text, or directly save it to a file at
~/.cache/acd_cli/oauth_data, then(!) resume the process. Keep that file
as private as possible; it opens your vault without any further
authorization. Now try your connection:
$ /bin/acd_cli sync
Getting changes…
Inserting nodes.
If that’s all you’re seeing: acd_cli works and has just synced to your account.
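Since the sync talks to a remote API, it can fail transiently. If you plan to script the sync rather than run it by hand, a small retry wrapper saves manual restarts; this is a generic sketch, nothing acd_cli-specific:

```shell
# Retry a command up to $1 times, pausing RETRY_DELAY seconds (default 5)
# between attempts; returns non-zero only if every attempt failed.
retry() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep "${RETRY_DELAY:-5}"
    done
    return 0
}

# Usage: retry 3 acd_cli sync
```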
Part II: The mounting
At this point we can actually do our first mount, but first let’s agree on some basics:
  • We will use ‘/var/clouddrive’ as a base for all other stuff.
  • We will store all encryption-related files in ‘~/encryption’.
  • We will mount the CloudDrive on ‘/var/clouddrive/encrypted/’ (encrypted files view).
  • Your current, unencrypted data that you want to encrypt and upload resides in ‘/media’.
Create the directories (the ${base} variable used throughout refers to our base path):
base=/var/clouddrive
mkdir -p ${base}
mkdir -p ${base}/encrypted
mkdir -p ${base}/decrypted
Let’s mount Amazon CloudDrive right in there for the first time:
acd_cli mount ${base}/encrypted/
The command should exit without any errors, check that it works:
[plex@arrakis ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
ACDFuse 100T 1G 100T 1% /var/clouddrive/encrypted

[plex@arrakis ~]$ ls -lha /var/clouddrive/encrypted
total 0
drwxrwxrw- 1 acd acd 0 Aug 31 13:14 .
drwxr-xr-x 4 acd acd 38 Aug 31 10:33 ..
drwxrwxrw- 1 acd acd 0 Aug 31 13:13 Documents
drwxrwxrw- 1 acd acd 0 Aug 31 13:14 Pictures
drwxrwxrw- 1 acd acd 0 Aug 31 13:14 Videos
The mount is active and you can even list the contents of the
CloudDrive. Compare that to the web interface and it should match. If
you don’t care about encryption, then your journey can end here. It
should not, however.
We do not need the mount for now, dismount with
umount /var/clouddrive/encrypted
Part III: The disciphers of Data
Disciples, Disciphers… As long as you
keep reading and preach the word of encryption it should matter not.
Less talk, we have data to encrypt. I did create a “Storage” directory
inside the CloudDrive which I’ll use for the encfs. You can encrypt all
of it, but you might want to have, at some point, an unencrypted area
for sharing or whatnot. Better keep your options open.
Install encfs:
yum install encfs
Here is the thing: you have a lot of
data right now that you want to upload, and fast. rsync comes to mind,
but rsync and acd_cli do not play nice at this time (extremely high CPU
load, extremely low throughput); that’s why we cloned the fork of
acd_cli. Still, uploading is a mess. This is why we will not use rsync
for the initial upload but another tool; more on that later. Let’s set
up the encryption. For that we will use encfs in its reverse mode,
that is, it will mount a virtual encrypted view of the data, encrypting it on the fly. Normally encfs works the other way around: it mounts encrypted data that it decrypts on the fly.
  • Normal mode: on the hard disk, encrypted files; the encfs mount shows decrypted files.
  • Reverse mode: on the hard disk, decrypted files; the encfs mount shows encrypted files.
But since your data is stored decrypted on, say, your NAS, we will upload the encrypted view of it. Setup time.
Before you start the commands below,
create a password. I suggest using ‘pwgen’ with ‘pwgen -s 128 1’; this
will create a single, secure, 128-character password.
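If pwgen is not packaged for your distribution, a comparable password can be drawn straight from /dev/urandom with coreutils only. This sketch also creates the ~/encryption directory with owner-only permissions and stores the password in reverse.pass, the file used later for auto-mounting; the paths follow the layout agreed above.

```shell
# Draw N random alphanumeric characters from the kernel's CSPRNG
# (an alternative to pwgen when it is not available).
gen_pass() {
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-128}"
}

umask 077                       # new files readable by the owner only
mkdir -p ~/encryption
gen_pass 128 > ~/encryption/reverse.pass
```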
mkdir -p ${base}/reverse
encfs --reverse /media /var/clouddrive/reverse
Creating new encrypted volume.
Please choose from one of the following options:
enter “x” for expert configuration mode,
enter “p” for pre-configured paranoia mode,
anything else, or an empty line will select standard mode.
?> x

Manual configuration mode selected.
The following cypher algorithms are available:

  1. AES : 16 byte block cipher
    – Supports key lengths of 128 to 256 bits
    – Supports block sizes of 64 to 4096 bytes
  2. Blowfish : 8 byte block cipher
    – Supports key lengths of 128 to 256 bits
    – Supports block sizes of 64 to 4096 bytes

Enter the number corresponding to your choice: 1

Selected algorithm “AES”

Please select a key size in bits. The cypher you have chosen
supports sizes from 128 to 256 bits in increments of 64 bits.
For example:
128, 192, 256
Selected key size: 256

Using key size of 256 bits

Select a block size in bytes. The cypher you have chosen
supports sizes from 64 to 4096 bytes in increments of 16.
Alternatively, just press enter for the default (1024 bytes)

filesystem block size: 4096

Using filesystem block size of 4096 bytes

The following filename encoding algorithms are available:

  1. Block : Block encoding, hides file name size somewhat
  2. Block32 : Block encoding with base32 output for case-sensitive systems
  3. Null : No encryption of filenames
  4. Stream : Stream encoding, keeps filenames as short as possible

Enter the number corresponding to your choice: 1

Selected algorithm “Block”

reverse encryption - chained IV and MAC disabled
Enable per-file initialization vectors?
This adds about 8 bytes per file to the storage requirements.
It should not affect performance except possibly with applications
which rely on block-aligned file io for performance.

Configuration finished. The filesystem to be created has
the following properties:
Filesystem cypher: “ssl/aes”, version 3:0:2
Filename encoding: “nameio/block”, version 4:0:2
Key Size: 256 bits
Block Size: 4096 bytes
File holes passed through to ciphertext.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism. However, the password can be changed
later using encfsctl.

New Encfs Password:
Verify Encfs Password:

That should have created your initial
reverse view; try ‘ls -lha /var/clouddrive/reverse/’. If you see a bunch
of garbage: it worked. You also have a new file in /media/, named
.encfs6.xml. This is your supplemental encryption file that, combined
with your password, will decrypt the data. Keep both in a super-safe
place. If you lose either, you will no longer be able to access your data.
Move the file to our predetermined location:
mkdir ~/encryption
mv /media/.encfs6.xml ~/encryption/reverse.xml
We will store all encryption-related things in ~/encryption. Also
create a file ‘~/encryption/reverse.pass’ and paste your password there
on a single line. This is optional and will help with auto-mounting.
Part IV: The Syncing
At this point you should be able to
mount Amazon CloudDrive (which is not mounted currently) and have a
reverse-encfs mount up and running; let’s sync. Like I said, syncing
with rsync is currently not possible, so we will employ rclone,
a command-line tool that can handle most cloud providers, including
Amazon CloudDrive. Depending on your server, download the appropriate
binary from
I am using a Raspberry Pi 3 for the upload, which maxes out my 4 Mb/s nicely
with plenty of resources to spare. The RPi 3 uses the ARM binary for
Linux. Download, unpack and place rclone in ‘/usr/local/bin/rclone’. For
further installation help, consult their installation docs.
You’ll then need to do a first-time setup (only once); they also have a very detailed howto for that:
Done? Now simply sync the encrypted files to your CloudDrive:
rclone sync /var/clouddrive/reverse remote:Storage/
Replace ‘remote’ with whatever you named
your remote in rclone during the setup. If you took my advice and
created a ‘Storage’ directory, then that command will upload all the
encrypted files into your CloudDrive. I really, really recommend
launching that sync command from within a screen or tmux session. If all worked well you’ll see (once a minute) a status output like this:
2016/09/02 10:19:26
Transferred: 928.978 GBytes (3.890 MBytes/s)
Errors: 0
Checks: 2
Transferred: 899
Elapsed time: 67h56m1.2s

  • …1JrzoMcLjfGxQTbL1zsG3hCu23FI05r19H6Ek4l148: 94% done. avg: 1020.0, cur: 834.5 kByte/s. ETA: 4m1s
  • …6jftoXHq-0n-AzZxg/JLV-hmzY,QGSqaLmmLKSPvcI: 3% done. avg: 739.5, cur: 775.2 kByte/s. ETA: 19m59s
  • …ClFXawwAnT,8aoErCjOH-aZP2HyHDGrmdQ3XNHhp97: 66% done. avg: 892.0, cur: 717.1 kByte/s. ETA: 11m52s
  • …VMBMmHAa23,LVkjy2zyNHHUvv4kOWI3xXBmRZg0Jk5: 31% done. avg: 905.0, cur: 734.7 kByte/s. ETA: 53m26s
    Once some files have uploaded, check your CloudDrive web interface; you should only see encrypted files inside ‘/Storage’.

So let it run and finish uploading. If
the rclone command aborts or is interrupted at some point, it’s safe to restart
it. It will skip already-uploaded files, so resuming is painless. rclone has
survived several DSL disconnections so far without dropping out at all. I
am still on my first run of that command.
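If you would rather automate the restarts than babysit a screen session, a cron entry guarded by flock retries the sync hourly while refusing to start a second copy while one is still running. A sketch only; it assumes the ‘remote’ name from the setup above and rclone in /usr/local/bin:

```shell
# /etc/cron.d/clouddrive-sync (sketch): flock -n exits immediately if
# another sync still holds the lock, so runs never overlap.
# 0 * * * *  root  flock -n /var/lock/clouddrive-sync.lock \
#     /usr/local/bin/rclone sync /var/clouddrive/reverse remote:Storage/
```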
Part V: Accessing your Data
At some point you might want to access your data. Let’s also assume
you want to access the data on another server. Repeat the acd_cli and
encfs installation as we did earlier. You can skip the reverse mounting and
rclone completely. Create the directories:
mkdir -p ${base}
mkdir -p ${base}/encrypted
mkdir -p ${base}/decrypted
mkdir -p ~/encryption
Copy the ‘~/.cache/acd_cli/oauth_data’
file from your rclone-upload server to this server so you can skip the
authorization altogether. Also copy the ‘~/encryption/reverse.*’ files
likewise. Next, create a ‘~/encryption/’ file and paste this:

#!/bin/sh
base=/var/clouddrive
encfs_pass="$(cat ~/encryption/reverse.pass)"

if [ ! -e ${base} ] ; then
    mkdir -p ${base}
    mkdir -p ${base}/encrypted
    mkdir -p ${base}/decrypted
fi

if [ "$1" = "mount" ] ; then
    acd_cli sync
    acd_cli mount ${base}/encrypted/ || exit 1
    ENCFS6_CONFIG="$HOME/encryption/reverse.xml" encfs --extpass="/bin/echo ${encfs_pass}" ${base}/encrypted/Storage ${base}/decrypted
    exit 0
fi

if [ "$1" = "umount" ] ; then
    fusermount -u ${base}/decrypted
    fusermount -u ${base}/encrypted
    exit 0
fi

echo "Please supply mount or umount."
and make it executable:
chmod 0700 ~/encryption/
This script will mount Amazon CloudDrive and encfs, and dismount them afterwards. Like so:
$ ~/encryption/ mount
Getting changes…
Inserting nodes.
This will result in two mounted directories:
$ df -h | grep cloud
ACDFuse 100T 912G 100T 1% /var/clouddrive/encrypted
encfs 100T 912G 100T 1% /var/clouddrive/decrypted
You can now use the decrypted directory to your heart’s desire, minus rsync for now.
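Before trusting the pipeline end to end, it is worth checking that a file read back through the decrypted mount is byte-identical to the original. A tiny helper; the paths in the usage line are examples:

```shell
# Byte-compare an original file with its counterpart as seen through the
# decrypted encfs mount; cmp -s is silent and only sets the exit status.
same_file() {
    cmp -s "$1" "$2"
}

# Usage:
#   same_file /media/photo.jpg /var/clouddrive/decrypted/photo.jpg \
#       && echo "round-trip OK"
```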

The acd_cli method seems to be the best and only way today. I haven’t used it for this purpose, but I have for other reasons and it seems to work well enough.

I’m jumping back in to finish implementing first-order support in Nextcloud now, but I see Amazon has made Amazon Drive access invite-only, and all my ACD config is missing from their dev console. I’ve asked for an invite and will report back when I get a response.

I’m currently trying to use Nextcloud through an acd_cli FUSE mount; however, Nextcloud is only able to read the mounted directory if I mount it with the -ao (allow other users to read the mount) option. Unfortunately, Nextcloud then complains about the directory being readable by users other than itself, thus locking me out of the web view:

Data directory (/var/www/nextcloud/data) is readable by other users
Please change the permissions to 0770 so that the directory cannot be listed by other users.

I don’t seem to be able to run acd_cli as the www-data user either. While we wait for native ACD support in Nextcloud, is there a way to get this working with the acd_cli FUSE mount?

You could run the webserver as a different user that can also run acd_cli.
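Another option, staying with the poster’s -ao mount: make sure FUSE permits allow_other for non-root users, and keep the mountpoint outside the Nextcloud data directory (registering it as external storage instead), so the 0770 permission check on data/ never applies. A sketch only; the paths and the helper names are assumptions:

```shell
# Pure helper: does a fuse.conf file already enable user_allow_other?
has_user_allow_other() {
    grep -qs '^user_allow_other' "${1:-/etc/fuse.conf}"
}

mount_outside_datadir() {
    # Non-root mounts with allow_other need user_allow_other in fuse.conf.
    if ! has_user_allow_other /etc/fuse.conf; then
        echo 'user_allow_other' | sudo tee -a /etc/fuse.conf >/dev/null
    fi
    # Mount OUTSIDE /var/www/nextcloud/data and add the directory as
    # external storage in Nextcloud, so the 0770 check on data/ is avoided.
    acd_cli mount -ao /var/clouddrive/encrypted
}
```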

The best would be to have this in Nextcloud, the same as we can add external Google Drive, S3, FTP, etc.

@storycrafter Glad you’ve added that on GitHub.

My question is regarding “fork and adapt it to call acd_cli! This should just work!”

Did you try it this way, and is it working?

If the adaptation is attached here on the forum, we can help by testing whether it works; maybe some people can help with debugging in case it does not.

After we succeed with it, can the Nextcloud team and management add this feature as part of the core, so that we don’t have to hack or modify files each time we upgrade Nextcloud?

I think we can consider this a dead project, as of today Amazon has ended the unlimited offer: once your plan expires or renews, you pay $60 per year for 1 TB, and each additional 1 TB adds another $60, up to a maximum of 30 TB.

Just like in the case of Microsoft OneDrive unlimited, I think people abused the unlimited plan, and now, exactly as in Microsoft’s case, everyone will pay.