Encryption ignored on external nextcloud storage

Nextcloud version (eg, 18.0.2): 18.0.4
Operating system and version (eg, Ubuntu 20.04): official docker version (latest) running on Archlinux

The issue you are facing:
I have two instances of Nextcloud on two separate machines. One instance (I call it slave) is pre-setup and has large storage attached. Unfortunately, it shares its web space with another homepage and thus the data might get compromised.
Therefore I want to install a second instance (master) in another location that uses the slave as external storage. In the master instance, I want to enable server-side encryption. That way on the slave only encrypted data is stored and a potential intruder (on the slave) does not gain much from it.

The problem I face is that the files on the slave are stored unencrypted.

Is this the first time you’ve seen this error? (Y/N): yes, but this is also the first time I have tried this setup

Steps to replicate it:

  1. Set up two instances of Nextcloud
  2. Create a user on the slave to hold all the stored files
  3. Create a folder data for the user on the slave
  4. Enable encryption on the master instance (only the external storage), enable the default encryption module and relogin
  5. Optionally disable the encryption of the home storage
  6. Enable the external storage app on the master
  7. Add an external storage instance of type nextcloud with the settings of the slave instance
  8. Add a file to the external storage and put some plain text to it
  9. Read the written file on the slave instance
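For reference, steps 6–8 can also be sketched with occ on the master. Hostnames, user names, and paths here are placeholders, and the exact option names should be verified against `occ files_external:create --help`:

```shell
# Enable the external storage app on the master (step 6).
php occ app:enable files_external

# Mount the slave as external storage of type "nextcloud" (steps 7).
# Host, credentials, and root folder are hypothetical placeholders.
php occ files_external:create /data nextcloud password::password \
  -c host=https://slave.example.com \
  -c root=/data \
  -c secure=true \
  -c user=storageuser \
  -c password=secret
```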

There are no logs regarding the issue on my machines, as far as I can tell.

Hi Christian,

Did you check --INSIDE-- each file to see whether it is encrypted or not? The default server-side crypto engine does not encrypt metadata (like filenames). Only the actual data inside the file is encrypted. And even then, you will see a pretty long header of what looks like empty space at the beginning of the file. If you see this, then your file has been encrypted server-side.
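That header check can be scripted. A minimal sketch, assuming the marker string written by the default encryption module starts with `HBEGIN:oc_encryption_module` (the sample files and paths below are stand-ins, not real Nextcloud data):

```shell
# is_encrypted FILE — heuristic: Nextcloud's default encryption module
# writes a plaintext "HBEGIN:oc_encryption_module" marker at the start
# of each encrypted file.
is_encrypted() {
  head -c 32 "$1" | grep -q '^HBEGIN:oc_encryption_module'
}

# Demo with two local stand-in files (contents are illustrative):
printf 'HBEGIN:oc_encryption_module:OC_DEFAULT_MODULE:cipher:AES-256-CTR:HEND' > /tmp/enc_sample.txt
printf 'Hello world' > /tmp/plain_sample.txt

is_encrypted /tmp/enc_sample.txt && echo "enc_sample: encrypted"
is_encrypted /tmp/plain_sample.txt || echo "plain_sample: not encrypted"
```

Run this against the raw file in the slave’s data directory: if the marker is missing and you see your plain text, the file was stored unencrypted.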

Here, to achieve what you are looking for, I mounted an NFS share from my FreeNAS server inside my docker host. That directory is then remapped inside the container. That way, the container is not even aware that the storage is not local. Even a fully compromised container would not be able to do anything about it. Once the encrypted files are saved on the FreeNAS server, that one does ZFS replication to a second FreeNAS for redundancy. That one is not physically secured to the same level, but thanks to the server-side encryption, that is not a problem.

No, I put some dummy text (Hello world or so) in the text file using the master. Then, on the slave, I was greeted by exactly those words. I could have made MD5 sums, but this is definitely not a working encryption. I did not only check the file name.

So, am I understanding you correctly that you have set up FreeNAS as the storage backend, then a mount to the docker host, and on that you save the encrypted files from Nextcloud? Or are you cascading the Nextcloud instances like I try to do?

Indeed that’s what I did…


Indeed, but you are still missing one step. On the docker host, the NFS share from FreeNAS is mounted as /mnt/freenas. When I deployed the Nextcloud container, I re-mapped the /mnt/freenas directory to /var/www/data, which is Nextcloud’s data directory.

So Nextcloud is writing “local” files in its own data directory.
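As a sketch of that remap (the NFS server name and export path are hypothetical; the data directory path follows the post above, so adjust it to wherever your image actually keeps its data directory):

```shell
# On the docker host: mount the NFS export from FreeNAS.
mount -t nfs freenas.local:/mnt/tank/nextcloud /mnt/freenas

# Start the Nextcloud container with that directory bind-mounted as its
# data directory, so Nextcloud sees it as plain local storage.
docker run -d --name nextcloud \
  -v /mnt/freenas:/var/www/data \
  nextcloud
```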

When I did some testing with Federation, I chained one Nextcloud to the other. The thing is, in that configuration, the first Nextcloud decrypts everything before sending it to the second Nextcloud. Server-side encryption is only “inside” Nextcloud. When you cascade a second Nextcloud, the content from the first will be decrypted before being sent to the second, because that second Nextcloud is not “inside” anymore from the first one’s point of view.

Here we are getting close to my idea. In your example, which is the first and which is the second NC instance? The second instance mounts storage externally from the first, right? And the first one has server-side encryption enabled, am I correct?

If that is correct, my plan was the very same, but with encryption enabled on the second. That way, the first stores just binary blobs (encrypted data, but it is not aware of that) and the second one just uses any external storage and transparently encrypts/decrypts while writing.
This should work, or am I misled?

I will have to set up a complete testing instance, without any other traffic, to see whether this works and to keep the logs clean and crisp so I can easily filter out the relevant parts.
Any additional information/suggestions/points are highly welcome.

Nope. Each one was fully independent, with its own storage. That’s what Federation is…

What you need is an HA solution, not two independent Nextclouds. In an HA setup, you need HA storage, an HA database (a cluster), and an HTTP load balancer. Start by doing all of that in clear text. You will play with server-side encryption after that…

What do you mean by HA? High availability?

Why would I need high available storage/database? This is something completely different in my understanding than I need…

I am a bit lost and curious, what your thoughts on this are.

Hi Chris,

Indeed, HA means High Availability.

I understand that what you described is the need to borrow some HDD space from your low-trust server for the benefit of the higher-trust one. Because you need to use the second server as a backend, you should not interface with it through a frontend. So use your second server as an NFS server or something like that (like I do), and you will be in business.

If you insist on interfacing with that second server through a frontend, then putting these two guys in HA just makes sense. They both serve the same content, associated with the same users, …

Your problem as of now is that you are using a frontend interface to play the role of a backend interface. That confusion also affects the way server-side encryption is meant to be used. Its role is toward the backend, but you linked it with a frontend. Seeing a frontend, server-side encryption decrypts the data before sending it. That is normal, expected, and as it should (actually, as it must) be.

I am sorry but this sounds so strange to me.

Just to explain my setting a bit, so it might become more obvious what the culprit is. I am working for a charitable club (so finances are critical). There is already a Joomla web page running. Now a Nextcloud storage is needed as well, to exchange data.

The webspace package we booked has quite some spare storage, so ideally the two parts (Nextcloud and Joomla) could easily run off the same storage. However, it is a pure webspace package with PHP/MySQL; no root server or other fancy stuff.
The data in the NC installation might be personal and critical; it must not be compromised. As I trust the NC codebase much more than the Joomla codebase (with all the involved and installed packages), I fear an attacker could gain access to the Joomla page (via whatever backdoor/bug/…) and then access the critical NC data.

To overcome this, I had the idea to obtain a very small webspace package or VPS that will run the NC itself. That NC should encrypt the data and store it (in encrypted form) on the big webspace. If an attacker manages to hack the Joomla instance, he would only get access to the encrypted files, not the real file content.

One way to go is to somehow mount the big server’s storage into the small VPS. Then Nextcloud would not even know that another server is involved. Unfortunately, I do not have high-speed SSH/SFTP access to the big storage. I am also not aware of any other protocol that could solve this issue (FTP might work, but I do not know whether it would work via a FUSE mount or similar).
Long story short: I cannot easily mount the big storage into the small VPS.

However, I think this is, somehow, the HA solution you were thinking of. The problem is: I do not have a low-trust server but a low-trust webspace, which is not as capable as a root server.

As far as I understand things, there is a way to transparently add external storage to a Nextcloud instance. I have already used it in my personal instance to include a folder from the server’s file system in the cloud. However, Nextcloud provides a whole set of drivers for external storage, such as SFTP, FTP, WebDAV, SMB/CIFS, Amazon S3, … All of these allow the admin to configure the NC instance to store the data (partly) off-server somewhere.

Now we are getting closer to the hairy part (at least linguistically). To have read-write access to the big data storage, my idea was to install an NC instance there as well. The NC on the small VPS/webspace can then include the big-storage NC instance as WebDAV or as the optimized Nextcloud external storage type. This is already working in my testing environment.

The idea of server-side encryption is to protect exactly this type of outsourced data. So, in my understanding, I would enable server-side encryption on the small VPS/webspace. This should cause all write operations to any backend (be it Amazon S3 or a WebDAV storage) to be encrypted. This is mainly intended against data leakage on these external storages; otherwise, server-side encryption does not provide any security benefit.

So, you are right, the NC instance on the big storage has a user frontend. But it has an API frontend as well, called WebDAV. The instance on the small VPS is perfectly capable of using this API to write remotely.
I cannot link the encryption with either frontend or backend; it is always associated with a whole NC instance. In my case, the instance on the small VPS. This instance has a frontend (web and, again, API) and a backend (local native storage in Nextcloud and the remote storage in the other Nextcloud on the big webspace). If all data is encrypted before being sent to the backend, this should work as I intend.

This paragraph is a disturbing one. How can encryption decrypt non-encrypted data? Assume a user wants to store a file in such a location on external storage (be it Amazon S3 or NC WebDAV). The data comes into the frontend unencrypted (ok, it was transport-encrypted by the HTTPS layer, but forget about that). Then the NC instance on the small VPS decides where to store it and selects the big external storage location. The encryption should now encrypt the data and pass the result to the backend (which will open an HTTP(S) transaction to the second NC instance).

Or are you referring to the reading process? Then of course it must go the other way round. But after writing some data through the chain described above, the data should not be readable when accessing the raw data on the second instance.

Can you not interface using a WebDAV share instead of a native full-fledged Nextcloud? A basic WebDAV share will act as a simple storage backend, like you need…

Nextcloud contains a WebDAV server API. The webspace itself is not capable of WebDAV as far as I know. I will have to ask, but I doubt it. Strato (the provider of the big storage) has its own storage solution (HiDrive), but that costs extra, of course.

This is exactly my problem. If you read the conversation again, you see that I was able to mount the backing-storage Nextcloud into the Nextcloud instance in question, but no encryption is done. This goes against all intuition and every piece of documentation I have found so far.
I wanted to know whether anyone has had a similar problem; otherwise I will have to open a bug issue on GitHub.

Again, this looks normal because you are chaining two frontends. You need to access your storage as nothing more than what it is: storage.

So you may try to use Nextcloud’s raw WebDAV interface. For that, do not mount that storage as being another Nextcloud; mount it as a dummy WebDAV storage. In this config, you should achieve what you are looking for. But again, it is normal for Nextcloud to decrypt its content when sending it to an end user or another frontend.
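Concretely, mounting the slave as plain WebDAV from the command line might look like the sketch below. The host, user, and path are placeholders, and the option names should be double-checked against `occ files_external:create --help`:

```shell
# Hypothetical: mount the slave's WebDAV endpoint as a plain "dav" storage
# type instead of the "nextcloud" storage type, so the master treats it
# as a dumb backend.
php occ files_external:create /bigstore dav password::password \
  -c host=slave.example.com \
  -c root=/remote.php/dav/files/storageuser/data \
  -c secure=true \
  -c user=storageuser \
  -c password=secret
```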

I just found this issue on GitHub. It might be related to the problem here.