Federation in private Linux containers

Hello,

I have two Linux (Debian) containers, each with Nextcloud installed on Apache2 / MariaDB.
Each container has a private IP, and the only way I can connect remotely to a container is via an SSH tunnel, e.g.:

ssh -L 8080:localhost:80 root@x.x.x.x -p xx
ssh -L 8888:localhost:80 root@x.x.x.x -p xx
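
With the tunnels up, a quick sanity check from the PC could look like this (a sketch; it assumes the install lives under /nextcloud, as in the share address below, and uses Nextcloud's standard status.php endpoint):

# each should return a small JSON blob with the Nextcloud version
curl http://localhost:8080/nextcloud/status.php
curl http://localhost:8888/nextcloud/status.php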

I also configured the federation settings as per the official documentation, and listed the two servers in the exceptions (config/config.php).
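
For reference, the trusted domains can also be managed from the command line with Nextcloud's occ tool (a sketch; the web-server user and the /var/www/nextcloud path are assumptions that may differ on your install, and index 3 is hypothetical):

# list the current entries, then append one
sudo -u www-data php /var/www/nextcloud/occ config:system:get trusted_domains
sudo -u www-data php /var/www/nextcloud/occ config:system:set trusted_domains 3 --value=10.10.10.2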

To share a file I perform these steps:

select the file -> Share -> user@http://x.x.x.x/nextcloud

I need to specify "http" because I don't have any SSL certificates.

The receiving side receives the notification properly and asks whether I want to accept the file, but when I press "accept" it gives me this error:

“Failed to perform action lost connection to the server”

The problem is that in the confirmation notification the sender field has this value:

“admin@localhost:8888/nextcloud”

which I think is wrong, because it should be something like "admin@ip_sender_container/nextcloud".

localhost:8888 is the address I use for the SSH tunnel, as explained above.

So in summary: the sending side sends the file correctly, and the receiving side receives the notification of the new file, but with the return address localhost instead of the sending server's IP.

How can I configure the two federated servers so that they can share files?

Thank you in advance, greetings.

Edit: with tcpdump listening on port 80 on both hosts, I see the correct IPs of the two containers,
e.g.: 10.10.10.1 > 10.10.10.2
10.10.10.1 < 10.10.10.2
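
For reference, a capture along these lines shows this (the interface name is a placeholder):

tcpdump -n -i eth0 port 80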

Can you provide the output of your config.php file? Ensure you remove salts, passwords and other identifiable information.

Hi,

I ran into a similar problem. In my case it was because the system didn’t trust the certificate of the other end. But as you don’t have a certificate, that’s not your problem.

What could be your problem is that the federated sharing module expects to see https instead of http. I've seen this in other testing scenarios, where Nextcloud starts requesting https even though it was being accessed via http.

You can easily see what’s going on by running ngrep instead of tcpdump. Something like this should suffice:

ngrep -ed $interface 'port 80 or 8888'
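
If the raw output is hard to read, a variant worth trying (assuming your ngrep supports the -W option) reflows the payloads line by line:

# -W byline prints each payload line separately, which makes HTTP headers legible
ngrep -W byline -d $interface 'port 80 or 8888'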

You will eventually see one of two things. One would be that one container tries to access the other on port 8888, which is not possible unless you have configured something on port 8888 to accept the requests.

The other one could be the start of an https handshake looking similar to this:

T 127.0.0.1:34004 -> 127.0.0.1:80 [AP]
…X.n…O…Mc…=…Kv…L.<…JnRG…r.,…$.s.+…#.r…0…(.w./…’.v…{…5.=…z…/.<.A…}…9.k…|…3.g.E…Y…
…#…http/1.1

On the other hand, you can easily fix this by using SSH's advanced port-forwarding features:

ssh -L $local_interface:$local_port:$remote_host:$remote_port -l $user $host

e.g.:

ssh -L 127.0.0.1:80:127.0.0.1:80 -l zeUser container1.containe.rz

That way you can just browse to http://localhost and you're fine.

But be aware that, depending on the OS you're on, binding to port 80 requires administrative/root privileges, so you can only run the above ssh command as root or via sudo.

In case you're on a Linux machine, another (and imho better) option is to do the following:

Let's say for container 1 you use local port 8881 and for container 2 local port 8882, each forwarded to port 80 on the respective container. Then your ssh commands would look like this:

ssh -L 127.0.0.1:8881:localhost:80 -l $user container1
ssh -L 127.0.0.1:8882:localhost:80 -l $user container2

Now we do a bit of magic and give iptables something to do :). Let's say for container 1 we use the fictional IP address 127.0.0.11 and for container 2 the fictional IP address 127.0.0.12. Fictional in this case means that you really do use these addresses in your setup, but nothing is assigned to them; they just sit in the 127.0.0.0/8 loopback range, which Linux routes to the loopback interface, so the DNAT rules below can catch traffic sent to them.

So now with your two ssh connections open do the following:

Start a root shell and enter these iptables rules:

iptables -t nat -A OUTPUT -d 127.0.0.11 -p tcp --dport 80 -j DNAT --to-dest 127.0.0.1:8881
iptables -t nat -A OUTPUT -d 127.0.0.12 -p tcp --dport 80 -j DNAT --to-dest 127.0.0.1:8882

This redirects outgoing traffic to 127.0.0.11 port 80 towards 127.0.0.1 port 8881, and outgoing traffic to 127.0.0.12 port 80 towards 127.0.0.1 port 8882.
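
To check that the redirect actually works, something like this should do (a sketch; status.php is Nextcloud's standard health endpoint, and the /nextcloud path matches the installs above):

# list the NAT rules, then probe each container through its fictional address
iptables -t nat -L OUTPUT -n --line-numbers
curl http://127.0.0.11/nextcloud/status.php
curl http://127.0.0.12/nextcloud/status.php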

That should completely fix the wrong port showing up in the Nextcloud requests, and you don't have to run the ssh commands as root. Just save the iptables rules so they survive reboots (netfilter-persistent/iptables-persistent are the packages on Debian-based distros).
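
On Debian and friends that would look roughly like this (run as root; the save step writes the current rules to /etc/iptables/rules.v4):

apt install iptables-persistent
netfilter-persistent save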

[EDIT]

Totally forgot: You can now access container 1 via curl http://127.0.0.11 and container 2 via curl http://127.0.0.12 or any other browser you like.

[/EDIT]
Remember, this is done on your machine, not in the containers.

Cu


Thank you all for the timely responses; I will try tomorrow and keep you updated.

UPDATE:

OK, maybe I did not understand: if I send a file from container 1 to container 2, I still see in the notifications

“You received file.txt as a remote share from user@127.0.0.11”.

and when I press "accept" it gives me the same error as before: "Failed to perform action lost connection to the server". Again, I think it should be something like "user@ip_container1".

I followed these steps:

On my PC:

iptables -t nat -A OUTPUT -p tcp -d 127.0.0.11 --dport 80 -j DNAT --to-destination 127.0.0.1:8881 (for container 1) and

iptables -t nat -A OUTPUT -p tcp -d 127.0.0.12 --dport 80 -j DNAT --to-destination 127.0.0.1:8882 (for container 2).

Then:

ssh -L 127.0.0.1:8881:localhost:80 user@$container1_ip -p $container1_port and
ssh -L 127.0.0.1:8882:localhost:80 user@$container2_ip -p $container2_port

This, however, is the config.php of one of the two containers (they are very similar):

'trusted_domains' =>
array (
  'localhost',
  'nextcloud1',
  'nextcloud2',
  '127.0.0.12',
),

I had to put 127.0.0.12 in there, otherwise I could not reach the container from my PC's browser; I think it is wrong to have to add this IP.

To send, I use the following address:

"user@http://private_container2_ip/nextcloud" <-- this is correct, because container 2 receives the notifications.

The solution was to use iptables plus configuring the /etc/hosts file.
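
For anyone landing here later, the /etc/hosts side of that might look like this (a sketch; the nextcloud1/nextcloud2 hostnames are taken from the trusted_domains list above, and the addresses from the DNAT rules):

# on the PC: point the container hostnames at the fictional DNAT addresses
127.0.0.11  nextcloud1
127.0.0.12  nextcloud2

Each container would analogously map the other one's hostname to its private IP, so the federation addresses resolve consistently on both sides.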

Thank you.