LAN sync please

But that could also happen if the internet connection is down for a long time. The idea is: switch to the LAN if you are on the same LAN as the Nextcloud server. With Gigabit LAN it would be so much faster.

Personally, I just add my domain name to /etc/hosts on all of the machines on my LAN, pointing to the server's LAN IP. It's not really a novice-friendly process, but it's really simple and works perfectly for me. Syncing is lightning fast.
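For example (the hostname and address here are made up), one extra line per machine is all it takes:

    # /etc/hosts on each LAN client: point the cloud hostname at the server's LAN IP
    192.168.1.10   cloud.example.com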

2 Likes

And how do you do this on Windows and Android?

IMHO it should be something end users can configure in their Nextcloud client.

You do it with proper DNS.

All of my installs resolve to an internal IP address when on the LAN, so the client never tries the public IP.

3 Likes

2 different topics are mixed here.

The LAN-access part should be covered by NAT loopback on your router.
When you set up e.g. a DynDNS name and access that URL from inside the network, your router should recognize it and connect directly without going out to the internet.
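A quick way to check whether your router does this (hostname made up here): resolve and fetch the public URL from a machine inside the LAN:

    # From a client inside the LAN:
    nslookup cloud.dyndns.example          # should return your public IP
    curl -kI https://cloud.dyndns.example  # only succeeds if the router hairpins (NAT loopback)

If the curl times out from inside while working fine from outside, your router does not support NAT loopback and you need one of the DNS workarounds mentioned in this thread.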

The original poster is asking about two clients syncing directly, without going through the server for the file transfer.
This means that a client would connect to the server;
the server confirms that the file is known and reports its parameters, such as the change date;
the client searches the local network for other clients that have this version;
the clients then transfer the file directly between each other.

In short: the clients would need to open listener ports and act as servers themselves.
A little overkill, I think.

1 Like

You are right, his question is about peer-to-peer sync over the LAN. That should be discussed in this thread: P2P Seeded File Sync

But the idea of switching automatically to LAN sync with the server when the client is on the same local network as the server is also great, in my opinion.

1 Like

Yes, I agree completely. I was just talking about a quick hack that works well for me, and would for the OP, not how we should expect everyone to do it.

1 Like

But you can't simply modify /etc/hosts from any application.
Even if you manage to solve the security/authorization issues, a lot of validation would be required so as not to mess anything up, because there is hardly any file with more immediate impact.

You don't need to modify /etc/hosts. The end user just has to select the Wi-Fi network the server is also connected to. The client then checks via ping whether the server is really reachable on that network; if yes, all is good.
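A minimal sketch of such a check, assuming 192.168.1.10 is the server's LAN address and cloud.example.com its public name (both made up here):

    # If the server answers on its LAN address, sync against that; otherwise use the public URL.
    LAN_IP=192.168.1.10
    if ping -c 1 -W 1 "$LAN_IP" >/dev/null 2>&1; then
        SERVER="https://$LAN_IP"
    else
        SERVER="https://cloud.example.com"
    fi
    echo "Syncing via $SERVER"

A real client would of course also have to verify that whatever answers the ping is the same Nextcloud instance (e.g. by checking the TLS certificate), not just that something replies on that address.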

I hadn't seen the P2P thread, which may well be about the same thing.
And yes, my post was predicated on the server NOT being on the same LAN; if it were, this would not be an issue, simply due to LAN speeds.

All I know is that I open it once with sudo nano /etc/hosts, add my server's LAN IP and domain name, and after I save that it works like a dream for me. YMMV.

I would also like this feature.
As far as I understand it, the way it works with Dropbox is:
Client 1 uploads files to the Dropbox server (external to the LAN).
Client 2 connects to the Dropbox server, sees a file is there, but also sees that it is on Client 1 and grabs it from Client 1, as that is quicker than downloading it from the server.
I think the file being on the server first is to prevent conflicts.

I find this is most useful when you are adding a new client to the system. Rather than downloading GBs of data from a remote server, it grabs them relatively quickly over the LAN from an existing client.

LAN sync (the way Dropbox does it) is not an issue with a locally hosted ownCloud server.

5 Likes

But this way you have to remove or comment out the line every time you leave your internal network.

What I've been using for years is dnsmasq, a caching DNS server. You simply configure it to use your current DNS servers upstream and add a couple of manual entries for your domains. Then, to have each client that connects via DHCP use your internal DNS server, you set the DNS server IPs in your gateway/router to point to your dnsmasq instance. Better yet, if you're running something like DD-WRT or pfSense, you can install dnsmasq directly on the router.
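For example (hostname and addresses made up here), the relevant bits of /etc/dnsmasq.conf could look like this:

    # Forward everything else to the normal upstream resolvers
    server=1.1.1.1
    server=9.9.9.9
    # Answer queries for the cloud hostname with the server's LAN address instead
    address=/cloud.example.com/192.168.1.10

Then point the DNS server handed out by your router's DHCP at the machine running dnsmasq (or run dnsmasq on the router itself, as mentioned above).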

1 Like

Bingo.
Not sure why everyone went off on a tangent, but when the server is on the same LAN this is not an issue;
this is for when the server is on the WAN, like a dedicated server or VPS off the LAN.

Yes, exactly. For example, when the cloud server is out on the real internet (including e.g. a hosted service), it would be very handy if multiple client computers "at home" could pick up a new (often big) file from the "at home" client that just created or modified it, rather than each "home client" downloading it from the server separately, over and over.

2 Likes

I would like to add another (extending) use case:
I have a local ownCloud installation on my NAS/Raspberry Pi/home PC that has access to all the terabytes of local disks lying around, though it may not always be available. Then I have an ownCloud on the web (root server, whatever), always available, with a good connection (upload and download) but limited space, that serves all my phones/tablets/PCs etc. with files/contacts/calendars/etc. The home ownCloud is hooked into the web ownCloud via federation, so that everything is accessible. In the web OC I can decide which data I want to have always available there (similar to the checkbox tree in the Windows client).
I don't want to maintain two sources/client folders in my sync folder: one file, one URL, regardless of where it resides (this should be transparent to me as a user/application program). If the home OC is not available I obviously cannot access that content, unless I checked the box to mirror it on the web OC (which belongs to a completely different user story anyway).
But if the home OC is available, I would not like to push a file (e.g. a movie) from my disk through the home OC to the web OC (with its limited upload bandwidth) and back again to my PC three meters away. The routing should find the local LAN shortcut.
(I guess this mainly needs to be solved as a proxy/routing issue, but I am not an expert in this. The Sabre guy who was at the OCC 2015 in Berlin might be knowledgeable enough on this topic to help.)

Take a look at UnionFS. You can mount your home ownCloud with davfs2 on the cloud server (e.g. at /mnt/webdav). Then you create a "proxy folder" (e.g. /mnt/proxy) which has to have the same folder structure (in part) as /mnt/webdav. You could use rsync or something like that to put the files that should be cached into /mnt/proxy. Then you combine those two directories into one (/mnt/union) with UnionFS. Finally you mount /mnt/union into ownCloud using external storage.
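A rough sketch of those steps on the cloud server, assuming davfs2 and unionfs-fuse are installed and home.example.com is a made-up hostname (the exact mount syntax depends on which UnionFS implementation you use):

    # 1. Mount the home ownCloud via WebDAV (davfs2)
    mkdir -p /mnt/webdav /mnt/proxy /mnt/union
    mount -t davfs https://home.example.com/remote.php/webdav/ /mnt/webdav
    # 2. Copy files that should stay cached locally into the proxy dir
    rsync -a /mnt/webdav/Movies/ /mnt/proxy/Movies/
    # 3. Overlay both, preferring the local proxy copy
    unionfs-fuse -o cow /mnt/proxy=RW:/mnt/webdav=RO /mnt/union
    # 4. Add /mnt/union as external (local) storage in ownCloud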

UnionFS automatically checks if a file is available in /mnt/proxy and uses that file. If it isn't there, it uses the file in /mnt/webdav (from your home ownCloud). UnionFS will always prefer /mnt/proxy over /mnt/webdav. If your home ownCloud is offline, the files in /mnt/proxy are still available (in the same ownCloud folder, because only /mnt/union is mounted inside ownCloud).

You should definitely read the UnionFS docs before doing this, but I think this solution could fit your needs quite well.

2 Likes

Is there any chance we will see local network sync as a regular NC feature in the near future?

1 Like

Not sure about LAN sync, but I feel like I am the only one who has noticed or taken an interest in GSoC 2017.

Maybe post some ideas there?

I wouldn't call it LAN sync; for me that means two Nextcloud servers, and the call should be for server replication.
It's standard split DNS, where the LAN subnet resolves to a different IP and server than the replicated cloud's public IP and server.
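To illustrate (hostname and addresses made up here): with split DNS the same name simply resolves differently depending on where you ask from:

    # On a LAN client, the internal DNS server answers:
    dig +short cloud.example.com    # -> 192.168.1.10  (local server)
    # From outside, the public DNS answers:
    dig +short cloud.example.com    # -> 203.0.113.25  (replicated cloud server)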

It would work transparently depending on where you connect.

I cannot understand why anyone with a busy LAN would choose to work on a remote repository when most of us are on asymmetric broadband connections, while Gig+ symmetric Ethernet is easily achievable on the LAN.
You would have a local server and that would replicate out.

Personally, unless I had multiple remote LANs I wouldn't bother, but if you did have multiple remote LANs it could be worthwhile.
In my scenario I knew the work would be on the LAN, so I installed the server on the LAN; remote access is through a static connection on the wrong end of an asymmetric broadband connection.
It doesn't really matter, though, as remote work is the exception and the client app syncs local files; yeah, it's slower, but it's the exception.

Unless you have requirements spanning multiple remote LANs, I would be more likely to say you have installed Nextcloud in the wrong place.

If you want an example of LAN sync without even needing a central server, see Syncthing. I wonder if the Nextcloud people could join forces with them in some way.