I have what appears to be a simple request that I think could make quite a big QOL improvement for certain setups.
The Problem:
You have a Nextcloud instance that you access locally, for instance over Ethernet, so your Nextcloud client points at your instance at, for example, 10.0.0.2:88.
However, when you use the desktop client or Explorer context menu to create a link to share with a colleague or client who is remote, they will not be able to use it, as they are not on your local network. The link will look something like this:
The current solution is to log into my Nextcloud web UI, which is set up through a domain, and create the link from there. The resulting share looks like:
Existing solutions:
Something I am currently trying to figure out is using networking hacks to forward a URL to the local instance of Nextcloud. Not a user-friendly option.
A Better solution:
What I’ve noticed is that the unique identifier, in this case ‘ERcKJL6MwMTAcxk’, is the same regardless of where the share was created. All it would take is for the Nextcloud client to substitute a URL stipulated by the user, and you would have a working remote public share available straight from your PC. This could be an option in the settings menu, something like ‘enable URL for remote shares’.
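To illustrate, a minimal sketch of the substitution the client would need to do. The function name, the `cloud.example.com` domain, and the port are placeholders, not anything Nextcloud actually ships; the point is only that the share token in the path stays intact while the scheme/host/port are swapped for the user-configured public base URL:

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_share_link(local_link: str, public_base: str) -> str:
    """Replace the scheme/host/port of a share link with a
    user-configured public base URL, keeping the path (which
    carries the unique share token) unchanged."""
    local = urlsplit(local_link)
    public = urlsplit(public_base)
    return urlunsplit((public.scheme, public.netloc,
                       local.path, local.query, local.fragment))

# Link generated against the local address vs. the rewritten public one:
print(rewrite_share_link("http://10.0.0.2:88/s/ERcKJL6MwMTAcxk",
                         "https://cloud.example.com"))
# → https://cloud.example.com/s/ERcKJL6MwMTAcxk
```

Since the token is identical wherever the share is created, this is purely string surgery on the client side.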
I imagine there are a lot of people with a similar setup who would benefit from this. Any thoughts?
You can set up a local nameserver which points your domain to your local IP, and connect your client to the domain instead of the local IP; then it should work.
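As a sketch, with dnsmasq (which Pi-hole also uses under the hood) this is a single override line; `nextcloud.example.com` and `10.0.0.2` are placeholder values for your own domain and the server’s LAN address:

```
# /etc/dnsmasq.d/10-nextcloud.conf
# Answer queries for the public domain with the server's LAN address,
# so local clients never leave the network.
address=/nextcloud.example.com/10.0.0.2
```

External clients still resolve the domain via public DNS, so the same share URL works from both sides.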
You started a topic in the development category. This category is intended for active developers of the core or of apps in the Nextcloud ecosystem.
From the description in your topic, it is not clear if you are seeking help and advice about a concrete problem you have or you want to actually develop the corresponding solution.
Please specify explicitly the required information to help you best. These are:
What you want to achieve
What you have done so far
What is failing
What you expect from the forum community
Without additional information the community members cannot help you in an efficient manner. Please keep in mind that the help here in the forum is mostly based on the work of volunteers, and thus it is only fair to reduce the burden on them.
If you accidentally posted in the wrong category, just give a hint and a moderator can move the topic to the corresponding category.
Thanks, I suppose my post was just a general feature request, so perhaps this is the wrong category. My intention was just to highlight a feature that could be useful to many users and perhaps not too difficult to implement at some point. If there is a more appropriate place to post feature requests please let me know.
I have actually managed to get this to work with a networking solution: using a combination of nginx to point to the correct port and Pi-hole to resolve the domain query. This is without TLS, however.
It did take me a couple of days to figure out (I am just learning these things), hence the suggestion of a feature that would be user-friendly for less technical users.
I assume that access from the Internet works correctly and also with TLS. As I have described, access from internally should also work in exactly the same way. Perhaps the terms NAT Loopback or NAT Hairpinning will help you. Maybe you can configure it on your router.
Thanks. For now my solution works, and I can live without TLS on my local network, though I will look into certificate solutions. Access from the WAN is of course via TLS.
Change domain URL in shared links · Issue #27240 · nextcloud/server · GitHub - here is a GitHub issue of the same nature, from 2021. What’s more, someone actually implemented this feature in their own version, 21.1 - but it was not taken forward into the main trunk and they did not submit a pull request.
A pity, as I think that implementation is a lot more elegant than having to add a bunch of extra services/networking.
Well, I have a similar problem in one instance. But the problem is in fact a badly configurable router. By default it routes the traffic through the ISP when the WAN IP is used, and as the router has some spoofing protection installed, it rejects the connection “to itself”. Having full control over the DNS solved this, though.
Well, I find it much more elegant to be able to use the same URL from anywhere. And for a normal user it’s definitely still easier than your proposed solution. It’s just more work for you, the admin.
By the way, how would you handle the different URLs on mobile apps? Always log out and log back in again with the other URL when the device leaves/rejoins the local network?
DNS is not a hack, but your proposal would be. Also, with solutions like Pi-hole or AdGuard, a local DNS server is actually relatively easy to set up and maintain, even for less technical admins, and as a bonus you get network-wide ad blocking. So I’d say win-win-win!
Again, the less technical users wouldn’t have to do anything; you, the admin, would.
I can see your point here, but let me elaborate on the use case. The point of this would be for systems where a machine, or multiple machines on a network, are physically connected to a Nextcloud instance via Ethernet. Why would you want to do that? If you are sharing large files with clients and want a quick sync to your Nextcloud, e.g. sharing video edits with clients in the 500 MB to 25 GB range. This is primarily about speed. No one is doing that on mobile, and no one is connecting to their server by Ethernet from a mobile device, so it’s not functionality that would be required for mobile.
Unfortunately, Pi-hole does not allow forwarding to a specific port, so as I mentioned above, I’ve had to use it in combination with nginx but am still struggling with TLS. I’m now looking at other options such as Caddy. If you know of an internal DNS server that can forward to a specific port and deploy certs, then I’m all ears.
I get where you’re coming from, but I think you underestimate the number of Nextcloud admins who are not running enterprise solutions but are actually the admin of a small deployment where they may be the only user. Or perhaps they are an admin for a small or medium home user or business who is not comfortable administering their own DNS/routers etc. Unless I am mistaken and this forum is only for enterprise users? An admin can be a user too.
Well, I had little trouble installing and using my Nextcloud instance on Docker and have been using it for years now, along with about 10 other containers that I also administer. The documentation and guides are excellent. But this particular issue has had me stuck for a while now. If I find a solution I will write up a guide for those less technical users.
A good feature request is not an insult to the devs or a way of being ungrateful, but rather a way of contributing towards improvement:
It looks like there has been some attention to it recently. I honestly think this will be implemented at some point, even if many of you think it is totally unnecessary.
Yeah, exactly the use case I wrote about earlier. I wanted quick (local) access to the data without the need to route everything through the internet (be it via DSL/broadband or via mobile).
You can say “hey, this is mission-critical, I do not want this data to be on the internet” and make an air-gapped system. Then you do not have to worry about any external access anyway.
Once you open it to the public internet, it makes sense to use a common name to access the server (no IPs).
Using a domain name is also more secure, as IP-based access cannot be protected against spoofing and man-in-the-middle attacks. So think twice before you do such a hack. It is simply bad practice.
You will not find this in any DNS solution. DNS only maps a host name (nextcloud.example.com) to an IP address. Specifying a port allows you to run multiple services on one machine. For example, you could run HTTP (on port 80 by default) and HTTPS (on port 443 by default). Most programs are configurable as to which ports they listen on.
What you typically do: you use the default ports (80/443) for your services. In nginx, you configure a reverse proxy. It looks at the requested URL (and the headers of the request) and forwards it to the correct backend service. That way, you can run two services (Nextcloud on port 8080 and Roundcube on port 8081) and redirect all requests for nextcloud.example.com to port 8080 and for roundcube.example.com to 8081.
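A minimal nginx sketch of that setup, assuming both backends run on the same host; the hostnames, ports, and certificate paths are placeholders to adapt to your own deployment:

```
# nginx picks the backend by the Host header; both sites share port 443.
server {
    listen 443 ssl;
    server_name nextcloud.example.com;
    ssl_certificate     /etc/ssl/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;             # Nextcloud backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 443 ssl;
    server_name roundcube.example.com;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://127.0.0.1:8081;             # Roundcube backend
        proxy_set_header Host $host;
    }
}
```

This is also why DNS alone not handling ports is a non-issue: clients always hit 443, and the proxy does the port mapping.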
I am myself no enterprise user but a private user. However, there are reasons why things are done the way they are in a professional context. (OK, sometimes it is a grown structure, but it is rather stable, believe me.)
You can also configure which ports/network interfaces to listen on for connections. So you could allow both external access (using a forwarded port) and local access at high speed.
This is not about insulting anyone. It is to prevent (unaware) admins from shooting themselves in the foot. So it is not about ignorance but about protection: assume it was implemented and some data loss or disclosure happened. This would have a massive negative impact on the reputation of NC. I can already hear people crying out about it. No one will actually take the time to read and understand that this was caused by a misconfiguration (which your suggestion simply is, sorry) or the context for the user.
I’m aware of that; I don’t use Nextcloud professionally either. It was a bit provocative on purpose.
However, if you’re running your own servers, you’re definitely not a normal user anymore, and that means you have to learn a few things that normal users don’t have to deal with, including some basic networking knowledge. Nobody expects you to become a networking expert, which would involve a lot more than just running a home user/home lab product like Pi-hole.
I don’t use it myself, but from what I’ve heard and seen it’s certainly a good option. If you prefer a GUI to manage your proxy hosts and certificates, NGINX Proxy Manager might be worth a look as well.