Nextcloud AIO fresh installation: only one user can log on at a time

Nextcloud version (eg, 29.0.5): Nextcloud Hub 8 (29.0.1)
Operating system and version (eg, Ubuntu 29.04): Debian Bookworm
Apache or nginx version (eg, Apache 2.4.25): Don’t know (using AIO)
PHP version (eg, 8.3): Don’t know (using AIO)

The issue you are facing:

After setting up a fresh Nextcloud install, the only way to access Nextcloud successfully is by clicking the “Open your Nextcloud” button on the local AIO page. After that, Nextcloud can be opened as normal from any computer on my local network (I haven’t tested beyond that) using the normal URL. Once I do, though, no other computer on my local network can connect to it. If I stop the Apache container, log into the AIO page on a different computer, and then restart the Apache container, I can then log in to Nextcloud on that computer, but the previously working computer will now time out.

Is this the first time you’ve seen this error? (Y/N): Y

The output of your Nextcloud log in Admin > Logging:
Too big to post. I’m seeing a lot of these from the richdocuments app: ConnectException: cURL error 28: Connection timed out after 45002 milliseconds, and Failed to fetch the Collabora capabilities endpoint: cURL error 28: Connection timed out after 45002 milliseconds.

Don’t know how to get the other logs since I’m using AIO. The Apache Docker container doesn’t show any errors, but it is unhealthy (nc -z to port 443 of the domain times out as well, and any netcats from the PC where Nextcloud is currently, somehow, working also time out).

You are using an app that connects to external servers. The problem of handling the unavailability of connected external servers is known and documented. You can take part in the discussion of solution options.

Having seen this issue, I decided to rebuild my AIO containers and turn off Nextcloud Office. The error messages no longer appear, but the problem I was having persists: all attempts to connect to my domain on port 443 time out, except (sometimes) for one.

I should also mention that nc -z localhost 443 works just fine.

Are you using a reverse proxy?

What does your domain actually resolve to internally? A private IP address? Your public IP address?

Are all of the devices you’re testing from internal? What happens if you connect from the outside?

Maybe there is something weird about the port forwarding config on your external router?

P.S. The Collabora issues suggest you have a connectivity problem in your Docker environment or a local DNS problem. I wouldn’t be surprised if they’re related.

Not using a reverse proxy.
I realized my initial post is a bit vague on what I’m actually seeing, so I will add some other info.

When testing on an internal machine (haven’t tested anything externally yet):
nc -z [local address of server] 80 is open
nc -z [local address of server] 443 is open
nc -z [public ip address] 80 is open
nc -z [public ip address] 443 connection times out
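For reference, the four checks above can be bundled into one small script with an explicit timeout. This is just a sketch: both addresses are placeholders to replace with your own, and it uses bash’s built-in /dev/tcp rather than nc, so it behaves the same regardless of which netcat variant is installed.

```shell
#!/usr/bin/env bash
# Probe a TCP port; roughly what `nc -z -w 3 host port` does.
check() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed or timed out"
  fi
}

LOCAL_IP=192.168.1.10   # placeholder: the server's LAN address
PUBLIC_IP=203.0.113.7   # placeholder: your public address

for port in 80 443; do
  check "$LOCAL_IP"  "$port"
  check "$PUBLIC_IP" "$port"
done
```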

This looks to me like my ISP blocking the port upstream, even though I’ve forwarded it and online port checkers say it is open. I can’t confirm until Monday, when I’ll call them about it, but I have doubts: why would they block 443 but not other commonly blocked ports like 80?

As for what the domain resolves to internally: I don’t know how to check that, or really what you mean. I do know that dig [my domain name] correctly shows my public IP in the answer section.
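For what it’s worth, one way to see what a name resolves to from a particular machine is to compare the system resolver’s answer with a public resolver’s. A sketch, with example.com standing in for the real domain (and assuming dig is installed):

```shell
# Answer from this machine's configured resolver (the path your
# LAN clients -- and, typically, Docker containers -- actually use):
getent hosts example.com

# Answer from a public resolver, for comparison:
dig +short example.com @9.9.9.9 || echo "dig unavailable or no network"
```

If the two answers differ, something on the LAN (the router’s DNS, a Pi-hole, an /etc/hosts entry) is rewriting the name internally.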

Just tried a fresh install again. I should mention that on both installs I had to skip domain validation because it times out after 10 s. I want to add, however, that when I was attempting to complete domain validation this time, I noticed that the logs always show a timeout after 10 s, but nc -z [my domain] 443 completed successfully, though only after about 20 s. I don’t know why it takes so long, but maybe the domain validation would complete if it waited longer? Regardless, the same issue is happening. Could it be SSL-related? I can visit my domain at port 8443 just fine, but I don’t know if I have to do anything else with SSL on my end, since it’s never explained anywhere.

You can do this:

openssl s_client -connect domain.tld:443

If you run it and then just wait until the check completes, it will take a minute or two (I didn’t time it exactly). Strictly speaking it isn’t you who is waiting: s_client holds the connection open and waits for input until you give up.
To spare yourself the long wait, you need to feed the command a request:

echo "HEAD / HTTP/1.1" | openssl s_client -connect domain.tld:443 -CAfile /etc/ssl/certs/ca-certificates.crt -crlf

Here it completed without the wait. But both commands do the same thing: they check the certificate served for the domain (domain.tld).
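A variant of the same check that can never hang: redirect stdin from /dev/null so s_client closes the connection as soon as the handshake finishes, and cap the whole thing with timeout. domain.tld is again a placeholder.

```shell
# Exits right after the TLS handshake instead of waiting for input;
# `timeout` guarantees an upper bound even if the connect itself hangs.
# -servername sets SNI, which many virtual hosts need to pick the right cert.
timeout 30 openssl s_client -connect domain.tld:443 \
  -servername domain.tld </dev/null \
  || echo "handshake failed or timed out"
```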

Running that first command on an internal machine gives me this:

40F7A825BE7F0000:error:10000067:BIO routines:BIO_connect:connect error:../crypto/bio/bio_sock2.c:116:

Not sure what it means or what I have to do.

Update: I got a friend to try out my domain from an external IP, and he was able to connect just fine, even while I was connected. So this seems like a local issue.