Nextcloud version (eg, 20.0.5): 184.108.40.206.1.1.0 (I have upgraded since posting.)
Operating system and version (eg, Ubuntu 20.04): Docker 24.0.5 (official Nextcloud image) on ArchLinux
Apache or nginx version (eg, Apache 2.4.25): Apache 2.4.57
PHP version (eg, 7.4): 8.2.10
The issue you are facing:
Is this the first time you’ve seen this error? (Y/N): This isn't an error.
Hi, as I understand it, Nextcloud has no built-in support for virtual hosts, so to accomplish what I am after under my existing configuration I will need to spin up a second Docker service alongside my existing one and configure haproxy (also a Docker service) to route the HTTP traffic as needed.
I have a few questions about this before I get started:
Could my data store be shared among the 2 domains? If I look at the root directory, I see directories corresponding to usernames, a files_external directory, and an appdata_* folder that presumably holds application settings. It seems this could work for me because some of my users could benefit from having “synchronized” directories in the 2 domains, available under the same username, and I know that Nextcloud generates a random appdata string so there shouldn’t be any problem there if the 2 domains have 2 distinct random strings.
I currently have a publicly shared URL for a particular directory. I would like to configure haproxy to 301 redirect this URL to a distinct publicly shared URL under the other domain. Would there be any issue in doing so?
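To make the question concrete, I imagine the haproxy rule would look roughly like this (the hostnames and share tokens below are placeholders, not my real ones):

```haproxy
frontend http-in
    bind *:80
    # Hypothetical: permanently redirect one public share URL to its
    # counterpart on the other domain. Hostnames and /s/ tokens are
    # placeholders for illustration only.
    http-request redirect location https://cloud-b.example.com/s/NEWTOKEN code 301 if { hdr(host) -i cloud-a.example.com } { path /s/OLDTOKEN }
    default_backend nextcloud
```

Would a redirect like that confuse Nextcloud or its clients in any way?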
Are you trying to create a redundant setup, or set up two instances with different domains?
You cannot share storage between two instances: the files correspond to entries in the database, so just syncing files will not make them show up in the NC instance. You can share files between instances with federated sharing.
Since my initial post, I’ve noticed that cron.php (which I need to run as a separate Docker service) was having issues because I had configured PHP_MEMORY_LIMIT in my nextcloud service (whose environment nextcloud_cron inherits), so I have spent most of my time addressing that (it turns out setting PHP_MEMORY_LIMIT=-1 is the trick to getting it working again).
At any rate, what I am trying to set up is 2 instances with distinct domains. The “syncing” of files in the data directory as I described would have just been a bonus. I will look into federated sharing before going any further. Thanks for the pointer.
One question that comes immediately to mind: will Federated shares duplicate all of the shared data or does one Nextcloud server expect the definitive copy to be available and online at access time, in which case that Nextcloud server just routes the request to the other? And if the latter is indeed the case, is there caching?
Quick off-topic comment: setting the memory limit to unlimited (-1) is a bad idea. The recommended memory limit is 512MB. This limit applies per page request, so you can see why unlimited would be a bad idea.
As for the distinct domains: as long as you don’t need different branding, you can actually configure NC with multiple domains; that setting is in config.php.
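That is the trusted_domains setting; a minimal excerpt might look like this (the domains here are examples, substitute your own):

```php
<?php
// config/config.php (excerpt) — example domains, adjust to yours
$CONFIG = array (
  'trusted_domains' =>
  array (
    0 => 'cloud-a.example.com',
    1 => 'cloud-b.example.com',
  ),
  // ...rest of your existing config...
);
```

Keep in mind that both domains would then serve the same instance (same users, same files), which may or may not be what you want.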
Don’t quote me on this, but I believe a federated share saves the data/content back to the original server. But I am not very up to date with the exact workings of federated sharing as I don’t use it myself.
Thanks for mentioning it. I’ve removed PHP_MEMORY_LIMIT entirely from both of my Docker services, nextcloud and nextcloud_cron. (If it isn’t clear, it is necessary to run cron.php in a separate container.) Doing so should default it to 512MB, and both services seem to be fine with that, which should surprise exactly no one. But cron.php keeps failing with memory exhaustion errors, and a workaround seems to be passing -d memory_limit=-1 when running cron.php.
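For reference, my two-service arrangement looks roughly like this (a simplified sketch, service names and volume paths are mine and may differ from other setups; the /cron.sh entrypoint is what the official image provides for the cron container):

```yaml
# docker-compose.yml (excerpt) — simplified sketch of my setup
services:
  nextcloud:
    image: nextcloud:latest
    volumes:
      - nextcloud_data:/var/www/html

  nextcloud_cron:
    image: nextcloud:latest
    # Runs only the background jobs; inherits the same data volume.
    entrypoint: /cron.sh
    volumes:
      - nextcloud_data:/var/www/html
    depends_on:
      - nextcloud

volumes:
  nextcloud_data:
```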
I’m not sure why I had that in there to begin with. Does it have any effect on PHP_UPLOAD_LIMIT at all?
I think I have an idea what you mean by “branding,” and I don’t think I need to brand each Nextcloud instance distinctively, but I’m not 100% sure. Can you clarify a little bit about what you mean by that and what I should be looking for in config.php?
It seems obvious to me that having one server act as a proxy to a second server would be the simplest to implement. Nevertheless, I can see having both caching and duplication as configurable options being very useful features.
No, the memory limit and upload limit are different. The memory limit is the max memory a request/process can use. The upload limit is the max size of a POST request when files are being uploaded. So setting this to 1362MB (1024MB plus an additional 33% for base64 encoding bloat) would mean you can upload a single 1GB file or 10× 100MB files in a single POST request. Beyond this the files would need to be chunked (split up). I have mine set to 2816MB.
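The arithmetic, as a quick sketch (using the rough 33% base64 overhead figure from above; the exact encoding ratio is 4/3, so round up if you want headroom):

```python
import math

def upload_limit_mb(largest_payload_mb: int, overhead: float = 0.33) -> int:
    """Rough PHP upload limit: payload size plus ~33% base64 bloat."""
    return math.ceil(largest_payload_mb * (1 + overhead))

print(upload_limit_mb(1024))  # 1024 MB + 33% -> 1362
```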
Okay, I figured as much, but PHP is not my first language.
That’s what I thought. Thanks for clarifying. It seems I am going to need to have separate instances.
I think what @just and I both describe is effectively proxying (ServerB acts as a proxy to the data on ServerA via WebDAV), and I don’t think it was meant to imply that it isn’t a good solution. I think federated shares would be just fine for me, since both instances would be sharing the same hardware.
It seems to me that the statement “neither of these is supported” refers to caching and duplication (actually, I think the word I should have used is replication, not duplication). It seems like HA redundancy could be accomplished with something like GlusterFS, but that’s more complexity than I need.
This is all very straightforward. Thanks for the assistance.