You posted in the development category, but I suspect you don't want to write code yourself and would rather use existing solutions. In that case, your question belongs in support. I will move your post in a minute.
Similarly, you should be able to connect to a local AI server. It just needs to "talk" the OpenAI API (which is quite common). However, the NC app only supports one endpoint at the moment, so the current state is: either Gemini or a local server, not both.
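For reference, "talking the OpenAI API" just means the server exposes the standard endpoints such as `/v1/chat/completions` and accepts the usual JSON body. A minimal sketch of that request shape; the base URL and model name are placeholders for whatever your local server (llama.cpp, Ollama, LocalAI, ...) actually uses:

```python
import json

# Hypothetical local server -- replace with your own host/port.
BASE_URL = "http://localhost:8080/v1"
ENDPOINT = BASE_URL + "/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build the JSON body of an OpenAI-style chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# This is the payload you would POST to ENDPOINT; any server that
# understands this shape should work as a drop-in endpoint.
body = json.dumps(build_chat_request("Hello from Nextcloud!"))
print(ENDPOINT)
print(body)
```

If the NC app lets you configure a custom base URL, pointing it at a server that answers this endpoint is all that "compatible" means here.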
You could use an LLM proxy. I also had another (proprietary) solution in mind, but I no longer remember the name; I would have to dig through my browser history.
Just out of (personal) interest: the llm2 app runs the LLM on the Docker instance associated with the Nextcloud ExApp (typically localhost), but with the OpenAI integration, I can use any reachable machine remotely. Or did I misunderstand this?
with the openai integration, I can use any reachable machine remotely
yes
the llm2 app runs the LLM on the docker instance associated with the nextcloud ExApp
The llm2 app runs the LLM in Docker on whatever machine your AppAPI deploy daemon is set up. The nice thing is that you can have as many deploy daemons as you like, including on remote machines.
Ohh, interesting…
So you can install multiple instances of the llm2 app to run on multiple machines?!? That sounds strange, to be honest; I don't see how that would work technically. Could you post a link or something similar?
Ah, no, sorry. You can have multiple deploy daemons, e.g. one on localhost and one on a beefy GPU-enabled machine where you run llm2, but you cannot install the same app multiple times.