Set up the AI Assistant using the Google Gemini LLM via an API key

I want to use a Google Gemini API key with the Nextcloud AI Assistant, but I have failed so far.

  1. Has anyone enabled the Nextcloud AI Assistant using an external API (e.g. Gemini, OpenAI)? Could someone provide the steps, with screen captures, for reference?
  2. A second question: I have an AMD GPU server. Can it be used for the Nextcloud AI Assistant if a locally installed LLM is required?

Hello @Anthony1,

you posted in the development category, but I suspect you don't want to write code yourself but rather use existing solutions. In that case you should post in support; I will move your post in a minute.

Generally speaking, it literally took a minute to dig up this link: https://share.google/YTf4A1HrfMVJzMZUQ

So, yes, you can use the OpenAI app (OpenAI and LocalAI integration - Apps - App Store - Nextcloud); it should be able to connect with Google Gemini.

Similarly, you should be able to connect to a local AI server; it just needs to “talk” the OpenAI API (which is quite common). However, the Nextcloud app only supports one endpoint at the moment, so the current state is: either Gemini or a local server.
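Since both backends speak the same wire format, the request itself never changes; only the base URL does. A minimal stdlib sketch of that idea (the local URL, model names, and key placeholder are illustrative, not taken from this thread):

```python
# Sketch: the OpenAI chat-completions wire format that both Gemini's
# OpenAI-compatible endpoint and most local servers understand.
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# The same code targets either backend; only the base URL changes:
gemini = build_chat_request(
    "https://generativelanguage.googleapis.com/v1beta/openai/",
    "YOUR_GEMINI_KEY", "gemini-1.5-flash", "Hello")
local = build_chat_request(
    "http://localhost:8080/v1", "none", "llama-3.1-8b", "Hello")
```

To actually send one of these, pass it to `urllib.request.urlopen` with a valid key; the point here is just that one integration app talking this format can front either service.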

You could use an LLM proxy, and I had another (proprietary) solution in mind, but I no longer remember its name. I would have to do some research in my browser history :wink:.
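To illustrate the proxy idea: an LLM proxy exposes one OpenAI-compatible front door and forwards each request to a backend chosen by model name, which is how a single Nextcloud endpoint could reach both Gemini and a local server. A toy routing sketch (the backend URLs and the "backend/model" naming convention are made up for the example):

```python
# Toy illustration of what an LLM proxy does: one front door,
# routing to different backends by model-name prefix.
BACKENDS = {
    "gemini": "https://generativelanguage.googleapis.com/v1beta/openai",
    "local": "http://192.168.1.50:8080/v1",  # hypothetical local server
}


def route(model: str) -> str:
    """Pick a backend base URL from a 'backend/model' prefix."""
    prefix, _, _ = model.partition("/")
    return BACKENDS.get(prefix, BACKENDS["local"])
```

A real proxy additionally translates the model name, injects the right API key per backend, and streams the response back, but the routing decision is essentially this lookup.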

Chris


Hi Anthony,

you can also use the llm2 app, which comes with llama.cpp and can run any GGUF model. It ships with Llama 3.1 8B out of the box. See here for more information: App: Local large language model (llm2) — Nextcloud Administration Manual (generally, the whole AI section in the admin docs is worth a read :wink: )

Cheers

Marcel


Hello @marcelklehr,

just out of (personal) interest: the llm2 app runs the LLM on the Docker instance associated with the Nextcloud ExApp (typically localhost), but with the OpenAI integration I can use any reachable machine remotely. Or did I misunderstand this?

Chris


with the OpenAI integration, I can use any reachable machine remotely

yes

the llm2 app runs the LLM on the Docker instance associated with the Nextcloud ExApp

The llm2 app runs the LLM in Docker wherever your AppAPI deploy daemon is set up. The nice thing is that you can have as many of these daemons as you like, including on remote machines.


Ohh, interesting…
So you can install multiple apps of type llm2 to run on multiple machines?!? That sounds strange, to be honest; I do not see how that would work technically. Could you post a link or something similar?

Chris

multiple apps of type llm2

Ah, no sorry. You can have multiple deploy daemons, e.g. one on localhost and one on a beefy GPU-enabled machine where you run llm2, but you cannot install the same app multiple times.


Hi Chris,

I don’t have an Nvidia GPU server; I only have a Google Gemini API key. Can I enable the Nextcloud AI Assistant using just that Gemini API key?

If the answer is yes, can you provide some reference documents?

Thanks in advance!

Anthony

That link Chris posted was helpful. I got it working by entering this URL as the service URL:


https://generativelanguage.googleapis.com/v1beta/openai/

then putting in my API key and refreshing the page to see the models. I hope that helps.
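For anyone who wants to sanity-check that service URL outside of Nextcloud: the "refresh the page to see the models" step corresponds to a GET on the endpoint's models route with your key. A small stdlib sketch (the key placeholder is not a real credential, and actually sending the request needs network access):

```python
# Sketch: the same "list models" call the integration app makes
# against Gemini's OpenAI-compatible endpoint.
import json
import urllib.request

SERVICE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"


def list_models_request(api_key: str) -> urllib.request.Request:
    """Build the GET request for the endpoint's model list."""
    return urllib.request.Request(
        SERVICE_URL + "models",
        headers={"Authorization": f"Bearer {api_key}"},
    )


if __name__ == "__main__":
    req = list_models_request("YOUR_GEMINI_API_KEY")
    # Sending this requires a valid key and network access:
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

If this call returns a model list, the same URL and key should work in the integration app's admin settings.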
