Ollama integration with Nextcloud

I thought it would be really swell to have a Nextcloud app that can connect with Ollama, so users (all right, for now just me) can have a conversation with my personal AI.

I'd be willing to pay for it if necessary (although I'm not rich at all).


Hi!

It is already possible to make Ollama's API compatible with the OpenAI format thanks to https://litellm.ai/

It works well and is easy to set up. I can do all the GPT-style text tasks like prompting and summarizing, but I have no results yet on generating images or using voice transcription, as Stable Diffusion and Whisper are not in Ollama's model library yet. If you have some good advice, I'm interested.

Reminder: you have to install the OpenAI integration application and the Nextcloud Assistant module first, before pointing your API URL (in the OpenAI module of the administration menu) at the proxy and selecting your LLM.
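Before wiring Nextcloud to it, it can be worth confirming that the OpenAI-compatible endpoint answers on its own. Here is a minimal sketch in Python; the proxy address (LiteLLM's default port 4000) and the model name are assumptions from my setup that you will need to adapt:

    # Quick check that the LiteLLM proxy in front of Ollama answers
    # OpenAI-style chat completion requests before Nextcloud is pointed at it.
    import requests

    BASE_URL = "http://localhost:4000"  # assumed LiteLLM proxy address
    MODEL = "ollama/mistral"            # assumed model name, use whatever you pulled

    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

If this prints a completion, the proxy side is fine and the same base URL is what you enter in the administration settings.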

Here is a doc I found on how to get Whisper working alongside Ollama (I have not read it yet):
https://medium.com/@kolbeuk/prototyping-open-source-speech-recognition-and-translation-ca571b3c4293

Thank you :slight_smile: I'll check it out next week. For the rest, I'm just figuring it all out :wink:

Hi, not OP, but how did you manage this?

I installed Ollama with a LiteLLM proxy for the model, but I only get errors when using the Assistant:

    code: "ERR_BAD_REQUEST"
    config: Object { timeout: 0, xsrfCookieName: "XSRF-TOKEN", xsrfHeaderName: "X-XSRF-TOKEN", … }
    message: "Request failed with status code 400"
    name: "AxiosError"
    request: XMLHttpRequest { readyState: 4, timeout: 0, withCredentials: false, … }
    response: Object { data: "LanguageModel call using provider LocalAI integration failed: OpenAI/LocalAI request failed: API request error: model 'gpt-3.5-turbo' not found, try pulling it first", status: 400, statusText: "", … }
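For reference, this is roughly how I check which model names the backend actually exposes (host and port are just my local setup, adjust as needed):

    # List the models the OpenAI-compatible backend advertises, to see
    # whether the name Nextcloud asks for (gpt-3.5-turbo here) exists at all.
    import requests

    BASE_URL = "http://localhost:4000"  # assumed LiteLLM proxy address

    resp = requests.get(f"{BASE_URL}/v1/models", timeout=30)
    resp.raise_for_status()
    for model in resp.json().get("data", []):
        print(model["id"])

Judging from the error above, gpt-3.5-turbo is not in that list, so I suspect the default completion model in the integration settings has to match one of the names the backend actually serves.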

How did you set it up on your instance?


I want to activate the AI assistant too.
https://docs.nextcloud.com/server/latest/admin_manual/ai/index.html

I got those three apps installed.

I configured the LocalAI server (pointing to Ollama on localhost).

But nothing is working.
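For what it's worth, this is how I check that Ollama itself is reachable and has models pulled (a small sketch; 11434 is the default Ollama port, adjust if yours differs):

    # Sanity check: is Ollama reachable from the Nextcloud host, and
    # which models does it have pulled locally?
    import requests

    OLLAMA_URL = "http://localhost:11434"  # where I pointed the LocalAI integration

    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    for model in resp.json().get("models", []):
        print(model["name"])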
Does anyone have a better experience to share?

No luck here either with open-webui. (My workaround is to use LocalAI).

I have an open-webui chat working. I’m trying out the mistral-nemo model. Happy to share my Docker compose file if anyone wants it.

Nextcloud can’t use open-webui.

Starting at /settings/admin/ai in Nextcloud, I'm able to enter my open-webui server URL and API key. I then see a GET to /v1/models in the open-webui server log, but, confusingly, the XHR request from the Nextcloud AI settings page to /apps/integration_openai/models throws an HTTP 500 server error (in the browser console only – /settings/admin/ai doesn't show an error). Maybe it doesn't like the response from open-webui?

I'm able to try text generation and I see POSTs from Nextcloud to /v1/chat/completions when I do, but that endpoint always returns an HTTP 500 server error. open-webui says ERROR: Exception in ASGI application, a big stack trace, then IndexError: tuple index out of range.
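To figure out whether the 500 comes from open-webui itself or from the integration_openai app, I replay roughly the same request outside Nextcloud. A rough sketch, with the server URL, API key and model name taken from my own setup:

    # Replay the chat completion request Nextcloud sends, directly against
    # open-webui's OpenAI-compatible endpoint, to see if the 500 / IndexError
    # is reproducible without Nextcloud in the loop.
    import requests

    BASE_URL = "https://open-webui.example.com"  # assumed open-webui address
    API_KEY = "sk-..."                           # API key from the open-webui settings page

    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "mistral-nemo",  # the model I'm trying out
            "messages": [{"role": "user", "content": "Hello"}],
        },
        timeout=120,
    )
    print(resp.status_code)
    print(resp.text[:500])

If the same IndexError shows up here, the problem is on the open-webui side rather than in the Nextcloud integration.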

These look relevant/related, or at least may contain hints on how we might get this to work:

Connecting Nextcloud with open-webui ends in 500 errors too.

I tried launching LocalAI (through Docker), but changing the URL to point to it does not make it work.

How did you configure Nextcloud with LocalAI?

I think I understood how to activate AI in Nextcloud.
It needs apps like "Nextcloud Assistant" & "Context Chat",

… but also AppAPI and external apps (ExApps),
like the context_chat_backend ExApp.

I'm not sure what llm2 provides compared with LocalAI, which seems to be the one used by Mail + LLM.

But activating LocalAI with Ollama leads to instability (a memory overflow?),
so I did not activate LocalAI and kept Ollama + open-webui behind an "External Link" menu entry.
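For whoever wants to dig into the suspected memory overflow, here is a small sketch that asks Ollama which models are currently loaded and how big they are (this assumes a recent Ollama that exposes /api/ps, on the default port):

    # Poll Ollama for the models currently loaded in memory, to see whether
    # several large models end up resident at the same time when both the
    # LocalAI integration and open-webui hit the server.
    import requests

    OLLAMA_URL = "http://localhost:11434"  # default Ollama address

    resp = requests.get(f"{OLLAMA_URL}/api/ps", timeout=10)
    resp.raise_for_status()
    for model in resp.json().get("models", []):
        print(f"{model['name']}: {model.get('size', 0) / 1e9:.1f} GB")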