Ollama integration with Nextcloud

I thought it would be really swell to have a Nextcloud app that can connect to Ollama, so users (all right, for now just me) can have a conversation with my personal AI.

I’d be willing to pay for it if necessary (although I’m not rich at all).


Hi!

It is already possible to make Ollama’s API compatible with the OpenAI format thanks to LiteLLM: https://litellm.ai/
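For anyone who wants to try it, here is a minimal sketch of the quickest way to get the proxy running. The `llama2` model name is just a placeholder for whatever model you have pulled in Ollama, and the port can differ between LiteLLM versions, so check the startup output:

```bash
# Install LiteLLM with the proxy extras, then start an OpenAI-compatible
# endpoint in front of a model that Ollama already serves locally.
pip install 'litellm[proxy]'
litellm --model ollama/llama2   # recent versions listen on http://0.0.0.0:4000 by default
```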

It works well and is easy to set up. I can do all the GPT-style text tasks (prompting, summarizing, and so on), but I have no results yet with image generation or voice transcription, as Stable Diffusion and Whisper are not in Ollama’s model library yet. If you have any good advice, I’d love to hear it.
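To make “it works well” concrete, here is a quick smoke test against the proxy. The port and the `gpt-3.5-turbo` alias are assumptions that depend on how you configured LiteLLM:

```bash
# Send one OpenAI-format chat completion through the LiteLLM proxy.
# Adjust the port and model name to match your own proxy configuration.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo",
       "messages": [{"role": "user", "content": "Summarize Nextcloud in one sentence."}]}'
```

If that returns a normal chat completion, the text features in Nextcloud should work through the same URL.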

Reminder: you have to install the OpenAI integration app and the Nextcloud Assistant module first, then point them at your API URL (in the OpenAI module of the administration settings) and select your LLM.
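If you prefer the command line over the Apps page, something like this should install both. I am assuming the app IDs `integration_openai` and `assistant` and a typical `www-data` setup, so adjust for your install:

```bash
# Install the OpenAI/LocalAI integration and the Assistant from the CLI,
# same effect as installing them from the Apps page in the web UI.
sudo -u www-data php occ app:install integration_openai
sudo -u www-data php occ app:install assistant
```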

Here are some docs I found on how to get Whisper working alongside Ollama (I have not read them yet):
https://medium.com/@kolbeuk/prototyping-open-source-speech-recognition-and-translation-ca571b3c4293

Thank you :slight_smile: I’ll check it out next week. As for the rest, I’m still figuring it all out :wink:

Hi, not OP, but how did you manage this?

I installed Ollama behind a LiteLLM proxy, but I only get errors when using the Assistant:

```
code: "ERR_BAD_REQUEST"
config: Object { timeout: 0, xsrfCookieName: "XSRF-TOKEN", xsrfHeaderName: "X-XSRF-TOKEN", … }
message: "Request failed with status code 400"
name: "AxiosError"
request: XMLHttpRequest { readyState: 4, timeout: 0, withCredentials: false, … }
response: Object { data: "LanguageModel call using provider LocalAI integration failed: OpenAI/LocalAI request failed: API request error: model 'gpt-3.5-turbo' not found, try pulling it first", status: 400, statusText: "", … }
```
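Reading that message, it looks like the Assistant asks the backend for a model literally named `gpt-3.5-turbo`, which Ollama does not have. My guess is that the LiteLLM config needs an alias mapping that name onto a local model, something like the sketch below, where `llama2` is a placeholder for whatever `ollama list` shows on your machine. I have not verified that this fixes the Assistant error; it is just my reading of the message:

```yaml
# config.yaml -- map the model name Nextcloud requests onto a real Ollama model.
model_list:
  - model_name: gpt-3.5-turbo          # name the Nextcloud Assistant asks for
    litellm_params:
      model: ollama/llama2             # placeholder: a model Ollama actually serves
      api_base: http://localhost:11434 # default Ollama endpoint
```

Started with `litellm --config config.yaml`. Is that roughly what your setup looks like?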

How did you set it up on your instance?
