I’m a total beginner with Nextcloud and looking for some direction…
I’m trying to set up both the LLM and the image-diffusion capabilities in Nextcloud, and I’m having trouble understanding what my options are and how the pieces fit together. There are a few relevant posts here and elsewhere on the web, but most are old and don’t provide much context.
I have a separate AI server on the same private network/subnet. Ideally I’d like to reach the LLM/diffusion APIs over the local private network, so I don’t have to expose these services at a public URL.
Is this possible? I have ‘allow_local_remote_servers’ set to true. Are there any other settings I need to change, or am I required to make my API URL public? I also don’t understand whether I need to set anything up under AppAPI.
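In case it matters, this is how I set that flag (a sketch assuming the AIO default container name, nextcloud-aio-nextcloud; adjust for your setup):

```bash
# Set allow_local_remote_servers via occ inside the AIO Nextcloud container
docker exec --user www-data nextcloud-aio-nextcloud \
  php occ config:system:set allow_local_remote_servers --value=true --type=boolean

# Confirm it took effect
docker exec --user www-data nextcloud-aio-nextcloud \
  php occ config:system:get allow_local_remote_servers
```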
In Administration settings > Artificial Intelligence, I’ve put my local OpenAI-compatible API URL (i.e. http://192.168.1.5:5000) in the ‘Service URL’ field under ‘OpenAI and LocalAI integration’.
My assumption was that once the above URL is entered, I’d be able to select one of my local models under ‘Default completion model to use’ (e.g. Llama 3.1). But the only options there seem to be ‘gpt-3.5-turbo’ and ‘text-embedding-ada-002’.
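Since that dropdown is presumably populated from the backend’s model list, I’ve been sanity-checking the endpoint directly. The /v1/models route is the standard OpenAI-compatible one, though I’m not sure whether the integration wants the base URL with or without /v1 (the second command assumes the AIO container name and that curl is available inside it):

```bash
# List the models the OpenAI-compatible backend exposes; if this fails,
# the dropdown will only ever show the stock OpenAI model names
curl -s http://192.168.1.5:5000/v1/models

# Same check from inside the Nextcloud container, to rule out
# Docker networking problems
docker exec nextcloud-aio-nextcloud curl -s http://192.168.1.5:5000/v1/models
```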
When I attempt to generate text, I get: ‘RuntimeException Error during request to ExApp (context_chat_backend): ExApp ‘context_chat_backend’ not found’.
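If I understand correctly, that ‘not found’ error means the context_chat_backend ExApp was never deployed/registered through AppAPI, which in turn needs a deploy daemon. This is how I’d check for one via occ (command name as I understand it from the AppAPI docs; container name assumed as above):

```bash
# List AppAPI deploy daemons; without one, the context_chat_backend
# ExApp can't be deployed, and Context Chat requests will fail
docker exec --user www-data nextcloud-aio-nextcloud \
  php occ app_api:daemon:list
```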
My last question is about connecting to the Forge Stable Diffusion API, so that Nextcloud’s ‘Generate image’ task works. Since my image-generation URL/API is different from my text-generation URL/API, I don’t understand where I would enter it. It seems I can have only one LocalAI integration? Is this handled in another section?
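For what it’s worth, this is how I plan to verify the Forge side independently. Forge exposes the AUTOMATIC1111-compatible API; the host/port here are assumptions about my setup (7860 is the Forge default):

```bash
# List the checkpoints the Forge server knows about
curl -s http://192.168.1.5:7860/sdapi/v1/sd-models

# Minimal txt2img request (returns base64-encoded images in JSON)
curl -s -X POST http://192.168.1.5:7860/sdapi/v1/txt2img \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "a test image", "steps": 4}'
```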
Apps Installed/Active:
- Nextcloud Assistant
- Nextcloud Assistant Context Chat
- OpenAI and LocalAI integration
- AppAPI
Nextcloud AIO 29.0.4 / Docker