Help Enabling LLM and Diffusion in Nextcloud


I’m a total beginner with Nextcloud and looking for some direction…

I’m trying to implement both the LLM and the image-diffusion capabilities in Nextcloud, and I’m having trouble understanding what my options are and how they work. There are a few relevant posts here and on the web, but most are old and don’t provide much context.

I have a separate AI server on the same private network/subnet. I’d ideally like to access the LLM/Diffusion APIs via the local private network, so I don’t have to add a public URL for these services.

Is this possible? I have ‘allow_local_remote_servers’ set to true. Are there any other settings I need to change, or am I required to make my API URL public? I also don’t understand whether I need to set anything up under AppAPI.
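For reference, here is how I set that flag (the occ command is the standard way to write config.php values; the AIO container name below is my understanding of the default and may differ on your install):

```bash
# Set the flag via occ from the host (AIO's Nextcloud container is
# normally named nextcloud-aio-nextcloud; adjust if yours differs)
docker exec -u www-data nextcloud-aio-nextcloud \
  php occ config:system:set allow_local_remote_servers --value true --type boolean

# Equivalent entry directly in config.php:
#   'allow_local_remote_servers' => true,
```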

In the Administration > Artificial Intelligence section, I’ve put my local OpenAI-compatible API URL (i.e., http://192.168.1.5:5000) in the ‘Service URL’ field under ‘OpenAI and LocalAI integration’.

My assumption was that once the above URL is entered, I would be able to select one of my local models under ‘Default completion model to use’ (e.g., Llama 3.1). But the only options here seem to be ‘gpt-3.5-turbo’ and ‘text-embedding-ada-002’.
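To rule out the API server itself, this is how I checked what it exposes (assuming the backend implements the standard OpenAI-style REST endpoints; the address is my local server from above and the model name is just a placeholder for one of mine):

```bash
# List the models the OpenAI-compatible server advertises
curl http://192.168.1.5:5000/v1/models

# Quick completion test against the chat endpoint
curl http://192.168.1.5:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1", "messages": [{"role": "user", "content": "Hello"}]}'
```

My guess is that the dropdown falls back to the stock OpenAI model names when Nextcloud can’t fetch that model list, but I haven’t confirmed that.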

When I attempt to generate text, I get: ‘RuntimeException Error during request to ExApp (context_chat_backend): ExApp ‘context_chat_backend’ not found’.
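Since the error says the ExApp isn’t registered, I poked at AppAPI from the command line (command names are my reading of the AppAPI docs; ‘occ list app_api’ will show what actually exists on your version):

```bash
# Show deploy daemons registered with AppAPI
php occ app_api:daemon:list

# List registered ExApps; context_chat_backend should show up here once deployed
php occ app_api:app:list

# Enumerate all AppAPI commands available on this install
php occ list app_api
```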

My last question is about connecting to the Forge Stable Diffusion API so that Nextcloud’s ‘Generate Image’ feature works. Since my image-generation URL/API is different from my text-generation URL/API, I don’t understand where I would enter it. It seems I can have only one LocalAI integration? Is this handled in another section?

Apps Installed/Active:

  • Nextcloud Assistant
  • Nextcloud Assistant Context Chat
  • OpenAI and LocalAI integration
  • AppAPI

Nextcloud AIO 29.0.4 / Docker

Just checking back in. Is no one using local AI/ML with Nextcloud?

I’ve finally gotten the stock LLM2 ExApp running on 29.0.7, Ubuntu 20.04, under nginx with PHP 8.2-FPM.

  • Local Whisper Speech-to-Text ExApp under “Tools”
  • Local Large Language Model ExApp under “Tools”
  • Test Deploy ExApp
  • Context Chat Backend ExApp under “Office and text” (not active this time, but it was required)
  • Nextcloud Assistant app
  • AppAPI app

Context Chat requires both the Context Chat app under Apps and the Context Chat Backend ExApp.
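For anyone following along, the sequence that worked for me was roughly the following (app IDs and the daemon name are my best recollection, so treat this as a sketch and check them against what your install reports):

```bash
# Enable the PHP apps
php occ app:enable assistant
php occ app:enable context_chat

# Register the backend ExApp through AppAPI; replace "docker_install"
# with whatever daemon name app_api:daemon:list shows
php occ app_api:app:register context_chat_backend docker_install
```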

The whole AI effort is aimed at using the Health Ranger LLM for medical responses more in line with my comfort level with alternative medicine and the use of ivermectin. That requires copying the GGUF into the /nc… folder in the LLM2 Docker container, but I cannot get the .json file right. I had it working once, but had residual Docker containers (seven in total) left in the filesystem, which drove my server’s idle load to 80 instead of its usual 1 to 2. This is on an older Dell T410 with 24 threads and 128 GB of memory.
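If anyone wants to reproduce the model swap, this is roughly what I do (the container name follows the nc_app_<appid> convention as I understand AppAPI’s naming, and the destination path is a placeholder, not the real /nc… path):

```bash
# Find leftover AppAPI containers; nc_app_* is my guess at the naming scheme
docker ps -a --filter "name=nc_app"

# Remove the residual containers that were pinning my idle load at 80
docker rm <container-id>

# Copy the GGUF into the llm2 container (placeholder destination path)
docker cp health-ranger.gguf nc_app_llm2:/path/to/models/
```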

I also use the Docker socket to access the Docker containers, if that helps.
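Concretely, that just means the deploy daemon’s container gets the host socket mounted, something along these lines (generic Docker flags; AIO wires this up through its own socket proxy, so this is only a sketch):

```bash
# Mount the host Docker socket (read-only) so the AppAPI deploy daemon
# can start and stop ExApp containers
docker run -d \
  --name docker-socket-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  <socket-proxy-image>
```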
