Experience with Context Chat, Local AI and Nextcloud Assistant

I have now had four days of experience using Context Chat in Nextcloud. I find the feature very useful, since I no longer have to search for everything manually. The installation cost me a lot of nerves, but now it works.

Hardware:
Dell OptiPlex 7060, Intel i5-8400, 6 cores up to 4 GHz, 16 GB RAM, NVMe SSDs.

Software:
Ubuntu 24.04, PHP 8.3, Nextcloud 31 and Apache2.

I know the hardware is not necessarily suitable, but trial and error makes perfect.

First insight: LocalAI runs more resource-efficiently than the Local Large Language Model (llm2) app from the NC ExtApps/AppAPI. I have tried both. With llm2 from the NC apps, a Context Chat request takes 10-15 minutes on my system. LocalAI takes 4-6 minutes per request, depending on the model I use. I find that reasonable.
(In a plain chat with the AI, the answer arrives within moments, not after minutes.)
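To see the "plain chat" response time for yourself, you can hit LocalAI's OpenAI-compatible chat endpoint directly. Port 8080 matches my setup; the model name is just an example, use whatever you have installed:

```shell
# Send a single chat message straight to LocalAI, bypassing Nextcloud.
# Port and model name are from my setup; adjust to yours.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral-nemo-instruct-2407",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

If this answers in seconds but Context Chat takes minutes, the extra time is spent in retrieval and prompt building, not in the model itself.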

I only wanted Context Chat and voice transcription; I have no use for any other functions.

I tried a few models from the LocalAI Gallery and had good experiences with the following.

  • Whisper-1 for transcription
  • mistral-nemo-instruct-2407, meta-llama-3.1-8b-instruct, llama-3-sauerkrautlm-8b-instruct for Text2Text processing (although they sometimes answer in English even though I ask in German)

For Context Chat to work, the task types “Free text to text prompt”, “Chat” and “Chat with Tools” must be activated. Otherwise I got no answers.

I installed the local-ai container with the “docker run …” command, selecting latest-cpu as the image tag. The installation with the script from https://localai.io/install.sh did not work.
I released 5 of the 6 cores for the docker container. I first tried it with 3 cores, but that only resulted in aborted requests. It then worked with all cores, and with 5 as well. (docker update --cpus 5 local-ai)
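For reference, the container setup described above can be sketched like this. The image tag and port match what is described in the post; the container name, volume name, and models path are my assumptions, so adjust them to your setup:

```shell
# Run LocalAI (CPU-only image) on port 8080.
# Volume name and models path are assumptions; adjust to your setup.
docker run -d --name local-ai \
  -p 8080:8080 \
  -v local-ai-models:/build/models \
  localai/localai:latest-cpu

# Limit the running container to 5 CPU cores, as described above:
docker update --cpus 5 local-ai
```

Limiting the container after the fact with `docker update` avoids having to recreate it just to change the CPU budget.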

This is how I did it.

  • Install LocalAI on port 8080
  • Nextcloud AI Assistant was already installed.
  • Install “OpenAi LocalAi integration”
  • Now set up LocalAI under Artificial Intelligence in the settings. Set the URL to localhost with the port defined earlier; leave the API key empty if you did not assign one in the LocalAI docker run command. Then define the models you want to use.
    Then I first tested whether the chat works, i.e. the connection from NC to LocalAI. Yes!
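Before testing the chat inside Nextcloud, it is worth verifying the connection outside it. LocalAI exposes an OpenAI-compatible API, so a single request shows whether the server is reachable and which models it has loaded (assuming the default port 8080 from the setup above):

```shell
# List the models LocalAI has loaded.
# A JSON response here means the endpoint Nextcloud will talk to is up.
curl http://localhost:8080/v1/models
```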

Then:

  • Install NC Deploy Daemon (GitHub - nextcloud/docker-socket-proxy: Nextcloud AppAPI Docker Socket Proxy)

  • Then set up the Deploy Daemon under AppAPI in the settings and test the deployment. If everything works, continue.

  • Install context_chat_backend via the command line with occ. I always had problems with the apps page; occ had issues too, but it was more transparent, and in case of errors I could undo everything with “occ app_api:app:unregister --rm-data --force context_chat_backend” and start again.

  • Install context_chat.

Context_Chat_Backend is automatically registered on the Deploy Daemon and then appears in the options under “Artificial intelligence”.
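The occ-based install and reset described above might look roughly like this. The app IDs are the ones named in the post; run occ as your web server user, and note the exact AppAPI subcommands may vary with your AppAPI version:

```shell
# Register the backend ExApp via AppAPI (deployed on the Deploy Daemon):
sudo -u www-data php occ app_api:app:register context_chat_backend

# Install the Context Chat frontend app:
sudo -u www-data php occ app:install context_chat

# If something goes wrong, remove the backend completely and start over:
sudo -u www-data php occ app_api:app:unregister --rm-data --force context_chat_backend
```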

The Context Chat backend must then be trained, i.e. scan the data from the NC (occ context_chat:scan {USER}).
However, it does not capture all the data at once, but works through it piece by piece, whether I trigger a manual scan or not. I will probably have to wait a few more days (it has processed 2,300 of 43,000 files so far). The README for the apps suggests some settings that I have used, including the nextcloud-ai-worker@.service.
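A scan for a single user, as mentioned above, would be run like this (the user ID is a placeholder, substitute your own):

```shell
# Index one user's files into the Context Chat backend.
# "alice" is a placeholder user ID; replace it with a real one.
sudo -u www-data php occ context_chat:scan alice
```

Since indexing proceeds piece by piece anyway, running this mainly helps kick off or prioritize a specific user rather than speed up the whole backlog.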

Watching “docker stats”, I could see that it works: first the CPU usage of context_chat_backend went up, then “local-ai” climbed to 500%, all 5 cores at full throttle. Then it took 4-5 minutes until the response came.

The hit rate is now around 20%, meaning roughly one in five questions about my files gets a meaningful answer.

I have not yet managed to query calendars, tasks and mails with Context_Agent.

I hope my report helps others using the Nextcloud Assistant with Context Chat.


Thanks for sharing your experience. Interesting.

I’ve got all the AI features working except Context Chat, which is a mess to install or get working.
I’m stuck at installing the Context Chat backend.
It simply refuses to deploy, whether I try it via the app page in Nextcloud or use the occ command. It just hangs at “health checking”.
Sometimes it even got past that to “Initializing”, but then got stuck again and I gave up.

My AI on Nextcloud (the part that is working :smiley: ) is supposed to make email summaries as one of its tasks.
Well, they are wonky at best, and there is no place to change the prompt for these summaries.
Also, Nextcloud didn’t reserve a spot for them, so the only place they appear is under the title of a mail, which is usually unreadable.
The summary and quick-reply functions of the AI in the Mail app are sporadic:
sometimes there are quick replies, sometimes not;
sometimes it makes a summary, other times it doesn’t.
See below for an example of the bad implementation I’m talking about:

Here we only see “Here is a summary of the email message in under 1024 characters”, since that is probably what Nextcloud is prompting it to do.

Why can’t we change this? I don’t want the AI to tell me what it is doing.
It should just make the summary right away. Where can we change it? And if it’s not changeable, why not? This is unusable and seems more like another AI feature gimmick to me personally.

Also, I am curious whether you could have achieved the same results using “Ollama” instead of LocalAI.
Personally, I’ve tried LocalAI; it wasn’t bad, but I found Ollama more mature when it comes to managing models and so on.
Now I have Ollama connected to Nextcloud with a light llama3.1 model to do the simple tasks of summaries and such, but for all the other serious tasks where I need a local AI, I just use Ollama + Open WebUI.
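For anyone wanting to try the Ollama route described here: assuming a stock Ollama install, pulling the light llama3.1 model and checking the endpoint Nextcloud would talk to looks roughly like this. Ollama serves an OpenAI-compatible API on port 11434, which the OpenAI/LocalAI integration app can be pointed at:

```shell
# Fetch the 8B llama3.1 model (the "light" model mentioned above):
ollama pull llama3.1:8b

# Verify the OpenAI-compatible endpoint Nextcloud would use.
# On Linux, Ollama usually already runs as a systemd service on this port.
curl http://localhost:11434/v1/models
```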

Have you tried removing everything with “occ app_api:app:unregister --rm-data --force context_chat_backend”? And remove docker container and images? I’ve done this several times until it ran smoothly and was installed.
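The full reset I mean would be roughly the following. The container and image names are assumptions based on a default AppAPI deployment, so check what actually exists on your machine first:

```shell
# Unregister the ExApp and delete its data on the Nextcloud side:
sudo -u www-data php occ app_api:app:unregister --rm-data --force context_chat_backend

# Remove the leftover container and image on the Docker side.
# Names are assumptions; verify with "docker ps -a" and "docker images".
docker rm -f nc_app_context_chat_backend
docker rmi ghcr.io/nextcloud/context_chat_backend
```

Removing the image as well forces a fresh pull on the next deploy, which rules out a corrupted download as the cause.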

Is the screenshot a notification from Mail app?

Yes, I’ve done this about two times and reinstalled; that’s why I saw the “Initializing” phase once and never again. The fact that it’s so variable and needs a couple of reinstalls (many users experience this) tells me that this part of the AI integration needs more work.
The fact that I have to pull the image manually because I can’t pull it from the app store is obviously also an issue.

And yes, that’s a screenshot of the current implementation of Nextcloud’s AI mail summaries (for my version 31). I’ve turned it off now, since the AI “summary” in the mail titles actually makes it harder to know what’s inside a mail than having no summary at all. And the summary window on top of opened mails, advertised by Nextcloud here:
Meet the first open-source AI assistant that doesn’t prey on your data
doesn’t work on most mails for some reason; only once in a while, if I’m lucky, do I get a top-positioned summary window.