Just sharing my journey getting the new AI assistant to work on Nextcloud 27.1.2. Hope it helps others until we have the beginner's guide to AI.
Installed the app, set up the LLM, and downloaded the model (gpt4all) via OCC as per the instructions. After a while the download completed, and the config screens in the admin settings said all good (green).
After asking the Assistant a 'free prompt' question, however, it fails. I get a notification about it without much info. At the same time the Nextcloud log shows (read from bottom up):
```
Error    PHP    Undefined variable $notificationActionLabel at /var/www/nextcloud/apps/assistant/lib/Listener/TaskFailedListener.php#46
Info     no app in context    RuntimeException: LLM process failed: process exited with code 1
Warning  llm    Traceback (most recent call last):
  File "/var/www/nextcloud/apps/llm/src-py/index.py", line 3, in <module>
    from chains.formalize import FormalizeChain
  File "/var/www/nextcloud/apps/llm/src-py/chains/formalize.py", line 2, in <module>
    from langchain import BasePromptTemplate, PromptTemplate
ModuleNotFoundError: No module named 'langchain'
```
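For what it's worth, the quickest way I found to reproduce that kind of `ModuleNotFoundError` outside the app is to ask Python directly whether it can find the module. This is a generic check, not an official llm-app tool; run it with the same interpreter the llm app uses:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` is importable by this interpreter."""
    return importlib.util.find_spec(name) is not None

# 'sys' ships with Python, so this is always True; check 'langchain'
# the same way -- it will be False until the dependency is installed.
print(module_available("sys"))
print(module_available("langchain"))
```

If the second line prints `False`, you are in the same boat I was: the interpreter the app runs simply cannot see `langchain`.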
Looking at GitHub showed me that this means the Python requirement isn't met.
So I installed the Python venv support with:

```
apt install python3.10-venv
```
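In case it helps anyone retracing this: here is roughly what that package enables. The venv location and the `requirements.txt` path below are assumptions from my own install, not official instructions:

```shell
# Sketch: create a dedicated virtualenv for the llm app's Python backend.
# Paths are assumptions -- adjust them to your install.
python3 -m venv /tmp/llm-venv
/tmp/llm-venv/bin/python -c "import sys; print('venv python:', sys.version.split()[0])"
# Then (not run here, path is an assumption):
#   /tmp/llm-venv/bin/pip install -r /var/www/nextcloud/apps/llm/requirements.txt
```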
That didn't make a difference; the same error appeared in the log. Then I ran:
Besides fixing some calendar entries, it reported nothing unusual. It did take some time to download the llm dependencies, though, and now …
Tested the following with the assistant:
Asked: "How hot is the sun?"
It comes back after ~2 minutes with: "The surface of the sun is about 5,500 degrees Celsius."
Now I wonder why it took 1-2 minutes on my server with an i7-5820 CPU (12Mb): only in the last ~10 seconds did I see a brief increase in CPU use, and then the prompt returned with the answer.
That makes me wonder: is it possible to expedite or prioritise what seems to be a task queue or scheduler at work here? Or is something else taking its time?
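Partly answering my own question, as far as I understand it: Assistant tasks are picked up by Nextcloud's background job system, so part of the wait is just the task sitting in the queue until the next cron run. The standard crontab entry from the admin docs (run as the web server user) fires every 5 minutes:

```
*/5 * * * * php -f /var/www/nextcloud/cron.php
```

So in the worst case a task can wait close to 5 minutes before inference even starts, which would match seeing CPU activity only in the last ~10 seconds. Running `cron.php` manually as the web server user (e.g. `sudo -u www-data php -f /var/www/nextcloud/cron.php`) should kick the queue immediately; shortening the cron interval is a trade-off against server load.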