I’m trying to set up an instance with local AI using Ollama, but I’m not able to get results, and I’m not sure how to identify what I’m missing. I seem to have all the necessary apps installed. I’m running Ollama on a local server that is accessible within the internal network of my self-hosted Nextcloud server. If I run queries on the server directly or through OpenWebUI, everything is great. Nextcloud is picking up all of my models from the URL, so I’m pretty sure it is talking. But I cannot get any results to return from the Assistant. Does anyone know of some steps I may have missed, or how to find where the failure is taking place? It doesn’t even seem to time out or provide any error messages… it just keeps spinning…
Maybe you are missing the worker service!?
Yep, I’m 99% sure this is the case: the models seem to load, but the response takes a veeeery long time because Nextcloud queues the tasks up instead of running them directly.
I have 6 workers set up and also use Ollama (with Llama 3.2), and the response is usually almost instant; unless it has to return a very long file or piece of code, in which case it takes a maximum of around 7 seconds to complete.
After you’ve followed the steps to create your workers, you could use a script like this to start them up (6 workers in this specific case):
```bash
#!/bin/bash
# Log file for errors and status
log_file="/var/log/nextcloud_workers.log"

# Start all workers
for i in {1..6}; do
    echo "Starting nextcloud-ai-worker@$i.service" | tee -a "$log_file"
    sudo systemctl start "nextcloud-ai-worker@$i.service" 2>> "$log_file"
    if systemctl is-active --quiet "nextcloud-ai-worker@$i.service"; then
        echo "nextcloud-ai-worker@$i.service started successfully." | tee -a "$log_file"
    else
        echo "Failed to start nextcloud-ai-worker@$i.service. Check logs for details." | tee -a "$log_file"
    fi
done
```
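For reference, the template unit behind those `nextcloud-ai-worker@$i.service` instances looks roughly like the sketch below, based on the Nextcloud admin documentation on task-processing workers. The `www-data` user, the `/var/www/nextcloud` install path, and the PHP binary location are assumptions for a typical Apache setup; adjust them to your install:

```ini
# /etc/systemd/system/nextcloud-ai-worker@.service
[Unit]
Description=Nextcloud AI task processing worker %i
After=network.target

[Service]
# User and occ path are assumptions for a typical install; adjust as needed
User=www-data
ExecStart=/usr/bin/php /var/www/nextcloud/occ background-job:worker -t 60 'OC\TaskProcessing\SynchronousBackgroundJob'
Restart=always

[Install]
WantedBy=multi-user.target
```

After creating the file, run `sudo systemctl daemon-reload` so systemd picks it up before starting the instances.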
Good luck, and get that Nextcloud Assistant running!
Your Ollama endpoint should be http://abc.com:11434/v1; this will pick up all the models.
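To verify that endpoint from the Nextcloud host itself, a quick check like this should list the same models Nextcloud sees (abc.com is the placeholder host from above; substitute your Ollama server):

```shell
# Query the OpenAI-compatible models endpoint and print each model ID.
# abc.com:11434 is a placeholder; use your actual Ollama host.
curl -s http://abc.com:11434/v1/models | python3 -c '
import json, sys
# The /v1/models response has the shape {"object": "list", "data": [{"id": ...}, ...]}
for model in json.load(sys.stdin).get("data", []):
    print(model["id"])
'
```

If this prints your models but the Assistant still hangs, the connection is fine and the problem is on the Nextcloud side (most likely the workers).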
I am using Ollama with OpenWebUI. It is very efficient. Willing to help.
Yes, I have all the models showing up, but nothing works past that. I assume the workers have to be set up manually, as others mention?
I did not set up any workers. Which app are you using to connect with the backend?
I’m just trying the Assistant.
Download OpenWebUI and the Local AI app, punch in the info there, and it will take care of the worker setup, etc.
AFAIK, you have to set up the workers manually; there is no automatic setup. If you don’t do this, prepare to wait a couple of minutes each time you prompt the AI for something, which I’m pretty sure is not what you want.
I trust you; however, I have had no issues to date. Is there any documentation related to this workflow? I will give it a try.
Did you read the documentation? Yes, you won’t have any issues, just major delays whenever you ask the AI something. I’m pretty sure you’re not receiving answers within 10 seconds if you haven’t set up any workers.
Take the time and read the official documentation regarding workers.
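In case it saves someone a lookup: the manual setup in the docs essentially boils down to running a task-processing worker like the one below, which the systemd units then keep alive in the background. The `occ` path and the `www-data` user are assumptions for a typical install, so treat this as a sketch rather than a copy-paste recipe:

```shell
# Run one task-processing worker in the foreground (Ctrl+C to stop).
# Path and user are assumptions; adjust to your Nextcloud install.
sudo -u www-data php /var/www/nextcloud/occ background-job:worker \
    'OC\TaskProcessing\SynchronousBackgroundJob'
```

Running it in the foreground once is also a handy way to watch a prompt get picked up and confirm the pipeline works before wiring it into systemd.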
Yes to all who said the workers needed to be added… I did the work (it was easier than I thought) and voilà! It works like magic! Excited to see what all we can do now. Thanks all around!