Problems with Nextcloud and Local AI

I think the AI/Assistant is not working right now on NC30.

See the 417 HTTP errors described in "LocalAI API endpoint not working" (Issue #129, nextcloud/assistant on GitHub).

@naruto
Thank you for your response!

However, I believe there may have been a misunderstanding. My issue isn’t related to the integration of Local AI with Nextcloud AIO at this point. The problem I am facing is that the Local AI service itself is not working. Although I can access the WebUI, the Chat feature doesn’t respond, and models fail to load with various gRPC errors in the logs.

Here are the key errors I’m seeing:

4:28AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-fallback
4:28AM DBG GRPC Service for Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf will be running at: '127.0.0.1:42345'
4:28AM DBG GRPC Service state dir: /tmp/go-processmanager2255331681
4:28AM DBG GRPC Service Started
4:28AM DBG Wait for the service to start up
4:29AM ERR failed starting/connecting to the gRPC service error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:42345: connect: connection refused\""
4:29AM DBG GRPC Service NOT ready
4:29AM INF [llama-cpp] Fails: grpc service not ready
4:29AM INF [llama-cpp] Autodetection failed, trying the fallback
4:29AM INF Loading model 'Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf' with backend llama-cpp-fallback
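One hedged guess about logs like the above: when the llama.cpp gRPC backend dies before it ever listens on its port ("connection refused"), the process has often crashed on an unsupported CPU instruction (e.g. AVX) rather than failed to bind. A harmless check from inside the VM, assuming a Linux guest with `/proc/cpuinfo`:

```shell
#!/bin/sh
# List which SIMD extensions the (virtual) CPU reports.
# llama.cpp binaries are commonly built assuming AVX/AVX2; if these
# show as MISSING inside the VM, the backend may crash on startup.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
for f in avx avx2 f16c fma; do
  case " $flags " in
    *" $f "*) echo "$f: present" ;;
    *)        echo "$f: MISSING" ;;
  esac
done
```

If any of these are missing inside a VM but present on the host, the hypervisor's CPU model is masking them.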

Screenshot attached: local-ai-screen_converted.pdf (60,6 KB)

Since Local AI isn’t functioning properly, I’m currently unable to proceed with any integration or settings in Nextcloud AIO. My primary focus right now is to resolve the issues with Local AI on my server.

Any help with getting Local AI itself up and running would be greatly appreciated!


I managed to find and fix the reason why LocalAI was not working for me. I forgot to mention that my Ubuntu server is running as a Proxmox VM.

If you’re having trouble getting LocalAI to run successfully in Proxmox, you might need to enable the host CPU configuration for your virtual machine. Here are the steps to do this via the Proxmox GUI:

  1. Open Proxmox Web Interface: Access the Proxmox web GUI by navigating to your Proxmox host’s IP address in your browser.

  2. Select Your VM: In the left sidebar, click on the VM you want to configure.

  3. Navigate to Hardware: Go to the Hardware tab, where you can see all of the VM’s configured components.

  4. Edit Processor Settings: Click on Processor to open the settings for the CPU.

  5. Change CPU Type to Host: In the Type dropdown, select host. This setting will allow the VM to use the same CPU features as the host machine, which can significantly improve compatibility and performance for LocalAI and other resource-intensive applications.

  6. Apply and Restart: Click OK to save the changes. Then, restart your VM for the changes to take effect.

This configuration can help LocalAI work properly by ensuring the virtual machine has access to all the advanced features of your physical CPU, which is often necessary for running AI models effectively.
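For reference, the same change can also be made from the Proxmox host shell with the `qm` tool (the VM ID `101` below is only a placeholder for your own VM's ID):

```shell
# On the Proxmox host: set the VM's CPU type to "host" so the guest
# sees the physical CPU's feature flags (AVX, AVX2, etc.).
qm set 101 --cpu host

# Restart the VM for the change to take effect.
qm shutdown 101 && qm start 101
```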

Would you post here what exactly you changed to make it work?
I have the same problem with Nextcloud Assistant.
It is very slow (3 or 4 minutes of waiting for nothing). How can I make it faster with this background-job:worker?
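Not the original poster, but as far as I understand it, the worker mentioned above is an `occ` command that keeps a process running and picks up queued AI tasks immediately instead of waiting for the next cron run. A sketch, assuming a standard install at `/var/www/nextcloud` with the `www-data` web user (the job class below is the TaskProcessing one; check `occ background-job:worker --help` on your own version first):

```shell
# Run a long-lived worker that processes queued TaskProcessing jobs
# (Assistant tasks) as soon as they arrive, instead of waiting for
# the next cron tick. Adjust the path and user to your install.
cd /var/www/nextcloud
sudo -u www-data php occ background-job:worker \
  'OC\TaskProcessing\SynchronousBackgroundJob'
```

For it to help in practice, this needs to keep running in the background, e.g. under a systemd service or in a screen/tmux session.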

Same problem here. There is a delay before the process starts. How can we make it start immediately?

It works (kind of… really slow) using localAI, but it does not work using OpenAI.

OpenAI is working fine for me.
It was easy to set it up.

Is it working for you on NC30? I have the same 417 HTTP error as @kinimodmeyer.
OpenAI is far easier than LocalAI: you only have to create an API key and connect it from Nextcloud, but I was unable to get it working.
LocalAI is too slow. You can reduce the wait time by editing the NC cron job; I reduced it from 5 minutes to 1. But I think this design is not ideal.
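For anyone wanting to try the same thing: on a standard (non-AIO) install, that interval lives in the web server user's crontab (AIO manages cron itself, so this only applies to manual installs; the path below assumes `/var/www/nextcloud`):

```shell
# The default Nextcloud cron entry runs every 5 minutes:
#   */5 * * * * php -f /var/www/nextcloud/cron.php
# Open the www-data crontab for editing:
sudo crontab -u www-data -e
# ...and change the schedule to every minute:
#   */1 * * * * php -f /var/www/nextcloud/cron.php
```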

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.