[SOLVED] Context Chat failed

Intro

I’ve installed the new Nextcloud Context Chat app by following the installation guide (Context Chat Backend - Apps - App Store - Nextcloud).
This is the first time I’m trying to ask about the content of a specific document.
In addition, I’ve installed LocalAI, which runs without any problems.

context_chat_backend:4.0.3

Some or all of the below information will be requested if it isn’t supplied; for fastest response please provide as much as you can.

The Basics

  • Nextcloud Server version (e.g., 29.x.x):
    • Nextcloud Hub 9 (30.0.4), docker
  • Operating system and version (e.g., Ubuntu 24.04):
    • docker
  • Web server and version (e.g., Apache 2.4.25):
    • docker
  • Reverse proxy and version (e.g., nginx 1.27.2):
    • docker
  • PHP version (e.g., 8.3):
    • docker
  • Is this the first time you’ve seen this error? (Yes / No):
    • Yes
  • When did this problem seem to first start?
    • When asking a question about a specific document
  • Installation method (e.g., AIO, NCP, Bare Metal/Archive, etc.)
    • docker
  • Are you using Cloudflare, mod_security, or similar? (Yes / No)
    • No

Docker Container

Steps to replicate it (hint: details matter!):

  1. Install Nextcloud 30.0.4 (Docker)
  2. Set up the ExApp infrastructure (AppAPI)
  3. Install the Context Chat backend and the Context Chat app
  4. Ask a question about a given document

Log entries

Nextcloud log

{"reqId":"3ltkAPmSJ7ebDeVAuMJ4","level":3,"time":"2025-01-10T11:50:15+00:00","remoteAddr":"62.96.13.186","user":"xxxxxxx","app":"no app in context","method":"GET","url":"/ocs/v2.php/teams/resources/account/022b4169-5247-4b4c-a605-f540fd24c996","message":"No provider found for id account","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36","version":"30.0.4.1","exception":{"Exception":"RuntimeException","Message":"No provider found for id account","Code":0,"Trace":[{"file":"/var/www/html/lib/private/Teams/TeamManager.php","line":91,"function":"getProvider","class":"OC\\Teams\\TeamManager","type":"->","args":["account"]},{"file":"/var/www/html/core/Controller/TeamsApiController.php","line":68,"function":"getTeamsForResource","class":"OC\\Teams\\TeamManager","type":"->","args":["account","022b4169-5247-4b4c-a605-f540fd24c996","TFroese"]},{"file":"/var/www/html/lib/private/AppFramework/Http/Dispatcher.php","line":208,"function":"listTeams","class":"OC\\Core\\Controller\\TeamsApiController","type":"->","args":["account","022b4169-5247-4b4c-a605-f540fd24c996"]},{"file":"/var/www/html/lib/private/AppFramework/Http/Dispatcher.php","line":114,"function":"executeController","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->","args":[{"__class__":"OC\\Core\\Controller\\TeamsApiController"},"listTeams"]},{"file":"/var/www/html/lib/private/AppFramework/App.php","line":161,"function":"dispatch","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->","args":[{"__class__":"OC\\Core\\Controller\\TeamsApiController"},"listTeams"]},{"file":"/var/www/html/lib/private/Route/Router.php","line":302,"function":"main","class":"OC\\AppFramework\\App","type":"::","args":["OC\\Core\\Controller\\TeamsApiController","listTeams",{"__class__":"OC\\AppFramework\\DependencyInjection\\DIContainer"},{"providerId":"account","resourceId":"022b4169-5247-4b4c-a605-f540fd24c996","_route":"ocs.core.teamsapi.listteams"}]},{"file"
:"/var/www/html/ocs/v1.php","line":43,"function":"match","class":"OC\\Route\\Router","type":"->","args":["/ocsapp/teams/resources/account/022b4169-5247-4b4c-a605-f540fd24c996"]},{"file":"/var/www/html/ocs/v2.php","line":7,"args":["/var/www/html/ocs/v1.php"],"function":"require_once"}],"File":"/var/www/html/lib/private/Teams/TeamManager.php","Line":65,"message":"No provider found for id account","exception":{},"CustomMessage":"No provider found for id account"}}
{"reqId":"GI5HOn6lsvzzdoVEHFAn","level":3,"time":"2025-01-12T14:15:10+00:00","remoteAddr":"","user":"--","app":"context_chat","method":"","url":"/cron.php","message":"Error received from Context Chat Backend (ExApp)","userAgent":"--","version":"30.0.4.1","data":{"app":"context_chat","code":"404","response":"{\"error\":\"Error: source id not found\"}"}}

context_chat_backend log

1', 'server': ('1X2.XX.XX.3', 23001), 'client': ('1X2.XX.XX.31', 46400), 'scheme': 'http', 'method': 'PUT', 'root_path': '', 'path': '/loadSources', 'raw_path': b'/loadSources', 'query_string': b'', 'headers': '<...>', 'state': {}}
TRACE:    1X2.XX.XX.31:46400 - ASGI [11] Receive {'type': 'http.request', 'body': '<122368 bytes>', 'more_body': True}
TRACE:    1X2.XX.XX.31:46400 - ASGI [11] Receive {'type': 'http.request', 'body': '<77698 bytes>', 'more_body': False}
Embedding sources:
files__default: 11098 (Documents/Tesla Angebote/2019-05-08_Tesla_Model3.pdf)
Try 0 failed in exception
Try 1 failed in exception
Try 2 failed in exception
Try 3 failed in exception
Try 4 failed in exception
Try 5 failed in exception
Try 6 failed in exception
Try 7 failed in exception
Try 8 failed in exception
Try 9 failed in exception
Try 10 failed in exception
Try 11 failed in exception
Try 12 failed in exception
Try 13 failed in exception
Try 14 failed in exception
Try 15 failed in exception
Try 16 failed in exception
Try 17 failed in exception
Try 18 failed in exception
Try 19 failed in exception
Error: failed to start the embedding server
check_sources took 5.66ms
Filtered sources: ['files__default: 11098']
processing source: files__default: 11098
decoded non empty source: files__default: 11098
Converted sources to documents
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 250, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
    raise exc from None
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection.py", line 101, in handle_request
    raise exc
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection.py", line 78, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection.py", line 124, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_backends/sync.py", line 207, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/dist-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 99] Cannot assign requested address
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/app/context_chat_backend/network_em.py", line 41, in _get_embedding
    response = client.post(
               ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 1144, in post
    return self.request(
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 825, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 1014, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 249, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 99] Cannot assign requested address
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/app/context_chat_backend/vectordb/pgvector.py", line 122, in add_indocuments
    chunk_ids = self.client.add_documents(indoc.documents)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/vectorstores/base.py", line 287, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_postgres/vectorstores.py", line 885, in add_texts
    embeddings = self.embedding_function.embed_documents(texts_)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/context_chat_backend/network_em.py", line 63, in embed_documents
    return self._get_embedding(texts)  # pyright: ignore[reportReturnType]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/context_chat_backend/network_em.py", line 47, in _get_embedding
    raise EmbeddingException('Error: request to get embeddings failed') from e
context_chat_backend.types.EmbeddingException: Error: request to get embeddings failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.11/threading.py", line 1002, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 1005, in run
    result = context.run(func, *args)
  File "/app/context_chat_backend/controller.py", line 167, in wrapper
    return func(*args, **kwargs)
  File "/app/context_chat_backend/controller.py", line 330, in _
    added_sources = exec_in_proc(target=embed_sources, args=(vectordb_loader, app.extra['CONFIG'], sources))
  File "/app/context_chat_backend/utils.py", line 111, in exec_in_proc
    p.start()
  File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.11/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.11/multiprocessing/context.py", line 281, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 71, in _launch
    code = process_obj._bootstrap(parent_sentinel=child_r)
  File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/app/context_chat_backend/utils.py", line 94, in exception_wrap
    resconn.send({ 'value': fun(*args, **kwargs), 'error': None })
  File "/app/context_chat_backend/chain/ingest/injest.py", line 171, in embed_sources
    return _process_sources(vectordb, config, sources_filtered)
  File "/app/context_chat_backend/chain/ingest/injest.py", line 139, in _process_sources
    added_sources = vectordb.add_indocuments(indocuments)
  File "/app/context_chat_backend/vectordb/pgvector.py", line 136, in add_indocuments
    log_error('Error adding documents to vectordb:', e)
Message: 'Error adding documents to vectordb:'
Arguments: (EmbeddingException('Error: request to get embeddings failed'),)
Added documents to vectordb
Count of newly loaded sources: 0 / 1 
Sources: []
TRACE:    1X2.XX.XX.31:46400 - ASGI [11] Send {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO:     1X2.XX.XX.31:46400 - "PUT /loadSources HTTP/1.1" 200 OK
TRACE:    1X2.XX.XX.31:46400 - ASGI [11] Send {'type': 'http.response.body', 'body': '<21 bytes>'}
TRACE:    1X2.XX.XX.31:46400 - ASGI [11] Completed
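The decisive error in this trace is httpx.ConnectError: [Errno 99] Cannot assign requested address, raised when the backend POSTs to its embedding server. Errno 99 (EADDRNOTAVAIL) on a connect usually means the hostname resolved to an address the container cannot actually use, for example localhost mapping only to the IPv6 loopback ::1 inside a container without a working IPv6 stack. A minimal stdlib sketch to check what a hostname resolves to from inside the container (port 5000 is taken from the embedding section of config.yaml; the helper name is my own):

```python
import socket

def resolve(host: str, port: int) -> list[str]:
    """Return the numeric addresses a hostname resolves to, or [] on failure."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return []
    # sockaddr[0] is the numeric IP string for both IPv4 and IPv6 entries
    return [info[4][0] for info in infos]

if __name__ == "__main__":
    # Run inside the backend container: if 'localhost' yields only '::1'
    # (or nothing), the httpx connect will fail with Errno 99.
    print("localhost ->", resolve("localhost", 5000))
    print("127.0.0.1 ->", resolve("127.0.0.1", 5000))
```

If 'localhost' does not include 127.0.0.1 here, pinning the host in config.yaml (as done below in this thread) sidesteps the resolver entirely.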

Configuration

Apps

Enabled:
  - activity: 3.0.0
  - admin_audit: 1.20.0
  - app_api: 4.0.3
  - assistant: 2.2.0
  - bookmarks: 15.0.4
  - bruteforcesettings: 3.0.0
  - calendar: 5.0.8
  - circles: 30.0.0
  - cloud_federation_api: 1.13.0
  - comments: 1.20.1
  - contacts: 6.1.3
  - context_chat: 4.0.2
  - dashboard: 7.10.0
  - dav: 1.31.1
  - deck: 1.14.2
  - drawio: 3.0.3
  - encryption: 2.18.0
  - federatedfilesharing: 1.20.0
  - federation: 1.20.0
  - files: 2.2.0
  - files_automatedtagging: 1.20.0
  - files_markdown: 2.4.1
  - files_pdfviewer: 3.0.0
  - files_reminders: 1.3.0
  - files_sharing: 1.22.0
  - files_trashbin: 1.20.1
  - files_versions: 1.23.0
  - firstrunwizard: 3.0.0
  - gpoddersync: 3.11.0
  - groupfolders: 18.0.8
  - integration_openai: 3.3.0
  - integration_paperless: 1.0.4
  - logreader: 3.0.0
  - lookup_server_connector: 1.18.0
  - mail: 4.1.1
  - maps: 1.5.0
  - nextcloud_announcements: 2.0.0
  - notes: 4.11.0
  - notifications: 3.0.0
  - oauth2: 1.18.1
  - password_policy: 2.0.0
  - photos: 3.0.2
  - privacy: 2.0.0
  - provisioning_api: 1.20.0
  - recommendations: 3.0.0
  - related_resources: 1.5.0
  - repod: 3.5.4
  - richdocuments: 8.5.3
  - serverinfo: 2.0.0
  - settings: 1.13.0
  - sharebymail: 1.20.0
  - side_menu: 4.0.1
  - support: 2.0.0
  - survey_client: 2.0.0
  - suspicious_login: 8.0.0
  - systemtags: 1.20.0
  - text: 4.1.0
  - theming: 2.5.0
  - twofactor_backupcodes: 1.19.0
  - updatenotification: 1.20.0
  - user_status: 1.10.0
  - viewer: 3.0.0
  - weather_status: 1.10.0
  - webhook_listeners: 1.1.0-dev
  - workflowengine: 2.12.0
Disabled:
  - contactsinteraction: 1.11.0 (installed 1.11.0)
  - files_downloadlimit: 3.0.0 (installed 2.0.0)
  - files_external: 1.22.0 (installed 1.22.0)
  - llm: 1.2.1 (installed 1.2.1)
  - nextpod: 0.7.6 (installed 0.7.6)
  - twofactor_nextcloud_notification: 4.0.0
  - twofactor_totp: 12.0.0-dev
  - user_ldap: 1.21.0 (installed 1.21.0)

context_chat_backend

config.yaml

# SPDX-FileCopyrightText: 2024 Nextcloud GmbH and Nextcloud contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
debug: true
disable_aaa: false
httpx_verify_ssl: true
use_colors: true
uvicorn_workers: 1
embedding_chunk_size: 2000
doc_parser_worker_limit: 10


vectordb:
  pgvector:
    # 'connection' overrides the env var 'CCB_DB_URL'

  chroma:
    is_persistent: true
    # chroma_server_host:
    # chroma_server_http_port:
    # chroma_server_ssl_enabled:
    # chroma_server_api_default_path:

  weaviate:
    # auth_client_secret:
    # url: http://localhost:8080

embedding:
  protocol: http
  host: localhost
  port: 5000
  workers: 1
  offload_after_mins: 15 # in minutes
  request_timeout: 1800 # in seconds
  llama:
    # 'model_alias' is reserved
    # 'embedding' is always set to True
    model: multilingual-e5-large-instruct-q6_k.gguf
    n_batch: 16
    n_ctx: 8192

llm:
  nc_texttotext:

  llama:
    model_path: dolphin-2.2.1-mistral-7b.Q5_K_M.gguf
    n_batch: 512
    n_ctx: 8192
    max_tokens: 4096
    template: "<|im_start|> system \nYou're an AI assistant named Nextcloud Assistant, good at finding relevant context from documents to answer questions provided by the user. <|im_end|>\n<|im_start|> user\nUse the following documents as context to answer the question at the end. REMEMBER to excersice source critisicm as the documents are returned by a search provider that can return unrelated documents.\n\nSTART OF CONTEXT: \n{context} \n\nEND OF CONTEXT!\n\nIf you don't know the answer or are unsure, just say that you don't know, don't try to make up an answer. Don't mention the context in your answer but rather just answer the question directly. Detect the language of the question and make sure to use the same language that was used in the question to answer the question. Don't mention which language was used, but just answer the question directly in the same langauge. \nQuestion: {question} Let's think this step-by-step. \n<|im_end|>\n<|im_start|> assistant\n"
    no_ctx_template: "<|im_start|> system \nYou're an AI assistant named Nextcloud Assistant.<|im_end|>\n<|im_start|> user\n{question}<|im_end|>\n<|im_start|> assistant\n"
    end_separator: "<|im_end|>"

  ctransformer:
    model: dolphin-2.2.1-mistral-7b.Q5_K_M.gguf
    template: "<|im_start|> system \nYou're an AI assistant named Nextcloud Assistant, good at finding relevant context from documents to answer questions provided by the user. <|im_end|>\n<|im_start|> user\nUse the following documents as context to answer the question at the end. REMEMBER to excersice source critisicm as the documents are returned by a search provider that can return unrelated documents.\n\nSTART OF CONTEXT: \n{context} \n\nEND OF CONTEXT!\n\nIf you don't know the answer or are unsure, just say that you don't know, don't try to make up an answer. Don't mention the context in your answer but rather just answer the question directly.  Detect the language of the question and make sure to use the same language that was used in the question to answer the question. Don't mention which language was used, but just answer the question directly in the same langauge. \nQuestion: {question} Let's think this step-by-step. \n<|im_end|>\n<|im_start|> assistant\n"
    no_ctx_template: "<|im_start|> system \nYou're an AI assistant named Nextcloud Assistant.<|im_end|>\n<|im_start|> user\n{question}<|im_end|>\n<|im_start|> assistant\n"
    end_separator: "<|im_end|>"
    config:
      context_length: 8192
      max_new_tokens: 4096
      local_files_only: True

  hugging_face:
    model_id: gpt2
    task: text-generation
    pipeline_kwargs:
      config:
        max_length: 200
    template: ""

I analyzed the error message further:

All IP addresses are reachable from inside the container.

I found the problem!
The Docker container didn’t resolve ‘localhost’ to an IP address!
I found this in the log file embedding_model.log inside the container.

Therefore I changed the config.yaml as follows:

embedding:
  host: 127.0.0.1
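After pinning the host, a quick TCP probe from inside the backend container confirms the embedding server is reachable before retrying the document ingestion. A small stdlib sketch (the function name and the 2-second timeout are my own choices; port 5000 comes from the config above):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusal, timeout, Errno 99, and DNS failures
        return False

if __name__ == "__main__":
    # With host pinned to 127.0.0.1 in config.yaml, this should return True
    # once the embedding server has started:
    print(can_connect("127.0.0.1", 5000))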
