Hi there,
I'm trying to use an Azure OpenAI model deployment endpoint with the AI Assistant. Should that work? I get the error below. The error references the URL GET https://api.openai.com/v1/models, which might just be a hardcoded error message, since I've entered a different address (see the curl command below) as the Service-URL.
It also mentions this:
With the current configuration, the target URL used to get the models is:
https://dasa-astus2.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-06-01
However, the models endpoint apparently isn't implemented in Azure. I don't know whether the integration needs to fetch all models, or whether it could just use the endpoint I put under Service-URL.
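If the integration really needs a model list, maybe it could call something like this against the Azure resource instead. I'm only assuming here that Azure exposes a models list at /openai/models with the same api-key header; I haven't verified this, so treat it as a sketch:

curl -X GET "https://dasa-astus2.openai.azure.com/openai/models?api-version=2024-06-01" \
  -H "api-key: 7equdMVn*****************************************************"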
Thank you.
ClientException
Client error: GET https://api.openai.com/v1/models
resulted in a 401 Unauthorized
response: { "error": { "message": "Incorrect API key provided: 7kqudMVn***************************************************** (truncated…)
API request error : Client error: GET https://api.openai.com/v1/models
resulted in a 401 Unauthorized
response: { "error": { "message": "Incorrect API key provided: 7equdMVn***************************************************** (truncated…)
Or this error:
integration_openai
Error retrieving models (exc): API request error: Resource not found
This is the curl command that works:
curl -X POST "https://dasa-astus2.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-06-01" \
  -H "Content-Type: application/json" \
  -H "api-key: 7equdMVn*****************************************************" \
  -d '{"messages": [{"role": "user", "content": "Hi!"}]}'
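For comparison, my guess is that the integration is currently doing something like the following against the standard OpenAI API, which would explain the 401, since my Azure key isn't valid there (this is just my assumption about what happens under the hood):

curl -X GET "https://api.openai.com/v1/models" \
  -H "Authorization: Bearer 7equdMVn*****************************************************"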