
Azure OpenAI Models

To use a language model hosted on Azure OpenAI, specify `azure` in the `from` field and provide the following parameters, found on the Azure OpenAI Model Deployment page:

| Param | Description | Default |
| --- | --- | --- |
| `azure_api_key` | The Azure OpenAI API key from the model deployment page. | - |
| `azure_api_version` | The API version used for the Azure OpenAI service. | - |
| `azure_deployment_name` | The name of the model deployment. | Model name |
| `endpoint` | The Azure OpenAI resource endpoint, e.g., `https://resource-name.openai.azure.com`. | - |
| `azure_entra_token` | The Azure Entra token for authentication. | - |
| `responses_api` | `enabled` or `disabled`. Whether to enable invoking this model from the `/v1/responses` HTTP endpoint. | `disabled` |
| `azure_openai_responses_tools` | Comma-separated list of OpenAI-hosted tools exposed via the Responses API for this model. These hosted tools are not available from the `/v1/chat/completions` HTTP endpoint. Supported tools: `code_interpreter`, `web_search`. | - |

Only one of `azure_api_key` or `azure_entra_token` may be provided for a model configuration; a sketch using `azure_entra_token` follows the example below.

Example:

```yaml
models:
  - from: azure:gpt-4o-mini
    name: gpt-4o-mini
    params:
      endpoint: ${ secrets:SPICE_AZURE_AI_ENDPOINT }
      azure_api_version: 2024-08-01-preview
      azure_deployment_name: gpt-4o-mini
      azure_api_key: ${ secrets:SPICE_AZURE_API_KEY }

      # Responses API configuration
      responses_api: enabled
      azure_openai_responses_tools: web_search
```
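
To authenticate with Microsoft Entra instead of an API key, set `azure_entra_token` and omit `azure_api_key`. The sketch below assumes a secret named `SPICE_AZURE_ENTRA_TOKEN` holds the token; adjust the secret name to match your setup.

```yaml
models:
  - from: azure:gpt-4o-mini
    name: gpt-4o-mini
    params:
      endpoint: ${ secrets:SPICE_AZURE_AI_ENDPOINT }
      azure_api_version: 2024-08-01-preview
      azure_deployment_name: gpt-4o-mini
      # Entra token replaces azure_api_key; only one of the two may be set.
      # SPICE_AZURE_ENTRA_TOKEN is an assumed secret name for this sketch.
      azure_entra_token: ${ secrets:SPICE_AZURE_ENTRA_TOKEN }
```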

Refer to the Azure OpenAI Service models documentation for more details on available models and configurations.

Follow the Azure OpenAI Models Cookbook to try Azure OpenAI models for vector-based search and chat functionalities with structured (taxi trips) and unstructured (GitHub files) data.