
Azure Deployment Options

Spice.ai provides multiple deployment options on Microsoft Azure, enabling data and AI applications to run on Azure's global infrastructure. Whether deployed on virtual machines, a container orchestrator, or serverless containers, Spice can be tuned to meet requirements for performance, scalability, and cost efficiency.

For a complete list of Azure-compatible data connectors, AI models, and integrations, see Azure Integrations.

Deployment Options

Azure Kubernetes Service (AKS)

Run Spice.ai on Azure Kubernetes Service when the workload benefits from Kubernetes orchestration, multi-replica scale, declarative configuration, or shared cluster tenancy. AKS pairs well with the Spice Helm chart and with Argo CD or Flux GitOps workflows.

1. Provision the cluster

Provision an AKS cluster with workload identity and the OIDC issuer enabled — both are required for federated credentials to Azure services.

RG=spiceai-rg
CLUSTER=spiceai-prod
LOCATION=eastus

az group create --name $RG --location $LOCATION

az aks create \
  --resource-group $RG \
  --name $CLUSTER \
  --location $LOCATION \
  --kubernetes-version 1.31 \
  --node-count 3 \
  --node-vm-size Standard_D4s_v5 \
  --enable-cluster-autoscaler --min-count 2 --max-count 6 \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --network-plugin azure \
  --generate-ssh-keys

az aks get-credentials --resource-group $RG --name $CLUSTER

For burst or low-utilization workloads, attach virtual nodes backed by Azure Container Instances. For production, prefer Bicep or Terraform for repeatable provisioning — the Azure Verified Modules library publishes a maintained AKS module.
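As a sketch, virtual nodes can be attached with the AKS virtual-node add-on; the subnet name below is illustrative, and the add-on requires the Azure CNI network plugin enabled above plus an empty, delegated subnet in the cluster VNet:

```shell
# Attach ACI-backed virtual nodes to the existing cluster.
# "virtual-node-subnet" is an illustrative name for a dedicated,
# delegated subnet in the cluster's virtual network.
az aks enable-addons \
  --resource-group $RG \
  --name $CLUSTER \
  --addons virtual-node \
  --subnet-name virtual-node-subnet
```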

2. Configure workload identity for Azure access

Most Spice connectors (ABFS, Azure SQL, Key Vault, Azure OpenAI) accept Azure credentials from the environment. Use workload identity so pods receive scoped, federated tokens without static secrets:

# 1. Create a user-assigned managed identity
az identity create --resource-group $RG --name spiceai-identity
CLIENT_ID=$(az identity show -g $RG -n spiceai-identity --query clientId -o tsv)
PRINCIPAL_ID=$(az identity show -g $RG -n spiceai-identity --query principalId -o tsv)

# 2. Grant the identity access to Azure resources the Spicepod needs
az role assignment create \
  --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/<sub>/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/<acct>

# 3. Federate the identity with the Kubernetes ServiceAccount
ISSUER=$(az aks show -g $RG -n $CLUSTER --query oidcIssuerProfile.issuerUrl -o tsv)
az identity federated-credential create \
  --name spiceai-fed \
  --identity-name spiceai-identity \
  --resource-group $RG \
  --issuer "$ISSUER" \
  --subject system:serviceaccount:spiceai:spiceai \
  --audiences api://AzureADTokenExchange

Reference the identity from the Helm release so Spice pods inherit federated tokens via the DefaultAzureCredential chain:

# values.yaml
serviceAccount:
  create: true
  name: spiceai
  annotations:
    azure.workload.identity/client-id: "<CLIENT_ID>"
podLabels:
  azure.workload.identity/use: "true"

3. Install Spice.ai

helm repo add spiceai https://helm.spiceai.org
helm repo update

helm upgrade --install spiceai spiceai/spiceai \
  --namespace spiceai --create-namespace \
  --version 1.11.5 \
  -f values.yaml

For declarative GitOps, swap this command for an Argo CD Application or a Flux HelmRelease pointing at the same chart. See the Argo CD or Flux guides for full manifests.
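For example, an Argo CD Application targeting the same chart and version might look like the following sketch; the Argo CD install namespace (argocd) and the sync policy are assumptions:

```shell
# Sketch of an Argo CD Application pointing at the Spice Helm repository.
cat <<'EOF' > spiceai-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spiceai
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://helm.spiceai.org
    chart: spiceai
    targetRevision: 1.11.5
  destination:
    server: https://kubernetes.default.svc
    namespace: spiceai
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
# kubectl apply -f spiceai-app.yaml
```

Argo CD then reconciles the chart continuously instead of relying on one-off helm upgrade runs.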

4. Storage and ingress

For stateful acceleration (DuckDB, SQLite, Cayenne):

  • Local NVMe (recommended) — Spice acceleration is latency- and IOPS-sensitive, so the lowest-latency option is a node-local NVMe SSD on an instance family with attached NVMe (Lsv3 / Lasv3, Ddsv5 / Ddsv6, Edsv5 / Edsv6). Expose the local NVMe through the Local Volume Static Provisioner as a local-storage StorageClass. Local NVMe does not survive node replacement, so pair with a refresh strategy or a re-hydration source.
  • Premium SSD v2 — when shared / replica-attachable persistence is required, Premium SSD v2 delivers up to 80,000 IOPS and sub-millisecond latency with independently configurable IOPS and throughput. Use the Azure Disks CSI driver with a custom StorageClass (skuName: PremiumV2_LRS).
  • Premium SSD (managed-csi-premium) — use the built-in managed-csi-premium storage class only when Premium SSD v2 is unavailable in a region.
  • Azure Files (azurefile-csi) — not recommended for acceleration — use only for stateless shared artefacts that need ReadWriteMany. SMB/NFS latency negates the benefit of using a local accelerator.
  • Set stateful.enabled: true and stateful.storageClass: <chosen-class> in values.yaml.
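A Premium SSD v2 StorageClass for the Azure Disks CSI driver can be sketched as follows; the IOPS and throughput figures are illustrative assumptions and should be sized to the workload:

```shell
# Custom StorageClass for Premium SSD v2 (skuName from the list above).
# The IOPS/throughput values are illustrative, not recommendations.
cat <<'EOF' > premiumv2-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium2-disk-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  DiskIOPSReadWrite: "16000"
  DiskMBpsReadWrite: "600"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
# kubectl apply -f premiumv2-sc.yaml
```

The class name can then be referenced as stateful.storageClass in values.yaml.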
Spice.ai Enterprise

For production stateful workloads, the Spice.ai Enterprise Operator's SpicepodSet provides per-replica StatefulSets with automatic PVC resizing, workload-identity-aware ServiceAccount annotations, and configurable update strategies. For distributed query execution across scheduler/executor tiers backed by Azure Blob Storage, see SpicepodCluster.

To expose Spice externally, install the Application Gateway Ingress Controller (AGIC) or use a Standard public Load Balancer:

# values.yaml
service:
  type: LoadBalancer
  additionalAnnotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true" # internal only

For internal-only deployments, set azure-load-balancer-internal: "true" to bind to the cluster's VNet rather than a public IP.

5. Observability

The Spice Helm chart ships a PodMonitor resource for the Prometheus Operator. On AKS, the Azure Monitor managed service for Prometheus and Container insights are the common targets. Set monitoring.podMonitor.enabled: true and import the Spice Grafana dashboard into Azure Managed Grafana.
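As a sketch, the toggle can live in a small values overlay (the monitoring.podMonitor.enabled key comes from the text above; the overlay file name is illustrative):

```shell
# Values overlay enabling the PodMonitor shipped with the chart.
cat <<'EOF' > monitoring-values.yaml
monitoring:
  podMonitor:
    enabled: true
EOF
# helm upgrade --install spiceai spiceai/spiceai \
#   --namespace spiceai -f values.yaml -f monitoring-values.yaml
```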

For comprehensive guidance, refer to the Azure Kubernetes Service documentation, the AKS baseline architecture, and the Spice.ai Kubernetes Deployment Guide.

Azure Container Apps

Azure Container Apps is a serverless container platform suitable for HTTP-driven Spice.ai workloads that benefit from scale-to-zero and request-based autoscaling. Use it when a single managed container is sufficient and operating Kubernetes is not desired.

1. Create the environment

The environment is the security and networking boundary that hosts one or more container apps:

RG=spiceai-rg
ENV=spiceai-env
LOCATION=eastus

az group create --name $RG --location $LOCATION

az containerapp env create \
  --name $ENV \
  --resource-group $RG \
  --location $LOCATION \
  --logs-destination log-analytics

To reach Azure SQL, Storage, or Key Vault behind private endpoints, attach the environment to a VNet-injected subnet by adding --infrastructure-subnet-resource-id and --internal-only true.
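A VNet-injected variant of the same command might look like this sketch; the subnet resource ID is a placeholder:

```shell
# VNet-injected, internal-only environment for private-endpoint access.
# The subnet resource ID below is a placeholder.
az containerapp env create \
  --name $ENV \
  --resource-group $RG \
  --location $LOCATION \
  --logs-destination log-analytics \
  --infrastructure-subnet-resource-id \
    "/subscriptions/<sub>/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>" \
  --internal-only true
```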

2. Configure managed identity

Container Apps supports both system-assigned and user-assigned managed identities. A user-assigned identity is preferred so role assignments survive app recreation:

az identity create --resource-group $RG --name spiceai-identity
IDENTITY_ID=$(az identity show -g $RG -n spiceai-identity --query id -o tsv)
PRINCIPAL_ID=$(az identity show -g $RG -n spiceai-identity --query principalId -o tsv)

# Grant access to the resources the Spicepod connects to
az role assignment create \
  --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/<sub>/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/<acct>

3. Deploy Spice.ai

Mount Azure Files for stateful acceleration, inject secrets from Key Vault, and configure HTTP ingress on port 8090:

az containerapp create \
  --name spiceai \
  --resource-group $RG \
  --environment $ENV \
  --image spiceai/spiceai:1.11.5-models \
  --target-port 8090 \
  --ingress external \
  --transport http \
  --user-assigned $IDENTITY_ID \
  --min-replicas 1 --max-replicas 5 \
  --cpu 1.0 --memory 2.0Gi \
  --env-vars \
    SPICED_LOG=INFO \
    AZURE_CLIENT_ID=$(az identity show -g $RG -n spiceai-identity --query clientId -o tsv) \
  --secrets spiceai-key=keyvaultref:https://my-vault.vault.azure.net/secrets/spiceai-key,identityref:$IDENTITY_ID \
  --secret-volume-mount /mnt/secrets

To run multiple replicas with shared file-based acceleration, define an Azure Files storage mount and reference it from the app's revision template, then point file accelerators at the mount path (for example, duckdb_file: /data/taxi_trips.db).
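As a sketch, the share can be registered on the environment with az containerapp env storage set; the storage account and share names below are illustrative:

```shell
# Register an Azure Files share with the environment; the app's revision
# template can then mount it at a path such as /data.
# Account, key, and share names are illustrative.
az containerapp env storage set \
  --name $ENV \
  --resource-group $RG \
  --storage-name spice-data \
  --azure-file-account-name mystorageacct \
  --azure-file-account-key "<key>" \
  --azure-file-share-name spice-data \
  --access-mode ReadWrite
```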

4. Scaling rules

Spice.ai is HTTP-driven, so the default HTTP scale rule (concurrent requests per replica) is usually sufficient. For background workloads (refresh schedules, ingestion) that should not scale to zero, set --min-replicas 1 and add a custom scale rule backed by a CPU or queue metric:

az containerapp update \
  --name spiceai --resource-group $RG \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50

5. Health probes and revisions

Configure the liveness and readiness probes to use /health and /v1/ready. Container Apps creates a new revision on each update, supporting traffic splitting between revisions for canary upgrades:

az containerapp revision set-mode --name spiceai --resource-group $RG --mode multiple
az containerapp ingress traffic set --name spiceai --resource-group $RG \
  --revision-weight latest=90 spiceai--prev=10
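The probe paths can be declared in the app's YAML and applied with az containerapp update --yaml. The sketch below assumes the container listens on port 8090, matching the ingress configuration above:

```shell
# Liveness/readiness probes for the Spice container.
# Port 8090 matches the --target-port used at deployment time.
cat <<'EOF' > probes.yaml
properties:
  template:
    containers:
      - name: spiceai
        image: spiceai/spiceai:1.11.5-models
        probes:
          - type: Liveness
            httpGet:
              path: /health
              port: 8090
          - type: Readiness
            httpGet:
              path: /v1/ready
              port: 8090
EOF
# az containerapp update --name spiceai --resource-group $RG --yaml probes.yaml
```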

For more details, see the Azure Container Apps documentation and the Spice.ai Docker Deployment Guide.

Azure Container Instances (ACI)

Azure Container Instances runs Spice.ai as a single container without provisioning a cluster. It is suitable for development environments, scheduled jobs, and low-traffic deployments.

az container create \
  --resource-group my-rg \
  --name spiceai \
  --image spiceai/spiceai:latest \
  --cpu 2 --memory 4 \
  --ports 8090 50051 \
  --ip-address public \
  --environment-variables SPICED_LOG=INFO \
  --azure-file-volume-share-name spice-data \
  --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key "<key>" \
  --azure-file-volume-mount-path /data

Refer to the Azure Container Instances documentation for advanced networking, virtual network integration, and managed identity configuration.

Azure Virtual Machines

Deploy Spice.ai directly on Azure Virtual Machines for maximum control over the environment, GPU access, or large-memory instance types.

  1. Manual VM deployment — provision a VM, install the Spice CLI, and run the runtime directly or as a systemd service.

  2. Automated deployment with Bicep or Terraform — encode the VM, networking, and identity configuration as infrastructure-as-code for repeatable rollouts.
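A minimal manual path might look like the following sketch; the VM size, image alias, and install-script URL are assumptions to verify against the current Spice.ai install docs:

```shell
# Create an Ubuntu VM sized for in-memory acceleration (size is illustrative).
az vm create \
  --resource-group my-rg \
  --name spiceai-vm \
  --image Ubuntu2204 \
  --size Standard_E8s_v5 \
  --generate-ssh-keys

# Then, on the VM (via SSH):
#   curl https://install.spiceai.org | /bin/bash   # install the Spice CLI
#   spice run                                      # start the runtime
```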

For detailed guidance, refer to the Linux on Azure documentation, Bicep documentation, and Azure provider for Terraform.

Authentication

Most Azure services that Spice connects to accept explicit credentials through component parameters (for example, an azure_storage_account_key on the ABFS connector). When explicit credentials are not provided, Spice follows the standard Azure Identity DefaultAzureCredential chain, attempting credentials in this order:

  1. Environment variables:

    • AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET (service principal with secret)
    • AZURE_CLIENT_CERTIFICATE_PATH (service principal with certificate)
    • AZURE_USERNAME, AZURE_PASSWORD (resource owner password — not recommended)
  2. Workload Identity (AKS): federated tokens injected via AZURE_FEDERATED_TOKEN_FILE, AZURE_AUTHORITY_HOST, AZURE_CLIENT_ID, and AZURE_TENANT_ID. See Workload Identity for AKS.

  3. Managed identity: System-assigned or user-assigned identities on Azure VMs, AKS, Container Apps, and ACI. See What are managed identities?.

  4. Azure CLI: Cached credentials from a local az login session. Common during development.

  5. Azure Developer CLI (azd) and Azure PowerShell: Used when the corresponding CLI is signed in.

For services with explicit parameters (Blob Storage, Azure SQL, Cosmos DB, OpenAI), prefer named credentials or managed identity over environment variables in production.
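For instance, a Spicepod that names no credentials at all picks them up from the chain above; the dataset URI, account, and names in this sketch are illustrative:

```shell
# A Spicepod with no inline credentials: the ABFS connector falls back to
# the DefaultAzureCredential chain (env vars, workload identity, az login).
# The container, account, and file path are illustrative.
cat <<'EOF' > spicepod.yaml
version: v1
kind: Spicepod
name: azure-demo
datasets:
  - from: abfs://data@mystorageacct.dfs.core.windows.net/events.parquet
    name: events
EOF
# spice run   # resolves Azure credentials from the environment
```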

Role assignments

Regardless of the credential source, the principal must have the appropriate Azure role assignments (for example, Storage Blob Data Reader on a storage account, or SQL DB Contributor on Azure SQL). When a Spicepod connects to multiple Azure services, the principal must have permissions across all of them.

Resources

Documentation

Azure Marketplace

Spice.ai is not yet published to the Microsoft Azure Marketplace (coming soon). In the meantime, deploy using the spiceai/spiceai container image or the Spice Helm chart.