AWS Deployment Options
Spice.ai provides multiple deployment options on Amazon Web Services (AWS), allowing you to leverage AWS's robust infrastructure for your data and AI applications. Whether you prefer virtual machines, container orchestration, or managed services, you can deploy Spice.ai to meet your specific requirements for performance, scalability, and cost efficiency.
Benefits of Deploying on AWS
- Scalability: Easily scale your Spice.ai applications with AWS's elastic infrastructure.
- Global Reach: Deploy across AWS's worldwide regions for low-latency access.
- Integration: Connect with other AWS services like Amazon S3, Amazon RDS, and AWS Secrets Manager.
- Cost Control: Optimize expenses with various instance types and pricing models.
- Security and Compliance: Deploy Spice.ai within your AWS security perimeter using features such as VPC isolation, security groups, and IAM roles to meet organizational compliance requirements.
Deployment Options
Amazon EKS (Elastic Kubernetes Service)
Leverage Kubernetes orchestration with Amazon EKS for containerized Spice.ai deployments.
- Create an EKS Cluster:
  - Use the AWS Management Console, AWS CLI, or eksctl to create your cluster
  - Configure node groups according to your workload requirements
  - (Optional) Use EKS Fargate profiles for serverless container deployment
- Deploy Spice.ai on EKS:
  - Apply the Spice.ai Kubernetes manifests via the Helm chart (see the sketch after this list)
  - Configure persistent storage using Amazon EBS or Amazon EFS
  - Set up ingress with the AWS Network Load Balancer (NLB)
  - (Optional) Automate cluster and resource provisioning with Infrastructure as Code (IaC) tools such as AWS CloudFormation or Terraform for consistent, repeatable deployments
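A minimal sketch of the cluster creation and Helm install steps. The Helm repository URL, chart name, cluster name, and region shown here are assumptions for illustration; confirm them against the Spice.ai Kubernetes Deployment Guide before use:

```bash
# Create a small EKS cluster (adjust name, region, and node count to your workload)
eksctl create cluster --name spiceai-cluster --region us-east-1 --nodes 2

# Install Spice.ai with Helm
# (repository URL and chart name are assumptions; verify in the Spice.ai docs)
helm repo add spiceai https://helm.spiceai.org
helm repo update
helm install spiceai spiceai/spiceai

# Confirm the runtime pod is running
kubectl get pods
```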
For comprehensive instructions and advanced configuration options, refer to the Amazon EKS User Guide, EKS Best Practices Guide, and Spice.ai Kubernetes Deployment Guide.
EC2 / AWS CloudFormation
Deploy Spice.ai directly on Amazon EC2 instances for maximum control over the environment.
- Manual EC2 Deployment:
  - Launch an EC2 instance with your preferred Linux distribution
  - Install Docker
  - Run Spice.ai as a Docker container on your EC2 instance
  - (Optional) Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform to automate the provisioning, configuration, and management of EC2 resources for repeatable and consistent deployments
- Automated EC2 Deployment with CloudFormation:
  - Define your infrastructure in a CloudFormation template, including EC2 instances (using a Linux AMI), security groups, IAM roles, VPC, and subnets
  - Use EC2 `UserData` to automate Docker installation, pull the Spice.ai Docker image, retrieve configuration or secrets from AWS Parameter Store or Secrets Manager, and run the container with the required environment variables (a sketch follows this list)
  - (Optional) Add parameters to your template for VPC ID, Subnet ID, KeyPair, instance type, and secret names to enable flexible deployments
  - (Optional) Store sensitive data such as API keys in Parameter Store or Secrets Manager and reference them securely in `UserData`
  - (Optional) Deploy and manage your CloudFormation stack using the AWS Console, CLI, or CI/CD pipelines for repeatable, version-controlled infrastructure
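The script below is a hedged sketch of what such a `UserData` script might contain on Amazon Linux 2023; the same commands can be run by hand over SSH for the manual deployment path. The Parameter Store parameter name, the `SPICEAI_API_KEY` variable, the image tag, and the port mappings are illustrative assumptions rather than values from the Spice.ai documentation:

```bash
#!/bin/bash
# EC2 UserData sketch: install Docker, fetch a secret, and start Spice.ai.
# Assumes Amazon Linux 2023 and an instance profile allowing ssm:GetParameter.
set -euo pipefail

# Install and start Docker
dnf install -y docker
systemctl enable --now docker

# Retrieve an API key from Parameter Store (parameter name is a placeholder)
SPICEAI_API_KEY=$(aws ssm get-parameter \
  --name "/spiceai/api-key" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)

# Run the Spice.ai runtime container
# (image name and exposed ports are assumptions; check the Spice.ai Docker docs)
docker run -d --name spiceai \
  -e SPICEAI_API_KEY="${SPICEAI_API_KEY}" \
  -p 8090:8090 -p 50051:50051 \
  spiceai/spiceai:latest
```

In a CloudFormation template, this script would be supplied through the instance's `UserData` property (base64-encoded, for example with `Fn::Base64`).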
For detailed guidance and best practices, refer to the AWS CloudFormation User Guide, EC2 User Guide for Linux Instances, and AWS Systems Manager Parameter Store Documentation.
Amazon ECS (Elastic Container Service)
Deploy Spice.ai as containerized tasks on Amazon ECS for easy container management and flexible scaling.
- Create an ECS Cluster:
  - Choose a launch type: EC2 (manage your own EC2 instances) or Fargate (serverless).
  - Create the ECS cluster using the AWS Console, CLI, or Infrastructure as Code (CloudFormation, Terraform).
- Define a Task Definition:
  - Specify the Spice.ai Docker image, resource needs, networking, environment variables, and storage in a Task Definition.
  - (Optional) Use AWS Secrets Manager or Parameter Store to inject secrets securely.
  - Enable logging with Amazon CloudWatch.
- Deploy Spice.ai on ECS:
  - Create an ECS Service to run and manage Spice.ai tasks (see the sketch after this list).
  - Set up load balancing with NLB.
  - (Optional) Configure auto-scaling based on resource usage or CloudWatch metrics.
  - (Optional) Use CI/CD pipelines for automated updates. Manage infrastructure with CloudFormation, Terraform, or the AWS CLI.
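A hedged sketch of these steps using the AWS CLI with a Fargate launch type. The cluster name, task definition file, subnet, and security group IDs are placeholders to replace with your own values:

```bash
# Create the cluster
aws ecs create-cluster --cluster-name spiceai-cluster

# Register a task definition that references the Spice.ai image
# (spiceai-taskdef.json is a placeholder file you author for your environment)
aws ecs register-task-definition --cli-input-json file://spiceai-taskdef.json

# Create a service that keeps one Spice.ai task running on Fargate
aws ecs create-service \
  --cluster spiceai-cluster \
  --service-name spiceai \
  --task-definition spiceai \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration \
    "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
```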
For more details, see the Amazon ECS Developer Guide and Spice.ai Docker Deployment Guide.
Authentication
Most AWS services that Spice connects to have explicit parameters for configuring authentication (usually by setting an `access_key_id` and `secret_access_key`). If explicit credentials are not provided, Spice follows the standard AWS SDK behavior for loading credentials from the environment, trying the following sources in order:
- Environment Variables:
  - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
  - `AWS_SESSION_TOKEN` (if using temporary credentials)
- Shared AWS Config/Credentials Files:
  - Config file: `~/.aws/config` (Linux/Mac) or `%UserProfile%\.aws\config` (Windows)
  - Credentials file: `~/.aws/credentials` (Linux/Mac) or `%UserProfile%\.aws\credentials` (Windows)
  - The `AWS_PROFILE` environment variable can be used to specify a named profile, otherwise the `[default]` profile is used.
  - Supports both static credentials and SSO sessions
  - Example (static credentials belong in the credentials file; SSO profiles belong in the config file):

        # Static credentials
        [default]
        aws_access_key_id = YOUR_ACCESS_KEY
        aws_secret_access_key = YOUR_SECRET_KEY

        # SSO profile
        [profile sso-profile]
        sso_start_url = https://my-sso-portal.awsapps.com/start
        sso_region = us-west-2
        sso_account_id = 123456789012
        sso_role_name = MyRole
        region = us-west-2

  - Tip: To set up SSO authentication:
    - Run `aws configure sso` to configure a new SSO profile
    - Use the profile by setting `AWS_PROFILE=sso-profile`
    - Run `aws sso login --profile sso-profile` to start a new SSO session
- AWS STS Web Identity Token Credentials:
  - Used primarily with OpenID Connect (OIDC) and OAuth
  - Common in Kubernetes environments using IAM roles for service accounts (IRSA)
- ECS Container Credentials:
  - Used when running in Amazon ECS containers
  - Automatically uses the task's IAM role
  - Retrieved from the ECS credential provider endpoint
  - Relies on the `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` or `AWS_CONTAINER_CREDENTIALS_FULL_URI` environment variables, which are automatically injected by ECS.
- AWS EC2 Instance Metadata Service (IMDSv2):
  - Used when running on EC2 instances.
  - Automatically uses the instance's IAM role.
  - Retrieved securely using IMDSv2.
The connector will try each source in order until valid credentials are found. If no valid credentials are found, an authentication error will be returned.
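For local development, the first two sources are the simplest to exercise. A minimal sketch, assuming the Spice CLI's `spice run` command is used to start the runtime and the `sso-profile` name matches the example above:

```bash
# Option 1: static credentials via environment variables
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
# export AWS_SESSION_TOKEN=...   # only needed for temporary credentials

# Option 2: a named profile from ~/.aws/config or ~/.aws/credentials
export AWS_PROFILE=sso-profile
aws sso login --profile sso-profile   # refresh the SSO session if it has expired

# Start the Spice runtime; it picks up credentials from the sources above
spice run
```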
Regardless of the credential source, the IAM role or user must have appropriate permissions (e.g., `s3:ListBucket`, `s3:GetObject`) to access the service. If the Spicepod connects to multiple different AWS services, the permissions should cover all of them.