Spice Runtime Distributions
The Spice open source project provides multiple distribution variants to support different use cases and deployment scenarios.
The Spice runtime is 64-bit only. 32-bit platforms are not supported.
Image Channels
| Channel | Image | Description |
|---|---|---|
| DockerHub | spiceai/spiceai | Official release images |
| GitHub Container Registry | ghcr.io/spiceai/spiceai | Official release images |
| GitHub Container Registry (Nightly) | ghcr.io/spiceai/spiceai-nightly | Nightly builds with additional variants |
| AWS Marketplace | — | Enterprise image |
| Azure Marketplace | — | Enterprise image (coming soon) |
| Spice Cloud Platform | — | Uses Enterprise image |
| Spice.ai Enterprise | — | Uses Enterprise image |
Some variant distributions are only available in nightly images (the Data variant) or exclusively through the Spice Cloud Platform and Spice.ai Enterprise (the NAS, CUDA, and allocator variants).
Supported Platforms and Hardware Requirements
| Platform | Architecture | Minimum CPU Features | Build Prerequisites |
|---|---|---|---|
| Linux | x86_64 | AVX2, FMA, BMI1/2, LZCNT, POPCNT | — |
| Linux | aarch64 (arm64) | NEON, FP16 (FEAT_FP16), FHM (FEAT_FHM) | clang, lld |
| macOS | aarch64 (Apple Silicon) | Native (build host) | — |
| Windows | x86_64 (MSVC) | — | MSVC toolchain |
Windows support in the open source distribution is CLI (spice) only. The runtime daemon (spiced) is not supported natively on Windows in open source; use WSL instead.
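On Linux, the minimum CPU features above can be checked against /proc/cpuinfo before installing. A minimal sketch, not part of Spice itself: the `required_present` helper is illustrative, and note that on Linux LZCNT is reported under the `abm` flag rather than by name.

```shell
# Sketch: verify minimum CPU features on Linux before installing.
# required_present is an illustrative helper, not part of Spice.
required_present() {
  flags="$1"; shift
  for f in "$@"; do
    case " $flags " in
      *" $f "*) ;;
      *) echo "missing: $f"; return 1 ;;
    esac
  done
  echo "ok"
}

if [ -r /proc/cpuinfo ]; then
  case "$(uname -m)" in
    # x86_64: AVX2, FMA, BMI1/2, POPCNT (LZCNT is reported as "abm")
    x86_64)  required_present "$(grep -m1 '^flags' /proc/cpuinfo)" avx2 fma bmi1 bmi2 popcnt abm ;;
    # aarch64: NEON (asimd), FP16 (fphp), FHM (asimdfhm)
    aarch64) required_present "$(grep -m1 '^Features' /proc/cpuinfo)" asimd fphp asimdfhm ;;
  esac
fi
```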
Distribution Availability
| Distribution / Variant | Image Tag | Open Source | Spice Cloud | Enterprise |
|---|---|---|---|---|
| Default (Data + AI) | latest | ✅ | ✅ | ✅ |
| Data-only | latest-data | Nightly only | ✅ | ✅ |
| NAS (SMB + NFS) | — | Local build only | ❌ | ✅ |
| Metal (macOS) | — | Local build only | ✅ | ✅ |
| CUDA (Linux) | latest-cuda | Local build only | ✅ | ✅ |
| Allocator variants | latest-{jemalloc,mimalloc,sysalloc} | Local build only | ✅ | ✅ |
| ODBC connector | — | Local build only | ✅ | ✅ |
Default Distribution
The default distribution includes all features, including AI/ML model support. This is the recommended distribution for most users.
Included Features:
- All standard data connectors (PostgreSQL, MySQL, DuckDB, SQLite, ClickHouse, etc.)
- Embedded data accelerators (Spice Cayenne, DuckDB, SQLite)
- AI/ML model inference (LLMs, embeddings)
- Search capabilities (vector similarity and BM25 full-text search)
- Default memory allocator (snmalloc)
The PostgreSQL data accelerator is only available in nightly builds. The PostgreSQL data connector is included in all distributions.
Installation:
curl https://install.spiceai.org | /bin/bash
Docker:
docker pull ghcr.io/spiceai/spiceai:latest
# or
docker pull spiceai/spiceai:latest
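Once pulled, the image can be started with a local working directory mounted. A minimal sketch, not a definitive invocation: the ports (8090 for the HTTP API, 50051 for Arrow Flight) and the /app mount point are assumptions to verify against the documentation for your runtime version.

```shell
# Sketch only: port numbers and mount path are assumptions.
docker run --rm \
  -p 8090:8090 \
  -p 50051:50051 \
  -v "$(pwd)":/app \
  ghcr.io/spiceai/spiceai:latest
```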
Data Distribution
The data distribution excludes AI/ML model support, resulting in a smaller binary size and reduced attack surface. Use this when data federation and acceleration capabilities are needed without AI features.
Open Source: Available in nightly builds only. Cloud Platform & Enterprise: Production-ready data distribution available.
Included Features:
- All data connectors
- All data accelerators
- Default memory allocator (snmalloc)
Excluded Features:
- AI/ML model inference
- LLM support
- Embedding models
Docker (Nightly):
docker pull ghcr.io/spiceai/spiceai-nightly:latest-data
Local Build:
make install-data-only
GPU-Accelerated Distributions
Metal (macOS)
For macOS systems with Apple Silicon, the Metal distribution enables GPU-accelerated AI/ML inference.
Included Features:
- All default features
- Metal GPU acceleration for model inference
Local Build:
make install-metal
CUDA (Linux)
For Linux systems with NVIDIA GPUs, CUDA distributions enable GPU-accelerated AI/ML inference. Multiple CUDA compute capability versions are available.
CUDA distributions are available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users can build locally for development and testing.
Included Features:
- All default features
- CUDA GPU acceleration for model inference
Supported Compute Capabilities:
- 80 (A100, A30)
- 86 (RTX 30xx, A40, A10)
- 87 (Jetson Orin)
- 89 (RTX 40xx, L40, L4)
- 90 (H100, H200)
Local Build:
CUDA_COMPUTE_CAP=89 make install-cuda
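Selecting the compute capability can also be scripted from the installed GPU. A minimal sketch, assuming a driver recent enough to support nvidia-smi's `compute_cap` query field; `normalize_cap` is a hypothetical helper, not part of Spice.

```shell
# Sketch: derive CUDA_COMPUTE_CAP from the installed GPU.
# normalize_cap is a hypothetical helper: "8.9" -> "89".
normalize_cap() {
  echo "$1" | tr -d '. '
}

if command -v nvidia-smi >/dev/null 2>&1; then
  cap=$(normalize_cap "$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)")
  echo "Building for compute capability $cap"
  CUDA_COMPUTE_CAP="$cap" make install-cuda
fi
```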
NAS Distribution
The NAS (Network Attached Storage) distribution adds support for SMB and NFS data connectors, enabling federated queries against data stored on network file shares.
The NAS distribution is available with Spice.ai Enterprise. Open source users can build locally for development and testing.
Included Features:
- All default features
- SMB data connector
- NFS data connector
Local Build:
make install-nas
Allocator Variants
Different memory allocators can significantly impact performance depending on workload characteristics.
Allocator variants are available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users can build locally for development and testing.
snmalloc (Default)
The default allocator, optimized for concurrent workloads.
jemalloc
Alternative allocator that may perform better for certain memory allocation patterns.
mimalloc
Microsoft's mimalloc allocator, designed for performance and security.
System Allocator
Uses the system's default allocator (glibc malloc on Linux).
Platform Support
| Platform | Default | Data | NAS | Metal | CUDA |
|---|---|---|---|---|---|
| Linux x86_64 | ✅ | Nightly | Enterprise only | ❌ | Cloud/Enterprise |
| Linux aarch64 | ✅ | Nightly | Enterprise only | ❌ | ❌ |
| macOS aarch64 (Apple Silicon) | ✅ | Nightly | Enterprise only | ✅ | ❌ |
| Windows (WSL) | ✅ | Nightly | Enterprise only | ❌ | Cloud/Enterprise |
| Windows (Native) | ❌ | Enterprise only | Enterprise only | ❌ | Enterprise only |
Native Windows support for the Spice runtime is available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users on Windows should use Windows Subsystem for Linux (WSL).
Choosing a Distribution
| Use Case | Recommended Distribution |
|---|---|
| General purpose with AI capabilities | Default |
| Data federation only, minimal footprint | Data (nightly) |
| Network attached storage (SMB/NFS) | NAS |
| macOS with GPU acceleration | Metal |
| Linux with NVIDIA GPU | CUDA |
| Memory allocation benchmarking | Allocator variants |
Additional Connectors
Some connectors require additional dependencies and are available with the Spice Cloud Platform and Spice.ai Enterprise:
- ODBC - Connect to any ODBC-compatible data source
These can be built locally for development and testing:
make install-odbc
Platform-Specific Notes
Linux arm64
- FP16 (FEAT_FP16) is required because the `gemm` matrix multiplication library (used by the Candle ML framework) contains half-precision ARM inline assembly that requires the `fullfp16` CPU feature. This is supported on AWS Graviton2+, Ampere Altra, Apple M-series (via Linux VM), and most ARMv8.2-A+ processors.
- lld is required as the linker because the spiced debug binary is large enough to exceed GNU ld's ±128 MiB branch range for `R_AARCH64_CALL26` relocations. lld automatically inserts range extension thunks.
- Install prerequisites on Ubuntu/Debian:
sudo apt-get install -y clang lld
Linux x86_64
- Release builds target AVX2+ for optimized SIMD performance, covering Intel Haswell (2013+) and AMD Excavator (2015+) processors, including all current AWS x86_64 instance families (C6/C7/C8).
Building Custom Distributions
Custom distributions with specific feature combinations can be built:
# Build with specific features
SPICED_CUSTOM_FEATURES="duckdb,postgres,sqlite,models" make build-runtime
# Build with non-default features added to defaults
SPICED_NON_DEFAULT_FEATURES="odbc" make install
See the project Makefile for all available build targets and options.
