# Data Accelerators
Data sourced by Data Connectors can be locally materialized and accelerated using a Data Accelerator.
A Data Accelerator fetches data from a connected data source and stores it locally in an embedded acceleration engine, such as Spice Cayenne, DuckDB, or SQLite. To configure data refresh behavior, such as refreshing data on an interval, see Data Refresh.
Dataset acceleration is enabled by setting the acceleration configuration:
```yaml
datasets:
  - name: accelerated_dataset
    acceleration:
      enabled: true
```
For the complete reference specification, see datasets.
By default, datasets are locally materialized using in-memory Arrow records.
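To use a different engine, set the `engine` field in the acceleration configuration. A minimal sketch, assuming a DuckDB-backed dataset (the dataset name and `from` source are illustrative):

```yaml
datasets:
  - from: s3://my-bucket/events/   # illustrative source path
    name: events
    acceleration:
      enabled: true
      engine: duckdb   # one of: arrow (default), cayenne, duckdb, postgres, sqlite, turso
```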
## Supported Data Accelerators
| Name | Description | Status | Engine Modes |
|---|---|---|---|
| arrow | In-Memory Arrow Records | Stable | memory |
| cayenne | Spice Cayenne | Alpha (v1.9.0-rc.1+) | file |
| duckdb | Embedded DuckDB | Stable | memory, file |
| postgres | Attached PostgreSQL | Release Candidate | N/A |
| sqlite | Embedded SQLite | Release Candidate | memory, file |
| turso | Embedded Turso | Beta | memory, file |
## Choosing an Accelerator
Select the appropriate accelerator based on dataset size, query patterns, and resource constraints:
| Use Case | Recommended Accelerator | Rationale |
|---|---|---|
| Small datasets (under 1 GB), maximum speed | arrow | In-memory storage provides lowest latency |
| Medium datasets (1-100 GB), complex SQL | duckdb | Mature SQL support with memory management |
| Large datasets (100 GB - 1+ TB), scalable analytics | cayenne | Vortex columnar format scales beyond single-file limits |
| Point lookups on large datasets | cayenne | Vortex provides 100x faster random access vs Parquet |
| Simple queries, low resource usage | sqlite | Lightweight, minimal overhead |
| Async operations, concurrent workloads | turso | Native async support, modern connection pooling |
| External database integration | postgres | Leverage existing PostgreSQL infrastructure |
### Spice Cayenne vs DuckDB
Both Spice Cayenne and DuckDB support file-based acceleration, but differ in architecture and performance characteristics:
**Choose Spice Cayenne when:**
- Datasets exceed ~1 TB
- Multi-file data ingestion is required (e.g., partitioned S3 data)
- Lower memory overhead is preferred
- Workloads benefit from Vortex's 10-20x faster scans
- Point lookups and random access patterns are common (100x faster than Parquet)
**Choose DuckDB when:**
- Datasets are under ~1 TB
- Complex SQL features are required (window functions, CTEs)
- Existing DuckDB tooling integration is beneficial
- Explicit index control is required
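As a sketch, the two file-based engines are configured the same way, differing only in the `engine` value (dataset names and the S3 paths are illustrative; `cayenne` availability depends on the Alpha release noted above):

```yaml
datasets:
  - from: s3://my-bucket/partitioned-data/   # illustrative partitioned source
    name: large_events
    acceleration:
      enabled: true
      engine: cayenne    # file-based Vortex engine (Alpha)

  - from: s3://my-bucket/medium-data/        # illustrative source
    name: medium_events
    acceleration:
      enabled: true
      engine: duckdb
      mode: file         # persist to disk instead of memory
```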
## Data Types
Data Accelerators may not support all Apache Arrow data types. For each accelerator's compatibility details, see its specifications.
When accelerating a dataset using mode: memory (the default), some or all of the dataset is loaded into memory. Ensure sufficient memory is available, including overhead for queries and the runtime, especially with concurrent queries.
In-memory limitations can be mitigated by storing acceleration data on disk, which is supported by the duckdb, sqlite, and turso accelerators by specifying `mode: file`. The cayenne accelerator is file-based by default.
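For example, a minimal file-mode configuration using the sqlite engine (the dataset name is illustrative):

```yaml
datasets:
  - name: my_dataset
    acceleration:
      enabled: true
      engine: sqlite
      mode: file   # persist acceleration data to disk instead of memory
```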
## Data Accelerator Docs
📄️ Spice Cayenne Data Accelerator
Spice Cayenne Data Accelerator (Vortex) Documentation
📄️ In-Memory Arrow Data Accelerator
In-Memory Arrow Data Accelerator Documentation
📄️ DuckDB Data Accelerator
DuckDB Data Accelerator Documentation
📄️ SQLite Data Accelerator
SQLite Data Accelerator Documentation
📄️ PostgreSQL Data Accelerator
PostgreSQL Data Accelerator Documentation
📄️ Turso Data Accelerator
Turso (libSQL) Data Accelerator Documentation
## Related Documentation
- Performance Tuning - Comprehensive optimization guide
- Managing Memory Usage - Memory configuration reference
- Data Refresh - Refresh mode configuration
- Indexes - Index configuration for DuckDB, SQLite, and Turso
