Spice v1.10.2 (Dec 22, 2025)
Announcing the release of Spice v1.10.2! 🔥
v1.10.2 introduces Tiered Caching Acceleration with Localpod for multi-layer acceleration architectures, Periodic Acceleration Snapshots with configurable intervals, DynamoDB JSON Nesting for column consolidation, and Kafka/Debezium Batching for faster data ingestion. This release also includes fixes for SQLite accelerator decimal/date handling and real-time status reporting for the /v1/datasets and /v1/models API endpoints.
What's New in v1.10.2
Tiered Caching with Localpod
Multi-Layer Acceleration Architecture: The Localpod connector now supports the `caching` refresh mode, enabling tiered acceleration in which a persistent cache (e.g., file-mode DuckDB) feeds a fast in-memory cache (e.g., Arrow or memory-mode DuckDB).
Key Features:
- Automatic Cache Propagation: New cache entries automatically propagate from parent to child accelerators
- Warm Startup: Child accelerators initialize from existing parent data on startup, eliminating cold-start latency
- Flexible Tiering: Combine any accelerator engines (DuckDB, SQLite, Cayenne) across tiers
Example spicepod.yaml configuration:
```yaml
datasets:
  # Parent: persistent file-mode cache
  - from: https://api.example.com
    name: api_cache
    acceleration:
      enabled: true
      refresh_mode: caching
      engine: duckdb
      mode: file

  # Child: fast in-memory cache fed by parent
  - from: localpod:api_cache
    name: api_cache_memory
    acceleration:
      enabled: true
      refresh_mode: caching
      engine: arrow
      mode: memory
```
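With both tiers defined, queries can target the child dataset so reads are served from the in-memory tier, which is warmed from the persistent parent at startup. A minimal query sketch using the dataset names from the configuration above:

```sql
-- Served by the in-memory child accelerator (api_cache_memory);
-- the persistent parent (api_cache) retains the data across restarts.
SELECT COUNT(*) FROM api_cache_memory;
```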
For more details, refer to the Localpod Data Connector Documentation.
Periodic Acceleration Snapshots
Configurable Snapshot Intervals: A new snapshots_create_interval parameter enables periodic snapshot creation for accelerated datasets across all refresh modes. This provides better control over snapshot frequency and ensures consistent recovery points for accelerated data.
Example spicepod.yaml configuration:
```yaml
datasets:
  - from: s3://my-bucket/data.parquet
    name: my_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      refresh_mode: caching
      snapshots: enabled
      params:
        snapshots_create_interval: 60s # Write a snapshot every 60 seconds
```
For more details, refer to the Data Acceleration Documentation.
DynamoDB JSON Nesting
Consolidate Columns into JSON: The DynamoDB Data Connector now supports consolidating columns into a single JSON column using the json_object: "*" metadata option. This is useful when only a few columns are needed as discrete fields while the rest can be accessed as nested JSON.
Example spicepod.yaml configuration:
```yaml
datasets:
  - from: dynamodb:my_table
    name: my_table
    columns:
      - name: PK
      - name: SK
      - name: data_json
        metadata:
          json_object: '*' # Captures all other columns as JSON
```
Example Output: Given a DynamoDB table with columns PK, SK, name, email, and status, the resulting table schema consolidates all non-specified columns into the data_json column:
| PK | SK | data_json |
|---|---|---|
| pk_1 | sort_1 | {"name": "Alice", "email": "alice@example.com", "status": "active"} |
| pk_2 | sort_2 | {"name": "Bob", "email": "bob@example.com", "status": "inactive"} |
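For reference, the consolidated column is queried like any other column. A minimal sketch using the hypothetical table and key values from the example above; any further JSON extraction depends on the JSON functions your runtime exposes:

```sql
-- Returns the declared key columns as discrete fields and all
-- remaining attributes as a single JSON value in data_json.
SELECT PK, SK, data_json
FROM my_table
WHERE PK = 'pk_1' AND SK = 'sort_1';
```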
For more details, refer to the DynamoDB JSON Nesting Documentation.
Kafka/Debezium Batching
Faster Data Ingestion: Configure message batching for Kafka and Debezium connectors to improve data ingestion throughput. Batching reduces processing overhead by grouping multiple messages together before insertion.
Key Features:
- Configurable Batch Size: Control the maximum number of records per batch (default: 10,000)
- Configurable Batch Duration: Set the maximum wait time before flushing a partial batch (default: 1s)
Example spicepod.yaml configuration:
```yaml
datasets:
  - from: debezium:kafka-server.public.my_table
    name: my_table
    params:
      batch_max_size: 10000  # Max records per batch (default: 10000)
      batch_max_duration: 1s # Max wait time per batch (default: 1s)
```
For more details, refer to the Kafka Data Connector Documentation and Debezium Data Connector Documentation.
Additional Improvements & Bug Fixes
- Reliability: Fixed SQLite accelerator decimal and date type handling for improved data type accuracy.
- Reliability: Fixed real-time status reporting for the `/v1/datasets` and `/v1/models` API endpoints.
- Reliability: Fixed a Kafka warning when `security.protocol` is set to `PLAINTEXT`.
Contributors
Breaking Changes
No breaking changes.
Cookbook Updates
- New Cayenne Data Accelerator Recipe: A new recipe demonstrating how to accelerate a local copy of the taxi trips dataset using Cayenne as the data accelerator engine. See the Cayenne Data Accelerator Recipe for details.
- New Dataset Partitioning Recipe: A new recipe demonstrating how to partition accelerated datasets to improve query performance. See Dataset Partitioning for details.
The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.10.2, use one of the following methods:
CLI:
```bash
spice upgrade
```
Homebrew:
```bash
brew upgrade spiceai/spiceai/spice
```
Docker:
Pull the `spiceai/spiceai:1.10.2` image:
```bash
docker pull spiceai/spiceai:1.10.2
```
For available tags, see DockerHub.
Helm:
```bash
helm repo update
helm upgrade spiceai spiceai/spiceai
```
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Changelog
- Fix kafka warning when `security.protocol` is set to `PLAINTEXT` by @krinart in #8587
- fix: SQLite accelerator decimal/date handling by @phillipleblanc in #8606
- feat: Enable localpod with caching mode accelerator for tiered caching by @phillipleblanc in #8621
- Remove the `clippy::too_many_lines` lint by @phillipleblanc in #8549
- Add snapshot interval for acceleration snapshots by @phillipleblanc in #8627
- Json Nesting for DynamoDB by @krinart in #8623
- Implement batching for Kafka/Debezium + null Decimal handling by @krinart in #8622
- fix: Status field in `/v1/datasets` & `/v1/models` by @lukekim in #8633