
Spice v1.9.0-rc.2 (Nov 11, 2025)

· 32 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.9.0-rc.2! 🌶️

This is the second release candidate for v1.9.0, which introduces Spice Cayenne, a new high-performance data accelerator built on the Vortex columnar format that delivers better-than-DuckDB performance without single-file scaling limitations, along with a preview of Multi-Node Distributed Query based on Apache Ballista. v1.9.0-rc.2 also upgrades to DataFusion v50 and DuckDB v1.4.1 for even higher query performance, expands search capabilities with full-text search on views and multi-column embeddings, brings significant DynamoDB and DuckDB accelerator improvements, extends the HTTP data connector to query endpoints as tables, and delivers many security and reliability improvements.

What's New in v1.9.0-rc.2

Cayenne Data Accelerator (Beta)

Introducing Cayenne, "SQL as an Acceleration Format": a new high-performance Data Accelerator that simplifies multi-file data acceleration by using an embedded database (SQLite) for metadata while storing data in the Vortex columnar format, a Linux Foundation project. Cayenne delivers query and ingestion performance better than DuckDB's file-based acceleration, without DuckDB's memory overhead or the scaling challenges of single DuckDB files.

Cayenne uses SQLite to manage acceleration metadata (schemas, snapshots, statistics, file tracking) through simple SQL transactions, while storing data in Vortex's compressed columnar format. This architecture provides the following key features:

  • SQLite + Vortex Architecture: All metadata is stored in SQLite tables with standard SQL transactions, while data lives in Vortex's compressed, chunked columnar format designed for zero-copy access and efficient scanning.
  • Simplified Operations: No complex file hierarchies, no JSON/Avro metadata files, no separate catalog servers; just SQL tables and Vortex data files. The entire metadata schema is intentionally simple for maximum reliability.
  • Fast Metadata Access: A single SQL query retrieves all metadata needed for query planning; no multiple round trips to storage, no S3 throttling, no reconstruction of metadata state from scattered files.
  • Efficient Small Changes: Dramatically reduces small file proliferation. Snapshots are just rows in SQLite tables, not new files on disk. Supports millions of snapshots without performance degradation.
  • High Concurrency: Changes consist of two steps: stage Vortex files (if any), then run a single SQL transaction. Much faster conflict resolution and support for many more concurrent updates than file-based formats.
  • Advanced Data Lifecycle: Full ACID transactions, delete support, and retention SQL execution on refresh commit.

Example Spicepod.yml configuration:

datasets:
  - from: s3:my_table
    name: accelerated_data_30d
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      refresh_mode: append
      retention_sql: DELETE FROM accelerated_data_30d WHERE created_at < NOW() - INTERVAL '30 days'
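
Once accelerated, the dataset is queried like any other table; for example (illustrative, using the dataset name from the configuration above):

SELECT count(*) AS recent_rows
FROM accelerated_data_30d
WHERE created_at > NOW() - INTERVAL '7 days';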

Note: the Cayenne Data Accelerator is in Beta and has known limitations.

For more details, refer to the Cayenne Documentation, the Vortex project, and the DuckLake announcement that partly inspired this design.

Multi-Node Distributed Query (Preview)

Apache Ballista Integration: Spice now supports distributed query execution based on Apache Ballista, enabling queries to be distributed across multiple executor nodes for improved performance on large datasets. This feature is in preview in v1.9.0-rc.2.

Architecture:

A distributed Spice cluster consists of:

  • Scheduler: Responsible for distributed query planning and work queue management for the executor fleet
  • Executors: One or more nodes responsible for running physical query plans

Getting Started:

Start a scheduler instance using an existing Spicepod. The scheduler is the only spiced instance that needs to be configured:

# Start scheduler (note the flight bind address override if you want it reachable outside localhost)
spiced --cluster-mode scheduler --flight 0.0.0.0:50051

Start one or more executors configured with the scheduler's flight URI:

# Start executor (automatically selects a free port if 50051 is taken)
spiced --cluster-mode executor --scheduler-url spiced://localhost:50051

Query Execution:

Queries run through the scheduler will now show a distributed_plan in EXPLAIN output, demonstrating how the query is distributed across executor nodes:

EXPLAIN SELECT count(id) FROM my_dataset;

Current Limitations:

  • Accelerated datasets are currently not supported. This feature is designed for querying partitioned data lake formats (Parquet, Delta Lake, Iceberg, etc.)
  • The feature is in preview and may have stability or performance limitations
  • Specific acceleration support is planned for future releases

DataFusion v50 Upgrade

Spice.ai is built on the Apache DataFusion query engine. The v50 release brings significant performance improvements and enhanced reliability:

Performance Improvements 🚀:

  • Dynamic Filter Pushdown: Enhanced dynamic filter pushdown for custom ExecutionPlans, ensuring filters propagate correctly through all physical operators for improved query performance.

  • Partition Pruning: Expanded partition pruning support ensures that unnecessary partitions are skipped when filters are not used, reducing data scanning overhead and improving query execution times.

Apache Spark Compatible Functions: Added support for Spark-compatible functions including array, bit_get/bit_count, bitmap_count, crc32/sha1, date_add/date_sub, if, last_day, like/ilike, luhn_check, mod/pmod, next_day, parse_url, rint, and width_bucket.
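
A quick illustration of a few of these functions with literal arguments (illustrative only; the aliases are arbitrary):

SELECT luhn_check('79927398713') AS valid_card,
       last_day(DATE '2025-11-11') AS month_end,
       mod(10, 3) AS remainder,
       width_bucket(5.35, 0.0, 10.0, 5) AS bucket;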

Bug Fixes & Reliability: Resolved issues with partition name validation and empty execution plans when vector index lists are empty. Fixed timestamp support for partition expressions, enabling better partitioning for time-series data.

See the Apache DataFusion 50.0.0 Release for more details.

DuckDB v1.4.1 Upgrade and Accelerator Improvements

DuckDB v1.4.1: DuckDB has been upgraded to v1.4.1, which includes several performance optimizations.

Composite ART Index Support: DuckDB in Spice now supports composite (multi-column) Adaptive Radix Tree (ART) indexes for accelerated table scans. When queries filter on multiple columns fully covered by a composite index, the optimizer automatically uses index scans instead of full table scans, delivering significant performance improvements for selective queries.

Example configuration:

datasets:
  - from: file://data.parquet
    name: sales
    acceleration:
      enabled: true
      engine: duckdb
      indexes:
        '(region, product_id)': enabled

Performance example with composite index on 7.5M rows:

SELECT * FROM sales WHERE region = 'US' AND product_id = 12345;

-- Without index: 0.282s
-- With composite index (region, product_id): 0.037s
-- Performance improvement: 7.6x faster with composite index

DuckDB Intermediate Materialization: Queries with indexes now use intermediate materialization (WITH ... AS MATERIALIZED) to leverage faster index scans. Currently supported for non-federated queries (query_federation: disabled) against a single table with indexes only. When predicates cover more columns than the index, the optimizer rewrites queries to first materialize index-filtered results, then apply remaining predicates. This optimization can deliver significant performance improvements for selective queries.

Example configuration:

datasets:
  - from: file://sales_data.parquet
    name: sales
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      params:
        query_federation: disabled # Required currently for intermediate materialization
      indexes:
        '(region, product_id)': enabled

Performance example:

-- Query with indexed columns (region, product_id) plus additional filter (amount)
SELECT * FROM sales
WHERE region = 'US' AND product_id = 12345 AND amount > 1000;

-- Optimized execution time: 0.031s (with intermediate materialization)
-- Standard execution time: 0.108s (without optimization)
-- Performance improvement: ~3.5x faster

The optimizer automatically rewrites the query to:

WITH _intermediate_materialize AS MATERIALIZED (
  SELECT * FROM sales WHERE region = 'US' AND product_id = 12345
)
SELECT * FROM _intermediate_materialize WHERE amount > 1000;

Parquet Buffering for Partitioned Writes: DuckDB partitioned writes in table mode now support Parquet buffering, reducing memory usage and improving write performance for large datasets.

Retention SQL on Refresh Commit: DuckDB accelerations now support running retention SQL on refresh commit, enabling automatic data cleanup and lifecycle management during refresh operations.

UTC Timezone for DuckDB: DuckDB now uses UTC as the default timezone, ensuring consistent behavior for time-based queries across different environments.

Example Spicepod.yml configuration:

datasets:
  - from: s3://my_bucket/large_table/
    name: partitioned_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      retention:
        sql: DELETE FROM partitioned_data WHERE event_time < NOW() - INTERVAL '7 days'

HTTP Data Connector

  • Querying endpoints as tables: The HTTP/HTTPS Data Connector now supports querying HTTP endpoints directly as tables in SQL queries with dynamic filters. This feature turns REST APIs into queryable data sources, making it easy to integrate external service data.

  • Query an HTTP endpoint that returns structured data (JSON, CSV, etc.) as if it were a database table

  • Configurable retry logic, timeouts, and POST request support for more complex API interactions

Example Spicepod.yml configuration:

datasets:
  - from: https://api.tvmaze.com
    name: tvmaze
    params:
      file_format: json
      max_retries: 3
      client_timeout: 10s

Example SQL query:

SELECT request_path, request_query, content
FROM tvmaze
WHERE request_path = '/search/people' AND request_query = 'q=michael'
LIMIT 10;

If a request_body is supplied, it will be POSTed to the endpoint:

Example SQL query:

SELECT request_path, request_query, content
FROM tvmaze
WHERE request_path = '/search/people' AND request_query = 'q=michael' AND request_body = '{"name": "michael"}'
LIMIT 10;

HTTP endpoints can be accelerated using refresh_sql:

datasets:
  - from: https://api.tvmaze.com
    name: tvmaze
    acceleration:
      enabled: true
      refresh_mode: full
      refresh_sql: |
        SELECT request_path, request_query, content
        FROM tvmaze
        WHERE request_path = '/search/people'
          AND request_query IN ('q=michael', 'q=luke')

DynamoDB Data Connector Improvements

Improved Query Performance: The DynamoDB Data Connector now includes improved filter handling for edge cases, parallel scan support for faster data ingestion, and better error handling for misconfigured queries. These improvements enable more reliable and performant access to DynamoDB data.

Example Spicepod.yml configuration:

datasets:
  - from: dynamodb:my_table
    name: ddb_data
    params:
      scan_segments: 10 # Defaults to `auto`, which calculates optimal segments based on the number of rows
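
Queries against the connected table are unchanged; a minimal sketch with hypothetical column names:

SELECT * FROM ddb_data WHERE user_id = '42' LIMIT 100;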

S3 Versioning Support

Atomic Range Reads for Versioned Files: Spice now supports S3 Versioning for all connectors using object-store (S3, Delta Lake, etc.), ensuring range reads over versioned files are atomically correct. When S3 versioning is enabled, Spice automatically tracks version IDs during file discovery and uses them for all subsequent range reads, preventing inconsistencies from concurrent file modifications.

Current limitations:

  • Multi-file connections (e.g., partitioned datasets) do not yet support version tracking across all files

Version tracking is automatic when S3 versioning is enabled on the bucket; no additional configuration is required.

Search & Embeddings Enhancements

Full-Text Search on Views: Full-text search indexes are now supported on views, enabling advanced search scenarios over pre-aggregated or transformed data. This extends the power of Spice's search capabilities beyond base datasets.

Multi-Column Embeddings on Views: Views now support embedding columns, enabling vector search and semantic retrieval on view data. This is useful for search over aggregated or joined datasets.

Vector Engines on Views: Vector search engines are now available for views, enabling similarity search over complex queries and transformations.

Example Spicepod.yml configuration:

views:
  - name: aggregated_reviews
    sql: SELECT review_id, review_text FROM reviews WHERE rating > 4
    embeddings:
      - column: review_text
        model: openai:text-embedding-3-small
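
With embeddings defined on the view, it can be searched with the vector_search table function; a minimal sketch (the exact signature may differ, see the search documentation):

SELECT review_id, review_text
FROM vector_search(aggregated_reviews, 'espresso with chocolate notes')
LIMIT 5;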

Dedicated Query Thread Pool (Now Enabled by Default)

Dedicated Query Thread Pool: Query execution and accelerated refreshes now run on their own dedicated thread pool, separate from the HTTP server. This prevents heavy query workloads from slowing down API responses, keeping health checks fast and avoiding unnecessary Kubernetes pod restarts under load.

This feature was opt-in in previous releases and is now enabled by default in v1.9.0-rc.2. To disable it and revert to the previous behavior, add the following spicepod.yaml configuration:

runtime:
  params:
    dedicated_thread_pool: none

Query Performance Optimizations

Stale-While-Revalidate Cache Control: Query results now support stale-while-revalidate cache control, allowing stale cached data to be served immediately while the cache entry is asynchronously refreshed in the background. This improves response times for frequently accessed queries while maintaining data freshness. Requires the cache key type to be set to raw SQL (sql) for proper operation.

Optimized Prepared Statements: Prepared statement handling has been optimized for better performance with parameterized queries, reducing planning overhead and improving execution time for repeated queries.

Large RecordBatch Chunking: Large Arrow RecordBatch objects are now automatically chunked to control memory usage during query execution, preventing memory exhaustion for queries returning large result sets.

Query Result Cache: Stale-While-Revalidate

HTTP Cache-Control Support: The query result cache now supports the stale-while-revalidate Cache-Control directive, enabling faster response times by serving stale cached results immediately while asynchronously refreshing the cache in the background. This feature is particularly useful for applications that can tolerate slightly stale data in exchange for improved performance.

How it works:

When a cache entry is stale but within the stale-while-revalidate window, Spice will:

  1. Immediately return the stale cached result to the client
  2. Asynchronously re-execute the query in the background to refresh the cache
  3. Serve subsequent requests from the refreshed cache entry

Configuration:

Use the Cache-Control HTTP header with the stale-while-revalidate directive:

Cache-Control: max-age=300, stale-while-revalidate=60

This configuration caches results for 5 minutes (300 seconds), and allows serving stale results for an additional 60 seconds while refreshing in the background.
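
A minimal sketch using curl against the HTTP SQL endpoint (assumes a default local deployment listening on port 8090):

curl -X POST http://localhost:8090/v1/sql \
  -H 'Content-Type: text/plain' \
  -H 'Cache-Control: max-age=300, stale-while-revalidate=60' \
  -d 'SELECT count(*) FROM my_dataset'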

Requirements:

  • Must use plan or raw SQL cache keys (set cache_key_type to sql or plan in results_caching configuration)
  • Background revalidation re-executes queries through the normal query path
  • Timestamp tracking automatically determines cache entry age for staleness checks

Example configuration via HTTP header:

POST /v1/sql
Cache-Control: max-age=600, stale-while-revalidate=120
X-Cache-Key-Type: sql

This feature improves application responsiveness while ensuring data freshness through background updates.

Security & Reliability Improvements

Enhanced HTTP Client Security: HTTP client usage across the runtime has been hardened with improved TLS validation, certificate pinning for critical endpoints, and better error handling for network failures.

ODBC Connector Improvements: Removed unwrap calls from the ODBC connector, improving error handling and reliability. Fixed secret handling and Kubernetes secret integration.

CLI Permissions Hardening: Tightened file permissions for the CLI and install script, ensuring secure defaults for configuration files and credentials.

Oracle Instant Client Pinning: Oracle Instant Client downloads are now pinned to specific SHAs, ensuring reproducible builds and preventing supply chain attacks.

AWS Authentication Improvements

Improved Credential Retry Logic: AWS SDK credential initialization has been significantly improved with more robust retry logic and better error handling. The system now automatically retries transient credential resolution failures using Fibonacci backoff, allowing Spice to tolerate extended AWS outages (up to ~48 hours) without manual intervention.

Key features:

  • Automatic retry with backoff: Implements Fibonacci backoff for transient credential failures (network issues, temporary AWS service disruptions), with retry intervals that grow along the Fibonacci sequence up to the configured maximum
  • Configurable retry limits: Supports up to 300 retry attempts with a maximum retry interval of 600 seconds
  • Better error handling: Distinguishes between retryable errors (connector errors) and non-retryable errors (misconfiguration)
  • Unauthenticated access support: Properly supports unauthenticated access to public S3 buckets without requiring credentials
  • Improved error messages: Provides detailed logging with attempt numbers, retry intervals, and error context for better troubleshooting

The improvements ensure more reliable AWS service integration, particularly in environments with intermittent network connectivity or during AWS service degradations.

Observability & Tracing

DataFusion Log Emission: The Spice runtime now emits DataFusion internal logs, providing deeper visibility into query planning and execution for debugging and performance analysis.

AI Completions Tracing: Fixed tracing so that ai_completions operations are correctly parented under sql_query traces, improving observability for AI-powered queries.

Git Data Connector (Alpha)

Version-Controlled Data Access: The new Git Data Connector (Alpha) enables querying datasets stored in Git repositories. This connector is ideal for use cases involving configuration files, documentation, or any data tracked in version control.

Example Spicepod.yml configuration:

datasets:
  - from: git:https://github.com/myorg/myrepo
    name: git_metrics
    params:
      file_format: csv
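
Once loaded, files matching the configured format are queryable as a single table (illustrative):

SELECT * FROM git_metrics LIMIT 10;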

For more details, refer to the Git Data Connector Documentation.

Spice Java SDK 0.4.0

The Spice Java SDK has been upgraded with support for a configurable Arrow memory limit: spice-java v0.4.0

SpiceClient client = SpiceClient.builder()
    .withArrowMemoryLimitMB(1024) // 1GB limit
    .build();

CLI Improvements

Install Specific Versions: The spice install command now supports installing specific versions of the Spice runtime and CLI. This enables easy version management, downgrading, or installation of specific releases for testing or compatibility requirements.

Usage:

# Install a specific version
spice install v1.8.3

# Install a specific version with AI flavor
spice install v1.8.3 ai

# Install latest version (existing behavior)
spice install
spice install ai

Note: Homebrew installations require manual version management via brew install spiceai/spiceai/spice@<version>.

Persistent Query History: The Spice CLI REPL (SQL, search, and chat interfaces) now persists command history to ~/.spice/query_history.txt, making your query history available across sessions. The history file is automatically created if it doesn't exist, with graceful fallback if the home directory cannot be determined.

New REPL Commands:

  • .clear - Clear the screen using ANSI escape codes for a clean workspace
  • .clear history - Clear the query history and persist the cleared state, removing all stored commands

Tab Completion: Tab completion now includes suggestions based on your command history, making it faster to re-run or modify previous queries.

Example usage:

sql> SELECT * FROM my_table;
sql> .clear # Clears the screen
sql> .clear history # Clears command history
sql> # Use arrow keys or tab to access previous commands

Additional Improvements & Bug Fixes

  • Reliability: Fixed refresh worker panics with recovery handling to prevent runtime crashes during acceleration refreshes.
  • Reliability: Improved error messages for missing or invalid spicepod.yaml files, providing actionable feedback for misconfiguration.
  • Reliability: Fixed DuckDB metadata pointer loading issues for snapshots.
  • Performance: Ensured ListingTable partitions are pruned correctly when filters are not used.
  • Reliability: Fixed vector dimension determination for partitioned indexes.
  • Search: Fixed casing issues in Reciprocal Rank Fusion (RRF) for hybrid search queries.
  • Search: Fixed search field handling as metadata for chunked search indexes.
  • Validation: Added timestamp support for partition expressions.
  • Validation: Fixed regexp_match function for DuckDB datasets.
  • Validation: Fixed partition name validation for improved reliability.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

New HTTP Data Connector Recipe: A new recipe demonstrates how to query REST APIs and HTTP(S) endpoints. See the HTTP Connector Recipe for details.

The Spice Cookbook includes 82 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.9.0-rc.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.9.0-rc.2 image:

docker pull spiceai/spiceai:1.9.0-rc.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Dependencies

Changelog

Spice v1.7.0 (Sep 23, 2025)

· 21 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.7.0! ⚡

Spice v1.7.0 upgrades to DataFusion v49 for improved performance and query optimization, introduces real-time full-text search indexing for CDC streams, EmbeddingGemma support for high-quality embeddings, new search table functions powering the /v1/search API, embedding request caching for faster and cost-efficient search and indexing, and OpenAI Responses API tool calls with streaming. This release also includes numerous bug fixes across CDC streams, vector search, the Kafka Data Connector, and error reporting.

What's New in v1.7.0

DataFusion v49 Highlights

[Figure: DataFusion ClickBench performance graph. Source: DataFusion 49.0.0 Release Blog.]

Performance Improvements 🚀

  • Equivalence System Upgrade: Faster planning for queries with many columns, enabling more sophisticated sort-based optimizations.
  • Dynamic Filters & TopK Pushdown: Queries with ORDER BY and LIMIT now use dynamic filters and physical filter pushdown, skipping unnecessary data reads for much faster top-k queries.
  • Compressed Spill Files: Intermediate files written during sort/group spill to disk are now compressed, reducing disk usage and improving performance.
  • WITHIN GROUP for Ordered-Set Aggregates: Support for ordered-set aggregate functions (e.g., percentile_disc) with WITHIN GROUP.
  • REGEXP_INSTR Function: Find regex match positions in strings.

Spice Runtime Highlights

EmbeddingGemma Support: Spice now supports EmbeddingGemma, Google's state-of-the-art embedding model for text and documents. EmbeddingGemma provides high-quality, efficient embeddings for semantic search, retrieval, and recommendation tasks. You can use EmbeddingGemma via HuggingFace in your Spicepod configuration:

Example spicepod.yml snippet:

embeddings:
  - from: huggingface:huggingface.co/google/embeddinggemma-300m
    name: embeddinggemma
    params:
      hf_token: ${secrets:HUGGINGFACE_TOKEN}

Learn more about EmbeddingGemma in the official documentation.

POST /v1/search API Uses Search Table Functions: The /v1/search API now uses the new text_search and vector_search table functions for improved performance.
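
These table functions can also be called directly from SQL; a minimal sketch, assuming a dataset named questions with a full-text index:

SELECT * FROM text_search(questions, 'kubernetes probes') LIMIT 10;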

Embedding Request Caching: The runtime now supports caching embedding requests, reducing latency and cost for repeated content and search requests.

Example spicepod.yml snippet:

runtime:
  caching:
    embeddings:
      enabled: true
      max_size: 128mb
      item_ttl: 5s

See the Caching documentation for details.

Real-Time Indexing for Full-Text Search: Full-text search indexing is now supported for connectors that enable real-time changes, such as Debezium CDC streams. Adding a full-text index on a column with refresh_mode: changes works as it does for full/append-mode refreshes, enabling instant search on new data.

Example spicepod.yml snippet:

datasets:
  - from: debezium:cdc.public.question
    name: questions
    acceleration:
      enabled: true
      engine: duckdb
      primary_key: id
      refresh_mode: changes # Use 'changes'
      params: *kafka_params
    columns:
      - name: title
        full_text_search:
          enabled: true # Enable full-text-search indexing
          row_id:
            - id

OpenAI Responses API Tool Calls with Streaming: The OpenAI Responses API now supports tool calls with streaming, enabling advanced model interactions such as web_search and code_interpreter with real-time response streaming. This allows you to invoke OpenAI-hosted tools and receive results as they are generated.

Learn more in the OpenAI Model Provider documentation.

Runtime Output Level Configuration: You can now set the output_level parameter in the Spicepod runtime configuration to control logging verbosity in addition to the existing CLI and environment variable support. Supported values are info, verbose, and very_verbose. The value is applied in the following priority: CLI, environment variables, then YAML configuration.

Example spicepod.yml snippet:

runtime:
  output_level: info # or verbose, very_verbose

For more details on configuring output level, see the Troubleshooting documentation.

Bug Fixes

Several bugs and issues have been resolved in this release, including:

  • CDC Streams: Fixed issues where refresh_mode: changes could prevent the Spice runtime from becoming Ready, and improved support for full-text indexing on CDC streams.
  • Vector Search: Fixed bugs where the vector search HTTP pipeline could not find more than one IndexedTableProvider, and resolved errors with field mismatches in the vector_search UDTF.
  • Kafka Integration: Improved Kafka schema inference with configurable sample size, improved consumer group persistence for SQLite and Postgres accelerations, and added cooperative mode support.
  • Perplexity Web Search: Fixed bug where Perplexity web search sometimes used incorrect query schema (limit).
  • Databricks: Fixed issue with unparsing embedded columns.
  • Error Reporting: ThrottlingException is now reported correctly instead of as InternalError.
  • Iceberg Data Connector: Added support for LIMIT pushdown.
  • Amazon S3 Vectors: Fixed ingestion issues with zero-vectors and improved handling when vector index is full.
  • Tracing: Fixed vector search tracing to correctly report SQL status.

Contributors

New Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

The Spice Cookbook includes 78 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.7.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.7.0 image:

docker pull spiceai/spiceai:1.7.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Dependencies

Changelog

Spice v1.6.0 (Aug 26, 2025)

· 22 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.6.0! 🔥

Spice 1.6.0 upgrades DataFusion to v48, reducing expression memory footprint by ~50% for faster planning and lower memory usage, eliminating unnecessary projections in queries, optimizing string functions like ascii and character_length for up to a 3x speedup, and accelerating unbounded aggregate window functions by 5.6x. The release adds Kafka and MongoDB connectors for real-time streaming and NoSQL data acceleration, supports the OpenAI Responses API for advanced model interactions including OpenAI-hosted tools like web_search and code_interpreter, improves the OpenAI Embeddings Connector with usage tier configuration for higher throughput via increased concurrent requests, introduces Model2Vec embeddings for ultra-low-latency encoding, and improves the Amazon S3 Vectors engine to support multi-column primary keys.

What's New in v1.6.0

DataFusion v48 Highlights

Spice.ai is built on the DataFusion query engine. The v48 release brings:

Performance & Size Improvements 🚀: Expression memory footprint was reduced by ~50%, resulting in faster planning and lower memory usage, with planning times improved by 10-20%. There are now fewer unnecessary projections in queries. The string functions ascii and character_length were optimized, with character_length achieving up to a 3x speedup. Queries with unbounded aggregate window functions are up to 5.6x faster by avoiding unnecessary computation for constant results across partitions. The Expr struct size was reduced from 272 to 144 bytes.

New Features & Enhancements ✨: Support was added for ORDER BY ALL for easy ordering of all columns in a query.
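
For example (illustrative, with a hypothetical sales_summary table):

SELECT city, region, sales
FROM sales_summary
ORDER BY ALL; -- equivalent to ORDER BY city, region, sales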

See the Apache DataFusion 48.0.0 Blog for details.

Runtime Highlights

Amazon S3 Vectors Multi-Column Primary Keys: The Amazon S3 Vectors engine now supports datasets with multi-column primary keys. This enables vector indexes for datasets where more than one column forms the primary key, such as those splitting documents into chunks for retrieval contexts. For multi-column keys, Spice serializes the keys using arrow-json format, storing them as single string keys in the vector index.
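
For instance, a two-column primary key (doc_id, chunk) would be stored as a single serialized string key roughly like the following (illustrative; the exact layout follows the dataset schema):

{"doc_id": "report-2025", "chunk": 3}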

Model2Vec Embeddings: Spice now supports model2vec static embeddings with a new model2vec embeddings provider, offering models up to 500x faster and 15x smaller than full sentence transformers and enabling scenarios that require low-latency, high-throughput encoding.

embeddings:
  - from: model2vec:minishlab/potion-base-8M # HuggingFace model
    name: potion
  - from: model2vec:path/to/my/local/model # local model
    name: local

Learn more in the Model2Vec Embeddings documentation.

Kafka Data Connector: Use from: kafka:<topic> to ingest data directly from Kafka topics for integration with existing Kafka-based event streaming infrastructure, providing real-time data acceleration and query without additional middleware.

Example Spicepod.yml:

- from: kafka:orders_events
  name: orders
  acceleration:
    enabled: true
    refresh_mode: append
  params:
    kafka_bootstrap_servers: server:9092

Learn more in the Kafka Data Connector documentation.

MongoDB Data Connector: Use from: mongodb:<dataset> to access and accelerate data stored in MongoDB, deployed on-premises or in the cloud.

Example spicepod.yml:

datasets:
  - from: mongodb:my_dataset
    name: my_dataset
    params:
      mongodb_host: localhost
      mongodb_db: my_database
      mongodb_user: my_user
      mongodb_pass: password

Learn more in the MongoDB Data Connector documentation.

OpenAI Responses API Support: The OpenAI Responses API (/v1/responses) is now supported, which is OpenAI's most advanced interface for generating model responses.

To enable the /v1/responses HTTP endpoint, set the responses_api parameter to enabled:

Example spicepod.yml:

models:
  - name: openai_model_using_responses_api
    from: openai:gpt-4.1
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      responses_api: enabled # Enable the /v1/responses endpoint for this model

Example curl request:

curl http://localhost:8090/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "input": "Tell me a three sentence bedtime story about Spice AI."
  }'

To use responses in spice chat, use the --responses flag.

Example:

spice chat --responses # Use the `/v1/responses` endpoint for all completions instead of `/v1/chat/completions`

Use OpenAI-hosted tools supported by OpenAI's Responses API by specifying the openai_responses_tools parameter:

Example spicepod.yml:

models:
  - name: test
    from: openai:gpt-4.1
    params:
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }
      tools: sql, list_datasets
      responses_api: enabled
      openai_responses_tools: web_search, code_interpreter # 'code_interpreter' or 'web_search'

These OpenAI-specific tools are only available from the /v1/responses endpoint. Any other tools specified via the tools parameter are available from both the /v1/chat/completions and /v1/responses endpoints.

Learn more in the OpenAI Model Provider documentation.

OpenAI Embeddings & Models Connectors Usage Tier: The OpenAI Embeddings and Models Connectors now support specifying the account usage tier for embeddings and model requests, improving the performance of generating text embeddings or calling models during dataset load and search by increasing concurrent requests.

Example spicepod.yml:

embeddings:
  - from: openai:text-embedding-3-small
    name: openai_embed
    params:
      openai_usage_tier: tier1

When the configured usage tier matches your OpenAI account's actual tier, the Embeddings and Models Connectors increase the maximum number of concurrent requests to match.

Learn more in the OpenAI Model Provider documentation.

Contributors

New Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

The Spice Cookbook includes 77 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.6.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.6.0 image:

docker pull spiceai/spiceai:1.6.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is also now available in the AWS Marketplace!

What's Changed

Dependencies

Changelog

  • Support Streaming with Tool Calls (#6941) by @Advayp in #6941
  • Fix parameterized query planning in DataFusion (#6942) by @Jeadie in #6942
  • Update the UnableToLoadCredentials error with a pointer to docs (#6937) by @phillipleblanc in #6937
  • Fix spicecloud benchmark (#6935) by @krinart in #6935
  • [Debezium] Support for VariableScaleDecimal (#6934) by @krinart in #6934
  • Update to DF 48 (#6665) by @mach-kernel and @kczimm in #6665
  • Mark append-stream and CDC datasets as ready after first message (#6914) by @sgrebnov in #6914
  • Model2Vec embedding model support (#6846) by @mach-kernel in #6846
  • Update snapshot for S3 vector search test (#6920) by @Jeadie in #6920
  • remove [] from queryset in spicepod path for CI (#6919) by @Jeadie in #6919
  • Remove verbose tracing (#6915) by @Jeadie in #6915
  • Refactor how models supporting the Responses API are loaded (#6912) by @Advayp in #6912
  • Write tests for truncate formatting in arrow_tools and fix bug. (#6900) by @Jeadie in #6900
  • Support using the Responses API from spice chat (#6894) by @Advayp in #6894
  • Include GPT-5 into Text-To-SQL and Financebench benchmarks (#6907) by @sgrebnov in #6907
  • Better error message when credentials aren't loaded for S3 Vectors (#6910) by @phillipleblanc in #6910
  • Add tracing and system prompt support for the Responses API (#6893) by @Advayp in #6893
  • Constraint violation check is improved to control behavior when violations occur within a batch (#6897) by @phillipleblanc in #6897
  • fix: Multi-column text search with v1/search (#6905) by @peasee in #6905
  • fix: Correctly project text search primary keys to underlying projection (#6904) by @peasee in #6904
  • fix: Update benchmark snapshots (#6901) by @app/github-actions in #6901
  • In S3vector, do not pushdown on non-filterable columns (#6884) by @Jeadie in #6884
  • Run E2E Test CI macOS build on bigger runners (#6896) by @phillipleblanc in #6896
  • Enable configuration of the Responses API for the Azure model provider (#6891) by @Advayp in #6891
  • fix: Update benchmark snapshots (#6888) by @app/github-actions in #6888
  • Update OpenAPI specification for /v1/responses (#6889) by @Advayp in #6889
  • Add test to ensure tools are injected correctly in the Responses API (#6886) by @Advayp in #6886
  • Enable embeddings for append streams (#6878) by @sgrebnov in #6878
  • Show correct limit for EXPLAIN plans in S3VectorsQueryExec (#6852) by @Jeadie in #6852
  • Responses API support for Azure Open AI (#6879) by @Advayp in #6879
  • fix: Update search test case structure (#6865) by @peasee in #6865
  • Fix mongodb benchmark (#6883) by @phillipleblanc in #6883
  • Support multiple column primary keys for S3 vectors. (#6775) by @Jeadie in #6775
  • Kafka Data Connector: persist consumer between restarts (#6870) by @sgrebnov in #6870
  • Fix newlines in errors added in recent PRs (#6877) by @phillipleblanc in #6877
  • Add override parameter to force support for the Responses API (#6871) by @Advayp in #6871
  • Don't use metadata columns in VectorScanTableProvider (#6854) by @Jeadie in #6854
  • Add non-streaming tool call support (hosted and Spice tools) via the Responses API (#6869) by @Advayp in #6869
  • Update error guideline to remove newlines + remove newlines from error messages. (#6866) by @phillipleblanc in #6866
  • Remove void acceleration engine + optional table behaviors (#6868) by @phillipleblanc in #6868
  • Kafka Data Connector basic support (#6856) by @sgrebnov in #6856
  • Federated+Accelerated TPCH Benchmarks for MongoDB (#6788) by @krinart in #6788
  • Pass embeddings calculated in compute_index to the acceleration (#6792) by @phillipleblanc in #6792
  • Add non-streaming and streaming support for OpenAI Responses API endpoint (#6830) by @Advayp in #6830
  • Use latest version of OpenAI crate to resolve issues with Service Tier deserialization (#6853) by @Advayp in #6853
  • Update openapi.json (#6799) by @app/github-actions in #6799
  • Improve management message (#6850) by @lukekim in #6850
  • fix: Include FTS search column if it is the PK (#6836) by @peasee in #6836
  • Refactor Health Checks (#6848) by @Advayp in #6848
  • Introduce a Responses trait and LLM registry for model providers that support the OpenAI Responses API (#6798) by @Advayp in #6798
  • fix: Update datafusion-table-providers to include constraints (#6837) by @peasee in #6837
  • Bump postcard from 1.1.2 to 1.1.3 (#6841) by @app/dependabot in #6841
  • Bump governor from 0.10.0 to 0.10.1 (#6835) by @app/dependabot in #6835
  • Bump ctor from 0.2.9 to 0.5.0 (#6827) by @app/dependabot in #6827
  • Bump azure_core from 0.26.0 to 0.27.0 (#6826) by @app/dependabot in #6826
  • Bump rstest from 0.25.0 to 0.26.1 (#6825) by @app/dependabot in #6825
  • Use latest commit in our fork of async-openai (#6829) by @Advayp in #6829
  • Bump rustls from 0.23.27 to 0.23.31 (#6824) by @app/dependabot in #6824
  • Bump async-trait from 0.1.88 to 0.1.89 (#6823) by @app/dependabot in #6823
  • Bump hyper from 1.6.0 to 1.7.0 (#6814) by @app/dependabot in #6814
  • Bump serde_json from 1.0.140 to 1.0.142 (#6812) by @app/dependabot in #6812
  • Add s3 vector test retrieving vectors (#6786) by @Jeadie in #6786
  • fix: Allow v1/search with only FTS (#6811) by @peasee in #6811
  • Bump tantivy from 0.24.1 to 0.24.2 (#6806) by @app/dependabot in #6806
  • Bump tokio-util from 0.7.15 to 0.7.16 (#6810) by @app/dependabot in #6810
  • fix: Improve FTS index primary key handling (#6809) by @peasee in #6809
  • Bump logos from 0.15.0 to 0.15.1 (#6808) by @app/dependabot in #6808
  • Bump hf-hub from 0.4.2 to 0.4.3 (#6807) by @app/dependabot in #6807
  • Bump odbc-api from 13.0.1 to 13.1.0 (#6803) by @app/dependabot in #6803
  • fix: Spice search CLI with FTS supports string or slice unmarshalling (#6805) by @peasee in #6805
  • Bump uuid from 1.17.0 to 1.18.0 (#6797) by @app/dependabot in #6797
  • Bump reqwest from 0.12.22 to 0.12.23 (#6796) by @app/dependabot in #6796
  • Bump anyhow from 1.0.98 to 1.0.99 (#6795) by @app/dependabot in #6795
  • Bump clap from 4.5.41 to 4.5.45 (#6794) by @app/dependabot in #6794
  • Respect default MAX_DECODING_MESSAGE_SIZE (100MB) in Flight API (#6802) by @sgrebnov in #6802
  • Fix compilation errors caused by upgrading async-openai (#6793) by @Advayp in #6793
  • Remove outdated vector search benchmark (replaced with testoperator) (#6791) by @sgrebnov in #6791
  • Handle errors in vector ingestion pipeline (#6782) by @phillipleblanc in #6782
  • fix: Explicitly error when chunking is defined for vector engines (#6787) by @peasee in #6787
  • Make VectorScanTableProvider and VectorQueryTableProvider support multi-column primary keys (#6757) by @Jeadie in #6757
  • Use megascience/megascience Q+A dataset for text search testing. (#6702) by @Jeadie in #6702
  • Flight REPL autocomplete (#6589) by @krinart in #6589
  • use ref: github.event.pull_request.head.sha in integration_models.yml (#6780) by @Jeadie in #6780
  • fix: Move search telemetry calls in UDTF to scan (#6778) by @peasee in #6778
  • Fix Hugging Face models and embeddings loading in Docker (#6777) by @ewgenius in #6777
  • feat: Migrate bedrock rate limiter (#6773) by @peasee in #6773
  • Run the PR checks on the DEV runners (#6769) by @phillipleblanc in #6769
  • feat: add OpenAI models rate controller (#6767) by @peasee in #6767
  • Implement MongoDB data connector (#6594) by @krinart in #6594
  • fix: Use head ref for concurrency group (#6770) by @peasee in #6770
  • fix: Run enforce pulls with spice on pull_request_target (#6768) by @peasee in #6768
  • feat: Add OpenAI Embeddings Rate Controller (#6764) by @peasee in #6764
  • Move AWS SDK credential bridge integration test to the existing AWS SDK integration test run (#6766) by @phillipleblanc in #6766
  • Use Spice specific errors instead of OpenAIError in embedding module (#6748) by @kczimm in #6748
  • Use context in Glue Catalog Provider (#6763) by @Advayp in #6763
  • pin cargo-deny to previous version (#6762) by @kczimm in #6762
  • Bump actions/download-artifact from 4 to 5 (#6720) by @app/dependabot in #6720
  • Upgrade dependabot dependencies (#6754) by @phillipleblanc in #6754
  • Set E2E Test CI models build to 90 minute timeout (#6756) by @phillipleblanc in #6756
  • chore: upgrade to Rust 1.87.0 (#6614) by @kczimm in #6614
  • feat: Add initial runtime-rate-limiter crate (#6753) by @peasee in #6753
  • feat: Add more embedding traces, add MiniLM MTEB spicepod (#6742) by @peasee in #6742
  • Update QA analytics for release (#6740) by @Advayp in #6740
  • Always use 'returnData: true' for s3 vector query index (#6741) by @Jeadie in #6741
  • feat: Add Embedding and Search anonymous telemetry (#6737) by @peasee in #6737
  • Add 1.5.2 to SECURITY.md (#6739) by @ewgenius in #6739
  • Combine the Iceberg and Object Store AWS SDK bridges into one crate (#6718) by @Advayp in #6718
  • Updates to v1.5.2 release notes (#6736) by @lukekim in #6736
  • Update end game template - move glue catalog to catalogs section (#6732) by @ewgenius in #6732
  • Update v1.5.2.md (#6735) by @kczimm in #6735
  • Add note about S3 Vectors workaround (#6734) by @phillipleblanc in #6734
  • feat: Avoid joining for VectorScanTableProvider if the index is sufficient (#6714) by @peasee in #6714
  • update changelog (#6729) by @kczimm in #6729
  • remove unneeded autogenerated s3 vector code (#6715) by @Jeadie in #6715
  • fix: Set S3 vectors default limit to 30, add more tracing (#6712) by @peasee in #6712
  • docs: Add Hadoop cookbook to endgame template (#6708) by @peasee in #6708
  • Fix testoperator append mode compilation error (#6706) by @phillipleblanc in #6706
  • test: Add VectorScanTableProvider snapshot tests (#6701) by @peasee in #6701
  • feat: Add Hadoop catalog-mode benchmark (#6684) by @peasee in #6684
  • Move shared AWS crates used in bridges to workspace (#6705) by @Advayp in #6705
  • Use installation id to group connections (#6703) by @Advayp in #6703
  • Add Guardrails for AWS bedrock models (#6692) by @Jeadie in #6692
  • Update bedrock keys for CI. (#6693) by @Jeadie in #6693
  • Update acknowledgements (#6690) by @app/github-actions in #6690
  • ROADMAP updates Aug 1, 2025 (#6667) by @lukekim in #6667
  • Add retry logic for OpenAI embeddings creation (#6656) by @sgrebnov in #6656
  • Make models E2E chat test more robust (#6657) by @sgrebnov in #6657
  • Update Search GH Workflow to use Test Operator (#6650) by @sgrebnov in #6650
  • Score and P95 latency calculation for MTEB Quora-based vector search tests in Test Operator (#6640) by @sgrebnov in #6640
  • Fix multiple query error being classified as an internal error (#6635) by @Advayp in #6635
  • Add Support for S3 Table Buckets (#6573) by @krinart in #6573
  • set MISTRALRS_METAL_PRECOMPILE=0 for metal (#6652) by @kczimm in #6652
  • Vector search to push down udtf limit argument into logical sort plan (#6636) by @mach-kernel in #6636
  • docs: Update qa_analytics.csv (#6643) by @peasee in #6643
  • Update SECURITY.md (#6642) by @Jeadie in #6642
  • docs: Update qa_analytics.csv (#6641) by @peasee in #6641
  • Separate token usage (#6619) by @Advayp in #6619
  • Fix typo in release notes (#6634) by @Advayp in #6634
  • Add environment variable for org token (#6633) by @Advayp in #6633
  • CDC: Compute embeddings on ingest (#6612) by @mach-kernel in #6612
  • Add view name to view creation errors (#6611) by @lukekim in #6611
  • Add core logic for running MTEB Quora-based vector search tests in Test Operator (#6607) by @sgrebnov in #6607
  • Revert "Update generate-openapi.yml (#6584)" (#6620) by @Jeadie in #6620
  • Non-accelerated views should report as ready only after all dependent datasets are ready (#6617) by @sgrebnov in #6617

Spice v1.2.1 (May 6, 2025)

· 6 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.2.1! 🔥

Spice v1.2.1 includes several data connector fixes and improves query performance for accelerated views. This release also introduces Databricks Service Principal (M2M OAuth) authentication and expands parameterized queries.

Highlights in v1.2.1

  • Databricks Service Principal Support: Databricks datasets and catalogs now support Machine-to-Machine (M2M) OAuth authentication via Service Principals, enabling secure machine connections to Databricks.

    Example spicepod.yaml:

    datasets:
      - from: databricks:spiceai.datasets.my_awesome_table # A reference to a table in the Databricks unity catalog
        name: my_delta_lake_table
        params:
          mode: delta_lake
          databricks_endpoint: dbc-a1b2345c-d6e7.cloud.databricks.com
          databricks_client_id: ${secrets:DATABRICKS_CLIENT_ID}
          databricks_client_secret: ${secrets:DATABRICKS_CLIENT_SECRET}

    For details, see the documentation.

  • Iceberg Data Connector: Now supports cross-account table access via the AWS Glue Catalog Connector and fixes an issue when querying data from append mode datasets.

  • Iceberg Catalog API: Full compatibility with the Iceberg HTTP REST Catalog API to consume Spice datasets from Iceberg Catalog clients.

    For details, see the documentation.

  • Improved Parameterized Query Support: Expanded type inference for placeholders in the following positions (see the sketch after this list):

    • IN list expressions
    • LIKE patterns
    • SIMILAR TO patterns
    • LIMIT clauses
    • Subqueries
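
    A sketch of queries whose placeholders now receive inferred types (hypothetical table and parameters):

    SELECT * FROM my_table
    WHERE id IN ($1, $2)
      AND name LIKE $3
    LIMIT $4;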

New Contributors 🎉

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

New recipes were added in this release.

The Spice Cookbook now includes 68 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.2.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.2.1 image:

docker pull spiceai/spiceai:1.2.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed

Dependencies

  • No major dependency changes.

Changelog

  • Fix: Specify metric type as a dimension for testoperator by @peasee in #5630
  • Fix: Add option to run dispatch schedule by @peasee in #5631
  • Infer placeholder datatype for InList, Like, and SimilarTo by @kczimm in #5626
  • Add QA analytics for 1.2.0 by @phillipleblanc in #5640
  • Fix: Use SPICED_COMMIT for spiced_commit_sha by @peasee in #5632
  • New crates/tools by @Jeadie in #5121
  • Update openapi.json by @github-actions in #5643
  • Enable metrics reporting for models benchmarks (evals) by @sgrebnov in #5639
  • Implement CatalogBuilder, add app and runtime references to catalog component, add runtime reference to connector params by @ewgenius in #5641
  • Fix eventing bug in LLM progress; Add tool and worker progress by @Jeadie in #5619
  • Handle small precision differences in TPCH answer validation by @phillipleblanc in #5642
  • Add TokenProviderRegistry to the runtime by @ewgenius in #5651
  • Provide ModelContextLayer for evals by @Jeadie in #5648
  • Databricks data_components refactor. Databricks Spark connect - add set_token method and writable spark session by @ewgenius in #5654
  • Extract AWS Glue warehouse for cross-account Iceberg tables by @phillipleblanc in #5656
  • Refactor Dataset component by @phillipleblanc in #5660
  • Fix Iceberg API returning 404 when schema contains a Dictionary by @phillipleblanc in #5665
  • Fix dependencies: downgrade swagger-ui to v8; force zip to 2.3.0 by @kczimm in #5664
  • Add DuckDB indexes spicepod, additional dispatches by @peasee in #5633
  • Update readme: update data federation link by @nuvic in #5673
  • Support metadata columns for object-store based data connectors by @phillipleblanc in #5661
  • Add model name to LLM judges, and add model_graded_scoring task by @Jeadie in #5655
  • Add SF1000 TPCH test spicepods for delta lake by @Sevenannn in #5606
  • Validate Github Connector resource existence before building the github connector graphql table by @Sevenannn in #5674
  • Remove hard-coded embedding performance tests in CI by @Sevenannn in #5675
  • Databricks M2M auth for spark connect data connector by @ewgenius in #5659
  • Enable federated data refresh support for accelerated views by @sgrebnov in #5677
  • Add pods watcher integration test by @Sevenannn in #5681
  • Add m2m support for databricks delta connector by @ewgenius in #5680
  • Update end_game.md by @sgrebnov in #5684
  • Update StaticTokenProvider to use SecretString instead of raw str value by @ewgenius in #5686
  • Add M2M Auth support for Databricks catalog connector by @ewgenius in #5687
  • Update UX to disable acceleration federation by @sgrebnov in #5682
  • Improve placeholder inference (LIMIT & Expr::InSubquery) by @phillipleblanc in #5692
  • Tweak default log to ignore aws_config::imds::region by @phillipleblanc in #5693
  • Make Spice properly Iceberg Catalog API compatible for load table API by @phillipleblanc in #5695
  • Use deterministic queries for Databricks m2m catalog tests by @ewgenius in #5696
  • Support retrieving the latest Iceberg table on table scan by @phillipleblanc in #5704

Full Changelog: v1.2.0...v1.2.1

Spice v1.0.6 (Mar 17, 2025)

· 4 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.0.6 ⚡

Spice v1.0.6 improves stability for DuckDB acceleration, improves the Iceberg Data and Catalog Connectors when using AWS Glue, and fixes an issue with the ready_state: on_registration federation fallback when using DuckDB. In addition, redundant data refreshes on startup are now avoided for accelerations with persistent data.

Highlights in v1.0.6

  • Iceberg Data/Catalog Connector Improvements: Improves Iceberg data & catalog connector reliability, including bug fixes for AWS Glue API rate-limiting and compatibility, REST API pagination support, explicit AWS credential handling, and support for AWS STS role assumption.

  • Fixes On-Registration Fallback when using DuckDB: Previously, when using DuckDB as a data accelerator and the ready_state: on_registration configuration, queries made during the initial data refresh did not properly fallback to the federated source. This is now fixed.

  • DuckDB downgraded for Stability: DuckDB has been downgraded to v1.1.3 due to a regression in memory handling tracked by duckdb/duckdb issue #16640. Once resolved and validated, Spice will re-upgrade to v1.2.x.

  • Expanded Integration Tests: Additional integration tests covering federated accelerator behavior and graceful shutdown processes have been added.

  • Optimized Data Refresh for Persistent Accelerations: Changed behavior in v1.0.6. When using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. This ensures efficient startup behavior by avoiding unnecessary refreshes. This logic applies only to full refreshes when no refresh interval is specified.

To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always

Contributors

  • @peasee
  • @phillipleblanc
  • @sgrebnov
  • @lukekim
  • @Sevenannn

Breaking Changes

Starting from v1.0.6 when using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always

Cookbook Updates

No new recipes.

Upgrading

To upgrade to v1.0.6, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.6 image:

docker pull spiceai/spiceai:1.0.6

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed

Dependencies

Changelog

  • Implement proper ready_state: on_registration for federation enabled accelerators by @phillipleblanc in #5019
  • Add indexes and primary keys mismatch detection for DuckDB Acceleration by @sgrebnov in #5045
  • Add comprehensive integration tests for the ready_state behavior by @phillipleblanc in #5042
  • Add test Spicepod for acceleration with constraints by @sgrebnov in #4891
  • Add test Spicepod for DuckDB append acceleration with constraints by @sgrebnov in #4898
  • Add DuckDB graceful shutdown test to E2E CI tests by @sgrebnov in #5047
  • Update duckdb_append_with_pk_and_indexes.yaml (work for duckdb 1.1.x) by @sgrebnov in #5067
  • fix: Downgrade to DuckDB 1.1.3 by @peasee in #5055
  • fix: Acceleration federation integration test by @peasee in #5070
  • Improvements to Iceberg Catalog/Data Connector by @phillipleblanc in #5071
  • Add Results-Cache-Status to indicate query result came from cache by @phillipleblanc in #4809
  • fix: Spice.ai schema inference by @peasee in #4674
  • Add refresh_on_startup Spicepod configuration param by @phillipleblanc and @sgrebnov in #5086
  • Test restart behavior of DuckDB file acceleration against glue iceberg table by @Sevenannn in #5075
  • Run Iceberg Data Connector - DuckDB File mode integration test by @Sevenannn in #5069
  • Integration test for glue iceberg catalog by @Sevenannn in #5077

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.0.5...v1.0.6