
5 posts tagged with "acceleration"

Topics related to acceleration techniques and tools

Spice v1.9.0-rc.1 (Nov 4, 2025)

· 16 min read
William Croxson
Senior Software Engineer at Spice AI

This is the first release candidate for v1.9.0, which introduces Cayenne, a new high-performance data accelerator built on the Vortex columnar format that delivers DuckDB-comparable performance without scaling limitations. This release also upgrades to DataFusion v50 for improved query performance, expands search capabilities with full-text search on views and multi-column embeddings, includes significant DynamoDB and DuckDB accelerator improvements, and delivers security and reliability enhancements.

What's New in v1.9.0-rc.1

Cayenne Data Accelerator (Alpha)

Introducing Cayenne, SQL as an Acceleration Format: a new high-performance data accelerator that simplifies multi-file data acceleration by using an embedded database (SQLite) for metadata while storing data in the Vortex columnar format. Cayenne delivers query and ingestion performance comparable to or better than DuckDB's file-based acceleration, without DuckDB's memory overhead or the scaling challenges of single DuckDB files.

Cayenne uses SQLite to manage acceleration metadata (schemas, snapshots, statistics, and file tracking) through simple SQL transactions, while storing the actual data in Vortex's compressed columnar format.

Key Features:

  • SQLite + Vortex Architecture: All metadata is stored in SQLite tables with standard SQL transactions, while data lives in Vortex's compressed, chunked columnar format designed for zero-copy access and efficient scanning.
  • Simplified Operations: No complex file hierarchies, no JSON/Avro metadata files, no separate catalog servers, just SQL tables and Vortex data files. The entire metadata schema is intentionally simple for maximum reliability.
  • Fast Metadata Access: A single SQL query retrieves all metadata needed for query planning, with no multiple round trips to storage, no S3 throttling, and no reconstruction of metadata state from scattered files.
  • Efficient Small Changes: Dramatically reduces small file proliferation. Snapshots are just rows in SQLite tables, not new files on disk. Supports millions of snapshots without performance degradation.
  • High Concurrency: Changes consist of two steps: stage Vortex files (if any), then run a single SQL transaction. Much faster conflict resolution and support for many more concurrent updates than file-based formats.
  • Advanced Data Lifecycle: Full ACID transactions, delete support, and retention SQL execution on refresh commit.

Example Spicepod.yml configuration:

datasets:
  - from: s3:my_table
    name: accelerated_data
    acceleration:
      enabled: true
      engine: cayenne
      retention:
        sql: DELETE FROM accelerated_data WHERE created_at < NOW() - INTERVAL '30 days'

Note: the Cayenne Data Accelerator is in Alpha and has known limitations.

For more details, refer to the Cayenne Documentation, the Vortex project, and the DuckLake announcement that partly inspired this design.

DataFusion v50 Upgrade

Spice.ai is built on the DataFusion query engine. The v50 release brings significant performance improvements and enhanced reliability:

Performance Improvements 🚀:

  • Dynamic Filter Pushdown: Enhanced dynamic filter pushdown for custom ExecutionPlans, ensuring filters propagate correctly through all physical operators for improved query performance.
  • Partition Pruning: Expanded partition pruning support ensures that unnecessary partitions are skipped when filters are not used, reducing data scanning overhead and improving query execution times.

Bug Fixes & Reliability: Resolved issues with partition name validation and empty execution plans when vector index lists are empty. Fixed timestamp support for partition expressions, enabling better partitioning for time-series data.

See the Apache DataFusion 50.0.0 Release for more details.

DynamoDB Data Connector Improvements

Improved Query Performance: The DynamoDB Data Connector now includes improved filter handling for edge cases, parallel scan support for faster data ingestion, and better error handling for misconfigured queries. These improvements enable more reliable and performant access to DynamoDB data.

Example Spicepod.yml configuration:

datasets:
  - from: dynamodb:my_table
    name: ddb_data
    params:
      scan_segments: 10 # Default `auto`, which calculates optimal segments based on number of rows

Search & Embeddings Enhancements

Full-Text Search on Views: Full-text search indexes are now supported on views, enabling advanced search scenarios over pre-aggregated or transformed data. This extends the power of Spice's search capabilities beyond base datasets.

Multi-Column Embeddings on Views: Views now support embedding columns, enabling vector search and semantic retrieval on view data. This is useful for search over aggregated or joined datasets.

Vector Engines on Views: Vector search engines are now available for views, enabling similarity search over complex queries and transformations.

Example Spicepod.yml configuration:

views:
  - name: aggregated_reviews
    sql: SELECT review_id, review_text FROM reviews WHERE rating > 4
    embeddings:
      - column: review_text
        model: openai:text-embedding-3-small
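
The configuration above enables embeddings on a view. A full-text search index on a view can be declared in a similar way; the sketch below is illustrative only and assumes the view-level configuration mirrors the dataset-level full_text_search column settings (the columns and row_id field names are assumptions here), so consult the Spice Search documentation for the authoritative schema.

views:
  - name: aggregated_reviews
    sql: SELECT review_id, review_text FROM reviews WHERE rating > 4
    columns:
      - name: review_text
        full_text_search:
          enabled: true # assumed field: enables the full-text index on this column
          row_id:
            - review_id # assumed field: unique key used to identify rows in search results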

DuckDB Accelerator Improvements

Parquet Buffering for Partitioned Writes: DuckDB partitioned writes in table mode now support Parquet buffering, reducing memory usage and improving write performance for large datasets.

Retention SQL on Refresh Commit: DuckDB accelerations now support running retention SQL on refresh commit, enabling automatic data cleanup and lifecycle management during refresh operations.

UTC Timezone for DuckDB: DuckDB now uses UTC as the default timezone, ensuring consistent behavior for time-based queries across different environments.

Example Spicepod.yml configuration:

datasets:
  - from: s3://my_bucket/large_table/
    name: partitioned_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      retention:
        sql: DELETE FROM partitioned_data WHERE event_time < NOW() - INTERVAL '7 days'

Query Performance Optimizations

Optimized Prepared Statements: Prepared statement handling has been optimized for better performance with parameterized queries, reducing planning overhead and improving execution time for repeated queries.

Large RecordBatch Chunking: Large Arrow RecordBatch objects are now automatically chunked to control memory usage during query execution, preventing memory exhaustion for queries returning large result sets.

Security & Reliability Improvements

Enhanced HTTP Client Security: HTTP client usage across the runtime has been hardened with improved TLS validation, certificate pinning for critical endpoints, and better error handling for network failures.

ODBC Connector Improvements: Removed unwrap calls from the ODBC connector, improving error handling and reliability. Fixed secret handling and Kubernetes secret integration.

CLI Permissions Hardening: Tightened file permissions for the CLI and install script, ensuring secure defaults for configuration files and credentials.

Oracle Instant Client Pinning: Oracle Instant Client downloads are now pinned to specific SHAs, ensuring reproducible builds and preventing supply chain attacks.

Observability & Tracing

DataFusion Log Emission: The Spice runtime now emits DataFusion internal logs, providing deeper visibility into query planning and execution for debugging and performance analysis.

AI Completions Tracing: Fixed tracing so that ai_completions operations are correctly parented under sql_query traces, improving observability for AI-powered queries.

Git Data Connector (Alpha)

Version-Controlled Data Access: The new Git Data Connector (Alpha) enables querying datasets stored in Git repositories. This connector is ideal for use cases involving configuration files, documentation, or any data tracked in version control.

Example Spicepod.yml configuration:

datasets:
  - from: git:https://github.com/myorg/myrepo
    name: git_metrics
    params:
      file_format: csv

For more details, refer to the Git Data Connector Documentation.

Additional Improvements & Bug Fixes

  • Reliability: Fixed refresh worker panics with recovery handling to prevent runtime crashes during acceleration refreshes.
  • Reliability: Improved error messages for missing or invalid spicepod.yaml files, providing actionable feedback for misconfiguration.
  • Reliability: Fixed DuckDB metadata pointer loading issues for snapshots.
  • Performance: Ensured ListingTable partitions are pruned correctly when filters are not used.
  • Reliability: Fixed vector dimension determination for partitioned indexes.
  • Search: Fixed casing issues in Reciprocal Rank Fusion (RRF) for hybrid search queries.
  • Search: Fixed search field handling as metadata for chunked search indexes.
  • Validation: Added timestamp support for partition expressions.
  • Validation: Fixed regexp_match function for DuckDB datasets.
  • Validation: Fixed partition name validation for improved reliability.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 81 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.9.0-rc.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.9.0-rc.1 image:

docker pull spiceai/spiceai:1.9.0-rc.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Changelog

Spice v1.8.3 (Oct 27, 2025)

· 5 min read
David Stancu
Principal Software Engineer at Spice AI

Announcing the release of Spice v1.8.3! ⚡

Spice v1.8.3 is a patch release focused on performance, reliability, and observability. This release delivers optimizations for DuckDB acceleration, parameterized queries, and query plans. A new opt-in dedicated thread pool for queries is now in preview.

What's New in v1.8.3

DuckDB Data Accelerator Improvements

  • Connection Pool Sizing: The DuckDB accelerator now supports a configurable connection_pool_size parameter, providing fine-grained control over concurrent query execution. This enables tuning for high-concurrency workloads and improves resource utilization.

Example Spicepod.yaml snippet:

datasets:
  - from: postgres:my_table
    name: my_table
    acceleration:
      enabled: true
      engine: duckdb
      params:
        connection_pool_size: 10

  • Automatic Statistics Recomputation: The new on_refresh_recompute_statistics parameter, on by default, triggers automatic ANALYZE execution after refreshes. This keeps DuckDB optimizer statistics up-to-date, ensuring efficient query plans and optimal performance.

Example Spicepod.yaml snippet:

datasets:
  - from: postgres:my_table
    name: my_table
    acceleration:
      enabled: true
      engine: duckdb
      params:
        on_refresh_recompute_statistics: disabled # default: enabled

Task History SQL Query Plan Capture & Configuration

Spice now supports automated capture and storage of SQL query plans (via EXPLAIN or EXPLAIN ANALYZE) in the task history, enabling deeper analysis and debugging of query execution. The feature is configurable, with control over which queries are included based on duration thresholds and plan type.

  • New Configuration Options:
    • task_history.captured_plan: Controls which plan is captured (none, explain, or explain analyze). Default none.
    • task_history.min_sql_duration: Minimum query duration before a plan is captured.
    • task_history.min_plan_duration: Minimum plan execution duration before a plan is captured.

Example spicepod.yaml snippet:

runtime:
  task_history:
    captured_plan: explain analyze
    min_sql_duration: 5s
    min_plan_duration: 10s

Query plans are captured asynchronously to avoid blocking query execution. The result of the plan is stored in the standard sql_query output in the task history.

Learn more in the Task History Documentation.

Query Performance Optimizations

  • Optimized Prepared Statements (Parameterized Queries): Prepared statement caching for parameterized SQL queries has been improved, reducing planning overhead for repeated queries with different parameters. This results in faster execution and lower latency for workloads that reuse query structures.

  • Limit Pushdown via BytesProcessedExec: Introduces the BytesProcessedExec physical operator, enabling limit pushdown for large datasets. This optimization reduces the amount of data processed and improves top-k query performance.

Dedicated Query Thread Pool (Opt-In)

Spice now supports running query execution and accelerated refreshes on a dedicated thread pool, separate from the HTTP server. This prevents heavy query workloads from slowing down API responses, keeping health and readiness checks fast. Opt-In for v1.8.3: This feature is opt-in for this release and will become enabled by default (opt-out) in v1.9.

Example Spicepod.yaml snippet:

runtime:
  params:
    dedicated_thread_pool: sql_engine # Default: disabled

Validation & Reliability Improvements

  • Selective Evaluation Scorer Loading: Evaluation scorers are now loaded only when evaluation is explicitly defined, reducing unnecessary initialization and improving startup performance.

  • Improved Error Reporting: Enhanced error messages for misconfigured full-text search (FTS) on datasets and views, providing actionable feedback for configuration issues.

REPL & Usability

  • Execution Time Display: The Spice REPL now displays query execution time even when queries return no results, improving user feedback and diagnostics.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 81 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.8.3, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.8.3 image:

docker pull spiceai/spiceai:1.8.3

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Changelog

Spice v1.2.2 (May 13, 2025)

· 5 min read
Jack Eadie
Token Plumber at Spice AI

Announcing the release of Spice v1.2.2! 🌟

Spice v1.2.2 introduces support for Databricks Mosaic AI model serving and embeddings, alongside the existing Databricks catalog and dataset integrations. It adds configurable service ports in the Helm chart and resolves several bugs to improve stability and performance.

Highlights in v1.2.2

  • Databricks Model & Embedding Provider: Spice integrates with Databricks Model Serving for models and embeddings, enabling secure access via machine-to-machine (M2M) OAuth authentication with service principal credentials. The runtime automatically refreshes tokens using databricks_client_id and databricks_client_secret, ensuring uninterrupted operation. This feature supports Databricks-hosted large language models and embedding models.

    models:
      - from: databricks:databricks-llama-4-maverick
        name: llama-4-maverick
        params:
          databricks_endpoint: dbc-46470731-42e5.cloud.databricks.com
          databricks_client_id: ${secrets:DATABRICKS_CLIENT_ID}
          databricks_client_secret: ${secrets:DATABRICKS_CLIENT_SECRET}

    embeddings:
      - from: databricks:databricks-gte-large-en
        name: gte-large-en
        params:
          databricks_endpoint: dbc-42424242-4242.cloud.databricks.com
          databricks_client_id: ${secrets:DATABRICKS_CLIENT_ID}
          databricks_client_secret: ${secrets:DATABRICKS_CLIENT_SECRET}

    For detailed setup instructions, refer to the Databricks Model Provider documentation.

  • Configurable Helm Chart Service Ports: The Helm chart now supports custom service ports, enabling flexible network configurations for deployments. Specify non-default ports in your Helm values file.

  • Resolved Issues:

    • MCP Nested Tool Calling: Fixed a bug preventing nested tool invocation when Spice operates as the MCP server federating to MCP clients.

    • Dataset Load Concurrency: Corrected a failure to respect the dataset_load_parallelism setting during dataset loading (see the configuration sketch after this list).

    • Acceleration Hot-Reload: Addressed an issue where changes to acceleration enable/disable settings were not detected during hot reload of Spicepod.yaml.
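
For the dataset load concurrency fix above, a minimal runtime configuration sketch is shown below. The dataset_load_parallelism name comes from this release's changelog entries; its exact placement under runtime is an assumption here, so verify it against the runtime configuration documentation.

runtime:
  dataset_load_parallelism: 4 # assumed placement; caps how many datasets load/refresh concurrently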

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

Updated cookbooks:

The Spice Cookbook now includes 68 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.2.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.2.2 image:

docker pull spiceai/spiceai:1.2.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed

Dependencies

  • No major dependency changes.

Changelog

- Update spark-connect-rs to override user agent string by @ewgenius in https://github.com/spiceai/spice/pull/5798
- Merge pull request by @ewgenius in https://github.com/spiceai/spice/pull/5796
- Pass the default user agent string to the Databricks Spark, Delta, and Unity clients by @ewgenius in https://github.com/spiceai/spice/pull/5717
- bump to 1.2.2 by @Jeadie in https://github.com/spiceai/spice/pull/none
- Helm chart: support for service ports overrides by @sgrebnov in https://github.com/spiceai/spice/pull/5774
- Update spice cli login command with client-id and client-secret flags for Databricks by @ewgenius in https://github.com/spiceai/spice/pull/5788
- Fix bug where setting Cache-Control: no-cache doesn't compute the cache key by @phillipleblanc in https://github.com/spiceai/spice/pull/5779
- Update to datafusion-contrib/datafusion-table-providers#336 by @phillipleblanc in https://github.com/spiceai/spice/pull/5778
- Lru cache: limit single cached record size to u32::MAX (4GB) by @sgrebnov in https://github.com/spiceai/spice/pull/5772
- Fix LLMs calling nested MCP tools by @Jeadie in https://github.com/spiceai/spice/pull/5771
- MySQL: Set the character_set_results/character_set_client/character_set_connection session variables on connection setup by @Sevenannn in https://github.com/spiceai/spice/pull/5770
- Control the parallelism of acceleration refresh datasets with runtime.dataset_load_parallelism by @phillipleblanc in https://github.com/spiceai/spice/pull/5763
- Fix Iceberg predicates not matching the Arrow type of columns read from parquet files by @phillipleblanc in https://github.com/spiceai/spice/pull/5761
- fix: Use decimal_cmp for numerical BETWEEN in SQLite by @peasee in https://github.com/spiceai/spice/pull/5760
- Support product name override in databricks user agent string by @ewgenius in https://github.com/spiceai/spice/pull/5749
- Databricks U2M Token Provider support by @ewgenius in https://github.com/spiceai/spice/pull/5747
- Remove HTTP auth from LLM config and simplify Databricks models logic by using static headers by @Jeadie in https://github.com/spiceai/spice/pull/5742
- clear plan cache when dataset updates by @kczimm in https://github.com/spiceai/spice/pull/5741
- Support Databricks M2M auth in LLMs + Embeddings by @Jeadie in https://github.com/spiceai/spice/pull/5720
- Retrieve Github App tokens in background; make TokenProvider not async by @Jeadie in https://github.com/spiceai/spice/pull/5718
- Make 'token_providers' crate by @Jeadie in https://github.com/spiceai/spice/pull/5716
- Databricks AI: Embedding models & LLM streaming by @Jeadie in https://github.com/spiceai/spice/pull/5715

See the full list of changes at: v1.2.1...v1.2.2

Spice v1.0.6 (Mar 17, 2025)

· 4 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.0.6 ⚡

Spice v1.0.6 improves DuckDB acceleration stability, improves Iceberg Data/Catalog connector reliability when using AWS Glue, and fixes an issue with the ready_state: on_registration federation fallback when using DuckDB. In addition, redundant data refreshes on startup are now avoided for accelerations with persistent data.

Highlights in v1.0.6

  • Iceberg Data/Catalog Connector Improvements: Improves Iceberg data & catalog connector reliability, including bug fixes for AWS Glue API rate-limiting and compatibility, REST API pagination support, explicit AWS credential handling, and support for AWS STS role assumption.

  • Fixes On-Registration Fallback when using DuckDB: Previously, when using DuckDB as a data accelerator with the ready_state: on_registration configuration, queries made during the initial data refresh did not properly fall back to the federated source. This is now fixed (a configuration sketch follows below).

  • DuckDB downgraded for Stability: DuckDB has been downgraded to v1.1.3 due to a regression in memory handling tracked by duckdb/duckdb issue #16640. Once resolved and validated, Spice will re-upgrade to v1.2.x.

  • Expanded Integration Tests: Additional integration tests covering federated accelerator behavior and graceful shutdown processes have been added.

  • Optimized Data Refresh for Persistent Accelerations: Changed behavior in v1.0.6. When using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. This ensures efficient startup behavior by avoiding unnecessary refreshes. This logic applies only to full refreshes when no refresh interval is specified.

To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always
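
For the on-registration fallback fix above, the sketch below shows the relevant configuration: a DuckDB file-mode acceleration combined with ready_state: on_registration, so queries fall back to the federated source until the initial refresh completes. The dataset source and name are illustrative.

datasets:
  - from: postgres:my_table
    name: my_table
    ready_state: on_registration # serve queries from the federated source until accelerated data is ready
    acceleration:
      enabled: true
      engine: duckdb
      mode: file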

Contributors

  • @peasee
  • @phillipleblanc
  • @sgrebnov
  • @lukekim
  • @Sevenannn

Breaking Changes

Starting from v1.0.6, when using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always

Cookbook Updates

No new recipes.

Upgrading

To upgrade to v1.0.6, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.6 image:

docker pull spiceai/spiceai:1.0.6

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed

Dependencies

Changelog

  • Implement proper ready_state: on_registration for federation enabled accelerators by @phillipleblanc in #5019
  • Add indexes and primary keys mismatch detection for DuckDB Acceleration by @sgrebnov in #5045
  • Add comprehensive integration tests for the ready_state behavior by @phillipleblanc in #5042
  • Add test Spicepod for acceleration with constraints by @sgrebnov in #4891
  • Add test Spicepod for DuckDB append acceleration with constraints by @sgrebnov in #4898
  • Add DuckDB graceful shutdown test to E2E CI tests by @sgrebnov in #5047
  • Update duckdb_append_with_pk_and_indexes.yaml (work for duckdb 1.1.x) by @sgrebnov in #5067
  • fix: Downgrade to DuckDB 1.1.3 by @peasee in #5055
  • fix: Acceleration federation integration test by @peasee in #5070
  • Improvements to Iceberg Catalog/Data Connector by @phillipleblanc in #5071
  • Add Results-Cache-Status to indicate query result came from cache by @phillipleblanc in #4809
  • fix: Spice.ai schema inference by @peasee in #4674
  • Add refresh_on_startup Spicepod configuration param by @phillipleblanc and @sgrebnov in #5086
  • Test restart behavior of DuckDB file acceleration against glue iceberg table by @Sevenannn in #5075
  • Run Iceberg Data Connector - DuckDB File mode integration test by @Sevenannn in #5069
  • Integration test for glue iceberg catalog by @Sevenannn in #5077

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.0.5...v1.0.6

Spice v0.11.1-alpha (April 22, 2024)

· 3 min read
Luke Kim
Founder and CEO of Spice AI

The v0.11.1-alpha release introduces retention policies for accelerated datasets, native Windows installation support, and integration of catalog and schema settings for the Databricks Spark connector. Several bugs have also been fixed for improved stability.

Highlights

  • Retention Policies for Accelerated Datasets: Automatic eviction of data from accelerated time-series datasets when a specified temporal column exceeds the retention period, optimizing resource utilization (see the example configuration after this list).

  • Windows Installation Support: Native Windows installation support, including upgrades.

  • Databricks Spark Connect Catalog and Schema Settings: Improved translation between DataFusion and Spark, providing better Spark Catalog support.
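
For the retention policies highlight above, a minimal Spicepod sketch is shown below. The parameter names (time_column, retention_check_enabled, retention_period, retention_check_interval) reflect the retention settings for accelerated datasets; the dataset source and values are illustrative, so verify them against the data retention documentation for this release.

datasets:
  - from: spice.ai/eth.recent_blocks
    name: eth_recent_blocks
    time_column: timestamp # temporal column evaluated against the retention period
    acceleration:
      enabled: true
      refresh_check_interval: 30s
      refresh_mode: full
      retention_check_enabled: true
      retention_period: 90m # evict rows older than 90 minutes
      retention_check_interval: 15m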

Contributors

  • @phillipleblanc
  • @Jeadie
  • @ewgenius
  • @sgrebnov
  • @y-f-u
  • @lukekim
  • @digadeesh
  • @Sevenannn
  • @gloomweaver

New in this release

What's Changed

Full Changelog: https://github.com/spiceai/spiceai/compare/v0.11.0-alpha...v0.11.1-alpha

Resources

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.