31 posts tagged with "data-connector"

Data connector tools and integrations

Spice v1.11.4 (Mar 12, 2026)

· 5 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.4! ⚡

Spice v1.11.4 is a patch release improving S3 metadata column query robustness and enabling on_zero_results: use_source for accelerated views.

What's New in v1.11.4

Accelerated Views: on_zero_results: use_source Support

Accelerated views now support the on_zero_results: use_source configuration (#9699). Previously, accelerated views only supported on_zero_results: return_empty, which returned an empty result set when the accelerated data contained no matching rows. With this change, views can fall back to querying the source data when the accelerated query returns zero results, matching the behavior already available for accelerated datasets.

Example configuration:

views:
  - name: sales_summary
    sql: |
      SELECT region, SUM(amount) as total
      FROM sales
      GROUP BY region
    acceleration:
      enabled: true
      on_zero_results: use_source

How the Fallback Works

When an accelerated view is configured with on_zero_results: use_source, the following happens at query time:

  1. The accelerated store is queried first. The query runs against the view's accelerated data (e.g., Spice Cayenne, Arrow, DuckDB, or SQLite).

  2. If the accelerated query returns zero rows, the runtime falls back to re-executing the view's SQL query against the datasets it references.

  3. Referenced datasets are queried according to their own configuration. The view's SQL is re-executed against each referenced dataset as it is configured. This means:

    • If a referenced dataset is accelerated, the query hits that dataset's accelerated store — not the raw data source.
    • If a referenced dataset is accelerated with on_zero_results: use_source and its accelerated store also returns zero rows, it will independently fall back to its own federated data source (e.g., Postgres, S3, etc.).
    • If a referenced dataset is federated (not accelerated), the query goes directly to the data source.

This means the fallback can chain through multiple layers: first the view's acceleration, then each referenced dataset's acceleration, and finally the original data source — each layer independently applying its own on_zero_results behavior.

Example: Multi-layer fallback

datasets:
  - from: postgres:orders
    name: orders
    acceleration:
      enabled: true
      refresh_sql: "SELECT * FROM orders WHERE created_at > now() - interval '7 days'"
      on_zero_results: use_source # Falls back to Postgres if accelerated data has no matches

views:
  - name: recent_orders_summary
    sql: |
      SELECT status, COUNT(*) as order_count
      FROM orders
      GROUP BY status
    acceleration:
      enabled: true
      on_zero_results: use_source # Falls back to re-running the SQL against referenced datasets

In this example, a query like SELECT * FROM recent_orders_summary WHERE status = 'cancelled' follows this path:

  1. Queries recent_orders_summary in the view's accelerated store (DuckDB/SQLite).
  2. If zero rows are returned, re-executes SELECT status, COUNT(*) ... FROM orders GROUP BY status against the orders dataset.
  3. Since orders is accelerated, the query hits the orders accelerated store.
  4. If orders also returns zero rows (e.g., the refresh_sql excluded cancelled orders), it falls back to querying Postgres directly.

S3 Data Connector: More Robust Metadata Column Handling

Improved the robustness of metadata column (location, last_modified, size) handling for S3 datasets. Building on the v1.11.3 release, this update addresses an additional edge case where the query optimizer's projection swap could cause an index out of bounds panic when metadata columns are referenced in projections with filters or scalar functions.
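For illustration, the class of query this fix hardens looks like the following (logs is a hypothetical S3 dataset; location, last_modified, and size are the metadata columns named above):

-- Metadata column projected through a scalar function, combined with a filter
SELECT lower(location) AS file, size
FROM logs
WHERE last_modified > '2026-01-01'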

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No new cookbook recipes.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.4, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.4 image:

docker pull spiceai/spiceai:1.11.4

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.4

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • fix(s3): Make metadata column handling more robust by @sgrebnov in #9714
  • feat(views): Enable on_zero_results: use_source for accelerated views by @krinart in #9699

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.11.3...v1.11.4

Spice v1.11.3 (Mar 9, 2026)

· 3 min read
Phillip LeBlanc
Co-Founder and CTO of Spice AI

Announcing the release of Spice v1.11.3! 🛠️

Spice v1.11.3 is a patch release fixing schema consistency issues in the S3 and FlightSQL data connectors, improving CDC cache invalidation, and enhancing the HTTP data connector's error handling and response metadata.

What's New in v1.11.3

S3 Data Connector Fix

Fixed an issue where queries using metadata columns (location, last_modified, size) on S3 datasets produced Input field name does not match with the projection expression errors (#9647). This occurred when projecting metadata columns with filters or scalar functions (e.g., SELECT lower(location) FROM table WHERE location = '...'), and when projection returned no matching files.

FlightSQL Schema Consistency

Fixed an issue where the Flight SQL JDBC driver returned Unsupported ArrowType Utf8View errors when performing ::TEXT type casts (#9253). The FlightSQL endpoint now maps view types (e.g., Utf8View, BinaryView) to their non-view equivalents, ensuring compatibility with JDBC and ODBC clients.
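For example, a cast of this shape (table and column names are illustrative) previously failed through the JDBC driver and now returns Utf8 instead of Utf8View:

SELECT order_id::TEXT FROM orders LIMIT 10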

CDC Cache Invalidation

Fixed an issue where the SQL results cache was invalidated on every change stream poll, even when zero records were returned (#9472). This caused near-total cache miss rates for datasets using refresh_mode: changes (e.g., DynamoDB Streams), effectively rendering the cache useless. Cache invalidation now only occurs when a change batch contains actual data changes.
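A dataset using change-based refresh looks roughly like this (a minimal sketch; the dynamodb: from path and names are placeholders):

datasets:
  - from: dynamodb:events
    name: events
    acceleration:
      enabled: true
      refresh_mode: changes # Cache is invalidated only when a batch contains actual data changes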

HTTP Data Connector Improvements

  • HTTP error responses (e.g., 5xx) are now excluded from the cache, preventing transient server errors from polluting cached results.
  • Added a response_headers column (Map type) to HTTP responses, providing access to response header metadata in query results.
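The new column can be projected like any other (sketch; http_api is a placeholder dataset name):

SELECT response_headers FROM http_api LIMIT 5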

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No new cookbook recipes.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.3, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.3 image:

docker pull spiceai/spiceai:1.11.3

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.3

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • fix(s3): Fix metadata column schema mismatches in projected queries by @sgrebnov in #9664
  • s3_metadata_columns tests: include test for location outside table prefix by @sgrebnov in #9676
  • Fix Flight SQL schema consistency: expand view types and verify field names by @sgrebnov in #9438
  • Improve CDC cache invalidation by @krinart in #9651
  • Skip caching http error response + add response_headers by @krinart in #9670

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.11.2...v1.11.3

Spice v2.0-rc.1 (Mar 4, 2026)

· 23 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v2.0-rc.1! 🚀

v2.0.0-rc.1 is the first release candidate for early testing of v2.0.

Highlights in this release candidate include:

  • Active-Active Highly-Available Distributed Query that is object-store-native and built on Apache Ballista, with dynamic cluster sizing, distributed ingestion, and cluster observability
  • Spice Cayenne RC with staged append writes, file-based retention deletes, composite partitioning, and distributed ingestion
  • DataFusion v52.2.0 Upgrade with sort pushdown, a new merge join, and dynamic filters
  • DDL Support for CREATE TABLE and DROP TABLE via SQL for Iceberg and Cayenne catalogs
  • DuckLake Catalog & Data Connector for lakehouse-style data management
  • GCS Data Connector (Alpha) for Google Cloud Storage
  • Rust CLI Rewrite for a unified single-binary experience
  • Dependency upgrades including DuckDB v1.4.4, delta_kernel v0.18.2, and mistral.rs

Spice v2.0 includes several breaking changes. Review the breaking changes section before upgrading.

Distribution Changes

AI/ML support including local LLM/ML model and hosted LLM inference is now included in the default Spice build and image. The separate models build variant has been removed.

With models now included by default, the data-only distribution (without AI/ML support) is only published in nightly builds. Official production-ready data-only distributions are available exclusively through Spice Cloud and the Enterprise release.

A new Network Attached Storage (NAS) distribution with built-in SMB and NFS data connector support is also now available in nightly builds and with Spice.ai Enterprise.

Distribution / Variant | Availability
Default | Standard releases
Data | Nightly builds only (Open Source); available via Spice Cloud and Enterprise
NAS (SMB + NFS) | Nightly builds only (Open Source); available with Spice.ai Enterprise
Metal (macOS) | Standard releases
CUDA (Linux) | Nightly builds only
Allocator variants | Nightly builds only
ODBC connector | Local build only

For more details, see the Distributions documentation.

What's New in v2.0.0-rc.1

Active-Active HA Distributed Query

Distributed Query exits Beta, now supporting active-active, highly-available, object-store-based execution.

Distributed query supports two execution modes:

  • Synchronous: Queries for accelerated datasets are distributed across executors and results are streamed back in real-time. Non-accelerated datasets execute only on the scheduler. Best for interactive queries where low latency is critical.
  • Asynchronous: Queries are submitted via the new HTTP-only /v1/queries API and results are materialized to object storage for later retrieval. Best for long-running analytical workloads, batch processing, and non-accelerated datasets in distributed mode.
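For example, an asynchronous query might be submitted like this (a hedged sketch: only the /v1/queries path comes from these notes; the request body shape and default HTTP port 8090 are assumptions):

# Submit a long-running query for asynchronous execution
curl -X POST http://localhost:8090/v1/queries \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT COUNT(*) FROM my_large_table"}'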

Key improvements:

  • Dynamic Cluster Sizing: The query planner automatically adjusts parallelism based on the number of active executors in the cluster, ensuring optimal resource utilization as nodes are added or removed.
  • Distributed Ingestion: Data ingestion for partitioned accelerated tables is now distributed across executor nodes, enabling higher throughput and parallel data loading in cluster mode. Regular (non-partitioned) accelerated tables do not distribute ingestion loads.
  • Synchronous Execution on Scheduler: /v1/sql and FlightSQL queries now execute synchronously on the scheduler when appropriate, reducing inter-node overhead for queries that don't benefit from distribution.
  • Faster Failure Detection: Executor heartbeat timeout reduced from 180s to 30s, enabling the cluster to quickly detect and respond to executor failures.
  • Cluster Observability: New metrics and Grafana dashboard for monitoring distributed query clusters.

Spice Cayenne Improvements

The Spice Cayenne data accelerator exits Beta with significant reliability and performance improvements:

  • Staged Append Writes: WAL-based staged append writes prevent partial writes and data loss on stream errors. Batches are written to a WAL file before being committed, ensuring atomicity.
  • File-Based Retention Deletes: Time-based retention now supports file-level deletes for both position-based and primary-key tables, reducing I/O overhead compared to row-level deletion.
  • Multiple Partition Expressions: Support for composite partitioning with partition_by: [col1, col2] using hierarchical path-like keys (e.g., 2025/10/15). See the sketch after this list.
  • Distributed Ingestion: Cayenne catalog now supports distributed ingestion across executor nodes in cluster mode, including UPDATE operations.
  • Improved Robustness: Fixed CDC edge case where DELETE + UPSERT sequences could produce duplicate primary keys across protected snapshots. Improved upsert handling during runtime restarts.
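As referenced above, a composite partition key might be configured like this (a minimal sketch; the dataset and column names are placeholders):

datasets:
  - from: s3://my-bucket/events/
    name: events
    acceleration:
      enabled: true
      engine: cayenne
      partition_by: [year, month, day] # Stored under hierarchical keys such as 2025/10/15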

DataFusion v52.2.0 Upgrade

Apache DataFusion has been upgraded to v52.2.0, bringing significant performance improvements, new query features, and enhanced extensibility.

Performance Improvements:

  • Faster CASE Expressions: Lookup-table-based evaluation for certain CASE expressions avoids repeated evaluation, accelerating common ETL patterns
  • MIN/MAX Aggregate Dynamic Filters: Queries with MIN/MAX aggregates now create dynamic filters during scan to prune files and rows as tighter bounds are discovered during execution
  • New Merge Join: Rewritten sort-merge join (SMJ) operator with speedups of three orders of magnitude in pathological cases (e.g., TPC-H Q21: minutes → milliseconds)
  • Caching Improvements: New statistics cache for file metadata avoids repeatedly recalculating statistics, significantly improving planning time. A prefix-aware list-files cache accelerates evaluating partition predicates for Hive partitioned tables
  • Improved Hash Join Filter Pushdown: Build-side hash map contents are now passed dynamically to probe-side scans for pruning files, row groups, and individual rows

Major Features:

  • Sort Pushdown to Scans: Sorts are pushed into data sources, enabling ~30x performance improvement on pre-sorted data with top-K queries. Parquet scans now reverse row group order for DESC queries on ASC-sorted files
  • TableProvider supports DELETE and UPDATE: New hooks for DELETE and UPDATE statements in the TableProvider trait, enabling Iceberg and Cayenne connectors to implement SQL DELETE and UPDATE operations
  • More Extensible SQL Planning: New RelationPlanner API for extending SQL planning for FROM clauses, enabling support for vendor-specific SQL dialects

DDL Support for Iceberg and Cayenne

SQL Schema Management: Spice now supports CREATE TABLE and DROP TABLE DDL operations for Iceberg and Cayenne catalogs via FlightSQL and the /v1/sql API. DML validation has been updated for catalog-level writability.
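For example (a sketch; the catalog, schema, and column names are placeholders, and the catalog must be configured with access: read_write_create as described under the Spicepod v2 changes below):

-- Create a table in a writable Iceberg or Cayenne catalog
CREATE TABLE ice.analytics.events (id BIGINT, created_at TIMESTAMP)

-- Drop it again
DROP TABLE ice.analytics.events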

DuckLake Catalog & Data Connector

Lakehouse-Style Data Management: New DuckLake catalog and data connector enable lakehouse-style data management with DuckDB as the metadata catalog and object storage for data files. DuckLake provides ACID transactions, time travel, and schema evolution on top of Parquet files.
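A catalog entry might look roughly like this (a hypothetical sketch: the ducklake: scheme and any connection parameters are assumptions; see the DuckLake connector documentation for the actual options):

catalogs:
  - from: ducklake:my_lake
    name: lake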

GCS Data Connector (Alpha)

Google Cloud Storage Support: New Google Cloud Storage data connector enables federated queries against data stored in GCS buckets, with Iceberg table support.

Rust CLI Rewrite

Unified Single-Binary Experience: The Spice CLI has been completely rewritten from Go to Rust, eliminating the Go dependency and providing a single spice binary built from the same codebase as spiced. This improves startup performance, reduces distribution size, and ensures consistent behavior between CLI and runtime.

Key Features:

  • Full Feature Parity: All 27+ CLI commands re-implemented in Rust with identical behavior
  • New spice query Command: Interactive REPL for async queries via the /v1/queries API with multi-line SQL input, spinner progress indicator, Ctrl+C cancellation, and partial query ID matching
  • --output=json Flag: Machine-readable JSON output for CLI commands, enabling scripting and automation
  • spice login --output: New output modes (env, json, keychain) for flexible credential management
  • spice cloud metrics: New command for Spice Cloud deployment metrics

Models Included by Default

Local LLM/ML model inference (via mistral.rs) is now included in the default Spice build. The separate models build variant has been removed. This simplifies installation and ensures all users have access to local AI inference capabilities.

Error Propagation for Dataset and Model Status APIs

The /v1/datasets and /v1/models APIs now return structured error information when a component is in an Error state. The ?status=true query parameter must be passed to retrieve the real-time component status, including the error state and details. Previously, the status field only indicated Error with no further detail. Now, two new fields are included when ?status=true is specified:

  • error: A structured object with category, type, and code fields for programmatic error handling (e.g. { "category": "dataset", "type": "auth", "code": "dataset.auth" }).
  • error_message: A human-readable description of why the component entered an error state.

These fields are only present when ?status=true is passed and the component is in an error state.

Example /v1/datasets?status=true response:

[
  {
    "from": "postgres:syncs",
    "name": "daily_journal",
    "replication_enabled": false,
    "acceleration_enabled": true,
    "status": "Ready"
  },
  {
    "from": "databricks:hive_metastore.default.messages",
    "name": "messages",
    "replication_enabled": false,
    "acceleration_enabled": true,
    "status": "Error",
    "error": {
      "category": "dataset",
      "type": "auth",
      "code": "dataset.auth"
    },
    "error_message": "Unable to authenticate with datasource credentials"
  }
]

The spice datasets and spice models CLI commands now include an ERROR column that displays the error message for any component in an error state.

Additional Dependency Upgrades

Dependency | Version
Ballista | v52.0.0
DuckDB | v1.4.4
delta_kernel | v0.18.2
mistral.rs | v0.7.0 (candle fork removed; now uses candle 0.9.2 from crates.io)
Turso (libsql) | v0.4.4
Vortex | Upgraded with CASE-WHEN support
AWS SDK | Multiple crates updated + APN user-agent support

Other Improvements

  • Spicepod v2 Support: Spicepods now support version v2, and spice init generates spicepod.yaml files with version: v2 by default while maintaining backward compatibility for existing v1 spicepods.
  • x.ai Models: x.ai models now exclusively use the /v1/responses endpoint with rate limiting support.
  • HuggingFace Chat Templates: Added support for chat templates in HuggingFace model configurations.
  • Databricks SQL Dialect: Added Databricks SQL dialect for DataFusion unparser, improving federation query generation.
  • Snowflake: Added snowflake_private_key parameter for key-pair authentication.
  • Acceleration Metrics: New rows_written, bytes_written, and dataset_acceleration_size_bytes metrics for acceleration refresh ingestion.
  • Refresh SQL UDFs: Core scalar UDFs are now enabled in refresh SQL expressions.
  • FlightSQL: Fixed TLS connection handling for grpc+tls:// endpoints with custom CA certificate support.
  • FlightSQL: Fixed schema consistency by expanding view types and verifying field names.
  • Hash Index: Fixed query correctness when hash index is used with additional filters.
  • Results Cache: Fixed schema preservation for empty query results.
  • Query Nullability: Reconciled execution stream nullability with logical plan schema.
  • Schema Evolution: Graceful handling of schema evolution mismatch errors during data refresh.
  • Internal YAML Parser: Replaced deprecated serde_yaml with an internal YAML implementation.

Spicepod v1 to v2 Changes

Spicepod v2 introduces configuration improvements while maintaining backward compatibility with v1. Existing v1 spicepods continue to work — deprecated fields are automatically migrated at load time.

Version support:

Version | Status
v2 | Default. Used by spice init.
v1 | Supported. Deprecated fields auto-migrate.
v1beta1 | Removed. No longer accepted.

Configuration changes:

v1 (deprecated) | v2 (preferred) | Notes
runtime.results_cache | runtime.caching.sql_results | All fields migrate automatically; cache_max_size becomes max_size.
runtime.memory_limit | runtime.query.memory_limit | Auto-migrated. query.memory_limit takes priority if both set.
runtime.temp_directory | runtime.query.temp_directory | Auto-migrated. query.temp_directory takes priority if both set.
dataset.invalid_type_action | dataset.unsupported_type_action | Auto-migrated. v2 adds a new string variant.

New v2 fields:

  • runtime.ready_state — Controls when the runtime reports ready (on_load default, or on_registration).
  • runtime.flight.do_put_rate_limit_enabled — Enable/disable FlightSQL DoPut rate limiting (default: true).
  • runtime.query.spill_compression — Compression for query spill files (e.g., lz4_frame).
  • runtime.scheduler.partition_management — Configure partition assignment interval, limits, and timeouts for distributed mode.
  • runtime.caching.sql_results.stale_while_revalidate_ttl — Serve stale cached results while revalidating in the background.
  • runtime.caching.sql_results.encoding — Cache entry compression (e.g., zstd).
  • catalog.access: read_write_create — New access mode for catalogs that support DDL operations.

Migration note: When both the deprecated v1 field and its v2 equivalent are set, the v2 field takes priority.
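As a concrete sketch of the first mapping (the field paths come from the table above; the value is illustrative):

# v1 (deprecated)
runtime:
  results_cache:
    cache_max_size: 128mb

# v2 (preferred) equivalent
runtime:
  caching:
    sql_results:
      max_size: 128mb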

Contributors

Breaking Changes

  • Cayenne and Distributed Query exit Beta: Beta warnings have been removed from documentation and code. Both features are now considered GA-ready.
  • Models included by default: The separate models build variant has been removed. Local LLM inference is now always included.
  • Spicepod version defaults to v2: New spicepods created with spice init now default to version: v2. Existing v1 spicepods remain supported, and v1beta1 is no longer accepted.
  • Windows native builds removed: Native Windows builds are no longer provided. Use WSL for local development instead.
  • Metric renames: accelerated_refresh metrics renamed to acceleration_refresh for consistency. last_refresh_time gauge renamed to include milliseconds unit.
  • Caching config renamed: ResultsCache replaced with SQLResultsCacheConfig in configuration.
  • DuckDB parameter rename: partitioned_write_flush_threshold renamed to partitioned_write_flush_threshold_rows.
  • v1/search API: The /v1/search API now always returns an array in matches, even for single results.
  • x.ai model endpoint: x.ai models now exclusively use the /v1/responses endpoint.
  • Error messages: Error messages across S3 Vectors, ScyllaDB, Snowflake, ClickHouse, and other components have been refactored for clarity and consistency.

Cookbook Updates

New and updated Spice Cookbook recipes:

  • Async Queries: Submit long-running queries asynchronously and retrieve results later.
  • DuckLake Catalog Connector: Use DuckLake for lakehouse-style data management with ACID transactions and time travel.

The Spice Cookbook includes 88 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v2.0.0-rc.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:2.0.0-rc.1 image:

docker pull spiceai/spiceai:2.0.0-rc.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 2.0.0-rc.1

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • Add TPC-DS integration tests with S3 source and PostgreSQL acceleration by @phillipleblanc in #9006
  • fix(tests): fix flaky/slow/failing unit tests by @phillipleblanc in #9009
  • fix: Update benchmark snapshots for DF51 upgrade by @app/github-actions in #9008
  • fix: add feature gate to rrf TEST_EMBEDDING_MODEL by @phillipleblanc in #9017
  • fix: features check by @phillipleblanc in #9014
  • fix: Enable Cayenne acceleration snapshots by @lukekim in #9020
  • URL table support by @lukekim in #9018
  • ScyllaDB key filter by @lukekim in #8997
  • fix: Schema mismatch when using column projection with HTTP caching by @phillipleblanc in #9021
  • Add more tests for HTTP caching with columns selection by @sgrebnov in #9025
  • HTTP cache snapshots: default to time_interval and fix snapshots_creation_policy: on_change by @sgrebnov in #9026
  • Fix duplicate snapshot creation on startup by @sgrebnov in #9029
  • Add ScyllaDB and SMB to the README table by @krinart in #9034
  • Remove waiting for runtime to be ready before creating snapshot by @krinart in #9033
  • Fix snapshot on_change policy to skip when no writes occurred by @sgrebnov in #9028
  • Release notes for release release/1.11.0-rc.2 by @krinart in #9016
  • ci: use arduino/setup-protoc for official protobuf compiler by @phillipleblanc in #9036
  • ci: install unzip on aarch64 runner for arduino/setup-protoc by @phillipleblanc in #9038
  • fix: don't fail release if upload to minio fails by @phillipleblanc in #9039
  • Add missing protoc step to setup-cc action by @krinart in #9041
  • fix: Update Search integration test snapshots by @app/github-actions in #9013
  • Fix formula_1 and codebase_community in bird-bench by @Jeadie in #9000
  • Cayenne S3 Express One Zone improvements by @lukekim in #9015
  • Add zlib1g-dev to CI by @lukekim in #9052
  • Improve validation and logging for hash indexes by @lukekim in #9047
  • Upgrade Vortex with CASE-WHEN by @lukekim in #9051
  • x.ai models now exclusively use /v1/responses endpoint by @lukekim in #9400
  • Improvements for snapshot schema comparison by @krinart in #9401
  • v2.0 breaking changes by @lukekim in #9233
  • Create PartitionManagementTask for scheduler to update accelerated table partition assignments by @Jeadie in #9378
  • refactor(Cayenne): route all write orchestration through CayenneDataSink by @sgrebnov in #9402
  • Refactor benchmark to use QueryExecutor trait by @Jeadie in #9418
  • feat: Add spidapter build and release workflow by @peasee in #9427
  • Testoperator: add support for api-key when connecting to external spice instance by @sgrebnov in #9421
  • Initial implementation of Ducklake catalog & data connectors by @lukekim in #9083
  • Require aws_lc_rs since jsonwebtoken upgrade by @Jeadie in #9426
  • feat: Add spidapter tool by @peasee in #9425
  • Add release notes for 1.11.2 patch release by @sgrebnov in #9430
  • feat(spidapter): integrate system-adapter-protocol with SCP provisioning by @phillipleblanc in #9434
  • Add DuckLake TPCH E2E workflow and federated Spicepod configuration by @lukekim in #9431
  • fix(spidapter): use Flight handshake auth instead of x-api-key header by @phillipleblanc in #9435
  • [spidapter] Keep only what sparks joy by @Jeadie in #9439
  • Refactor binary operator balancing by @Jeadie in #9424
  • feat: Add Iceberg DDL support (CREATE TABLE / DROP TABLE) for default catalog override by @phillipleblanc in #9440
  • Fix Flight SQL schema consistency: expand view types and verify field names by @sgrebnov in #9438
  • Update spidapter for new system-adapter-protocol by @sgrebnov in #9442
  • docs: fix typos and syntax errors in style guide and error handling docs by @cluster2600 in #9445
  • Add acceleration refresh ingestion metrics (rows_written, bytes_written) by @phillipleblanc in #9461
  • Refactor(Cayenne): Replace CatalogError and string based errors with Snafu errors by @sgrebnov in #9403
  • Replace deprecated claude-3-5-haiku-latest with claude-haiku-4-5 by @Jeadie in #9492
  • Fix #9481: Preserve schema in results cache for empty query results by @phillipleblanc in #9485
  • Fix partition by serializing by @Jeadie in #9474
  • query: reconcile execution stream nullability with logical plan schema by @phillipleblanc in #9486
  • initial spice-cloud-client crate and spice cloud metrics --app <app-name>. by @Jeadie in #9480
  • feat: Return dataset error message in datasets API by @peasee in #9487
  • Spicebench by @lukekim in #9447
  • build(deps): consolidate dependabot dependency updates by @phillipleblanc in #9504
  • fix(cluster): route non-partitioned accelerated tables in distributed mode by @phillipleblanc in #9508
  • Enable core scalar UDFs in refresh SQL by @sgrebnov in #9502
  • Fix metrics in Spidapter again by @Jeadie in #9497
  • fix(cluster): tolerate Completed->status propagation race in distributed query handle by @phillipleblanc in #9510
  • feat: Support distributed ingestion in cayenne catalog by @peasee in #9506
  • Fix Cayenne duplicate primary keys after DELETE + UPSERT CDC sequences by @krinart in #9494
  • fix(cluster): rewrite table scans inside subqueries for distributed execution by @phillipleblanc in #9518
  • fix: Set catalog mode to readwritecreate in spidapter by @peasee in #9519
  • Upgrade AWS SDK crates & set APN user-agent in AWS SDK credential bridge by @lukekim in #8328
  • feat(runtime): add runtime ready_state on_registration semantics by @lukekim in #9522
  • fix: Add spidapter post-setup retries by @peasee in #9526
  • Make partition discovery more robust and make initialization non-blocking by @sgrebnov in #9499
  • Make lint-rust-fix support targeted packages and features by @Jeadie in #9511
  • Handle new Cloud SCP API by @Jeadie in #9532
  • Refactor and simplify streaming benchmarks by @krinart in #9405
  • fix: ensure spidapter only increments attempts on failures by @peasee in #9534
  • feat: Support specifying app resources in spidapter by @peasee in #9536
  • test(runtime): Spice Cayenne DDL integration test by @lukekim in #9535
  • fix: Handle schema evolution mismatch errors during data refresh by @lukekim in #9527
  • fix: resolve clippy lint warnings by @phillipleblanc in #9547
  • pr-builds --tag <TAG> for build_and_release.yml by @Jeadie in #9507
  • Add --output flag to spice login with env/json/keychain modes by @Jeadie in #9541
  • Don't use 'PartitionedTableScanRewrite' in async distributed query by @Jeadie in #9548
  • feat(spidapter): add local backend mode with single executor by @phillipleblanc in #9531
  • support chat template in HF by @Jeadie in #9543
  • fix(cayenne): stream PK retention deletes and run OOM regression in CI by @phillipleblanc in #9533
  • cayenne: Staged append writes to prevent partial writes and data loss on stream error by @sgrebnov in #9491
  • AcceleratedTable::scan use FederatedTable::scan when ClusterRole::Scheduler by @Jeadie in #9550
  • Upgrade to delta-kernel-rs v0.18.2 by @lukekim in #9528
  • Run cayenne tests as part of PR CI by @sgrebnov in #9554
  • Upgrade to DataFusion v52.2.0 by @lukekim in #9419
  • Remove Snapshot Compaction + Add snapshot existence check by @krinart in #9523
  • Update dependencies by @lukekim in #9566
  • fix: Update benchmark snapshots by @app/github-actions in #9565
  • fix: Compare Cayenne table configuration on startup by @peasee in #9529
  • Make Refresh::refresh_sql more robust to alterations over time. by @Jeadie in #9549
  • fix: Update datafusion-table-providers dependency to latest revision by @lukekim in #9574
  • Unset AWS_ENDPOINT_URL when empty by @krinart in #9575
  • fix: allow BytesProcessedExec repartitioning for unordered input by @lukekim in #9540
  • Sanitize DataFusion errors by @lukekim in #9530
  • Add conditional logging for partition assignments by @Jeadie in #9577
  • use 'properly early exit on SIGTERM' by @Jeadie in #9573
  • Update datafusion to 52.2.0 by @phillipleblanc in #9582
  • Ensure we query one and only one partition per request by @Jeadie in #9416
  • feat: Add support for Spicepod version v2 by @lukekim in #9583
  • [SpiceDQ] Improve error messages; Avoid race condition on allocate_initial_partitions. by @Jeadie in #9579
  • Update ballista dependencies to latest 52.0.0 revision by @lukekim in #9581
  • Fix Databricks spark_connect mode always disabled by @phillipleblanc in #9586
  • Support partitioning in Arrow accelerator by @Jeadie in #9571
  • Fix spice query CLI response deserialization by @phillipleblanc in #9588
  • fix: Update benchmark snapshots by @app/github-actions in #9584
  • fix: Share RuntimeEnv across Cayenne read/write/delete paths for targeted list_files_cache invalidation by @sgrebnov in #9589
  • feat: Add file:// state_location support for async queries scheduler by @phillipleblanc in #9590
  • Update endgame links by @krinart in #9598

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.11.2...v2.0.0-rc.1

Spice v1.11.1 (Feb 10, 2026)

· 4 min read
Jack Eadie
Token Plumber at Spice AI

Announcing the release of Spice v1.11.1! 🛠️

v1.11.1 is a patch release improving Spice Cayenne accelerator reliability and performance, enhancing DynamoDB Streams and HTTP data connectors, and fixing issues in Federated Task History and FlightSQL.

What's New in v1.11.1

Spice Cayenne Accelerator Improvements

This release includes stability and performance fixes for the Spice Cayenne accelerator:

  • Row-based Deletion Logic: Refactored row-based delete operations to use per-file deletion vectors with RoaringBitmap. Deletion scans now use Vortex-native streaming with filter pushdown and project only row indices, achieving zero data I/O for delete operations.
  • Constraints & On Conflict: constraints and on_conflict configurations are now automatically inferred from federated table metadata, enabling datasets like DynamoDB to work without explicitly defining primary_key in the Spicepod.
  • Partitioned Table Deletion: Fixed an issue where DELETE operations on partitioned Cayenne tables failed.
  • Data Integrity: Fixed two issues with acceleration snapshot handling: protected snapshots are now included in conflict detection keyset scans (preventing duplicate key creation during append refresh), and snapshot cleanup no longer deletes protected snapshots.

Data Connector Improvements

  • DynamoDB Streams: Added automatic re-bootstrapping when the stream lag exceeds DynamoDB shard retention (24h). Configurable via the new lag_exceeds_shard_retention_behavior parameter with values error (default), ready_before_load, or ready_after_load.
  • HTTP Connector: HTTP responses now include a response_status column (UInt16). 4xx responses (e.g., 404 Not Found) are treated as valid queryable data and cached normally. 5xx responses are retried with backoff, returned to the user, but excluded from the cache to prevent transient server errors from polluting cached results.
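The new status column makes failed responses directly queryable (sketch; http_api is a placeholder dataset name):

SELECT * FROM http_api WHERE response_status >= 400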

Other Improvements

  • Reliability: Added retries for SnapshotManager operations and general snapshot reliability improvements.
  • Reliability: Fixed handling of timestamp precision mismatches in query result caching.
  • Reliability: Fixed a double projection issue in federated task history queries that caused Schema error: project index out of bounds errors in cluster mode.
  • Developer Experience: Added cookie middleware support to the FlightSQL data connector.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates. The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.1 image:

docker pull spiceai/spiceai:1.11.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.1

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • Cayenne: row-based delete logic improvements by @sgrebnov in #9237
  • Proper support for constraints/on_conflict in Cayenne Acceleration by @krinart in #9335
  • Retries for SnapshotManager by @krinart in #9334
  • fix(cayenne): Include protected snapshots in conflict detection keyset scan by @sgrebnov in #9176
  • fix(cayenne): Fix data loss by preserving protected snapshots during cleanup by @sgrebnov in #9182
  • Simplify retention filter expressions before pushdown by @sgrebnov in #9244
  • Fix test_retention_complex_sql by @sgrebnov in #9270
  • runtime: avoid double projection in federated task history by @phillipleblanc in #9326
  • feat(http): Return all HTTP responses as data, skip caching 5xx by @sgrebnov in #9313
  • Snapshots Improvements by @krinart in #9318
  • fix(caching): Handle timestamp precision mismatch and add more tests by @sgrebnov in #9315
  • DynamoDB Streams Table Rebootstrapping by @krinart in #9305
  • Fix Cayenne partitioned table deletion support by @sgrebnov in #9267
  • FlightSQL: add cookie middleware support by @phillipleblanc in #9282
  • Apply SchemaCastScanExec before applying changes in process_upsert_batch by @krinart in #9297

Spice v1.11.0 (Jan 28, 2026)

· 58 min read
William Croxson
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-stable! ⚡

In Spice v1.11.0, Spice Cayenne reaches Beta status with acceleration snapshots, key-based deletion vectors, and Amazon S3 Express One Zone support. DataFusion has been upgraded to v51, along with Arrow v57.2 and iceberg-rust v0.8.0. v1.11 adds several DynamoDB & DynamoDB Streams improvements, such as JSON nesting, and significant improvements to Distributed Query with active-active schedulers and mTLS for enterprise-grade high availability and secure cluster communication.

This release also adds new SMB, NFS, and ScyllaDB Data Connectors (Alpha), Prepared Statements with full SDK support (gospice, spice-rs, spice-dotnet, spice-java, spice.js, and spicepy), Google LLM Support for expanded AI inference capabilities, and significant improvements to caching, observability, and Hash Indexing for Arrow Acceleration.

What's New in v1.11.0

Spice Cayenne Accelerator Reaches Beta

Spice Cayenne has been promoted to Beta status with acceleration snapshots support and numerous performance and stability improvements.

Key Enhancements:

  • Key-based Deletion Vectors: Improved deletion vector support using key-based lookups for more efficient data management and faster delete operations. Key-based deletion vectors are more memory-efficient than positional vectors for sparse deletions.
  • S3 Express One Zone Support: Store Cayenne data files in S3 Express One Zone for single-digit millisecond latency, ideal for latency-sensitive query workloads that require persistence.

Improved Reliability:

  • Resolved FuturesUnordered reentrant drop crashes
  • Fixed memory growth issues related to Vortex metrics allocation
  • Metadata catalog now properly respects cayenne_file_path location
  • Added warnings for unparseable configuration values

For more details, refer to the Cayenne Documentation.

DataFusion v51 Upgrade

Apache DataFusion has been upgraded to v51, bringing significant performance improvements, new SQL features, and enhanced observability.

DataFusion v51 ClickBench Performance

Performance Improvements:

  • Faster CASE Expression Evaluation: Expressions now short-circuit earlier, reuse partial results, and avoid unnecessary scattering, speeding up common ETL patterns
  • Better Defaults for Remote Parquet Reads: DataFusion now fetches the last 512KB of Parquet files by default, typically avoiding 2 I/O requests per file
  • Faster Parquet Metadata Parsing: Leverages Arrow 57's new thrift metadata parser for up to 4x faster metadata parsing

New SQL Features:

  • SQL Pipe Operators: Support for |> syntax for inline transforms
  • DESCRIBE <query>: Returns the schema of any query without executing it
  • Named Arguments in SQL Functions: PostgreSQL-style param => value syntax for scalar, aggregate, and window functions
  • Decimal32/Decimal64 Support: New Arrow types supported including aggregations like SUM, AVG, and MIN/MAX

Example pipe operator:

SELECT * FROM t
|> WHERE a > 10
|> ORDER BY b
|> LIMIT 5;
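DESCRIBE works in a similar inline fashion (t and its columns are placeholders):

-- Returns the schema of the query without executing it
DESCRIBE SELECT a, b FROM t;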

Improved Observability:

  • Improved EXPLAIN ANALYZE Metrics: New metrics including output_bytes, selectivity for filters, reduction_factor for aggregates, and detailed timing breakdowns
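For example (t is a placeholder table):

-- The annotated plan now reports output_bytes, filter selectivity,
-- aggregate reduction_factor, and timing breakdowns per operator
EXPLAIN ANALYZE SELECT COUNT(*) FROM t WHERE a > 10;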

Arrow 57.2 Upgrade

Apache Arrow has been upgraded to v57.2, bringing major performance improvements and new capabilities.

Arrow 57 Parquet Metadata Parsing Performance

Key Features:

  • 4x Faster Parquet Metadata Parsing: A rewritten thrift metadata parser delivers up to 4x faster metadata parsing, especially beneficial for low-latency use cases and files with large amounts of metadata
  • Parquet Variant Support: Experimental support for reading and writing the new Parquet Variant type for semi-structured data, including shredded variant values
  • Parquet Geometry Support: Read and write support for Parquet Geometry types (GEOMETRY and GEOGRAPHY) with GeospatialStatistics
  • New arrow-avro Crate: Efficient conversion between Apache Avro and Arrow RecordBatches with projection pushdown and vectorized execution support

DynamoDB Connector Enhancements

  • Added JSON nesting for DynamoDB Streams
  • Improved batch deletion handling

Distributed Query Improvements

High Availability Clusters: Spice now supports running multiple active schedulers in an active/active configuration for production deployments. This eliminates the scheduler as a single point of failure and enables graceful handling of node failures.

  • Multiple schedulers run simultaneously, each capable of accepting queries
  • Schedulers coordinate via a shared S3-compatible object store
  • Executors discover all schedulers automatically
  • A load balancer distributes client queries across schedulers

Example HA configuration:

runtime:
  scheduler:
    state_location: s3://my-bucket/spice-cluster
    params:
      region: us-east-1

mTLS Verification: Cluster communication between scheduler and executors now supports mutual TLS verification for enhanced security.

Credential Propagation: S3, ABFS, and GCS credentials are now automatically propagated to executors in cluster mode, enabling access to cloud storage across the distributed query cluster.

Improved Resilience:

  • Exponential backoff for scheduler disconnection recovery
  • Increased gRPC message size limit from 16MB to 100MB for large query plans
  • HTTP health endpoint for cluster executors
  • Automatic executor role inference when --scheduler-address is provided

For more details, refer to the Distributed Query Documentation.

iceberg-rust v0.8.0 Upgrade

Spice has been upgraded to iceberg-rust v0.8.0, bringing improved Iceberg table support.

Key Features:

  • V3 Metadata Support: Full support for Iceberg V3 table metadata format
  • INSERT INTO Partitioned Tables: DataFusion integration now supports inserting data into partitioned Iceberg tables
  • Improved Delete File Handling: Better support for position and equality delete files, including shared delete file loading and caching
  • SQL Catalog Updates: Implement update_table and register_table for SQL catalog
  • S3 Tables Catalog: Implement update_table for S3 Tables catalog
  • Enhanced Arrow Integration: Convert Arrow schema to Iceberg schema with auto-assigned field IDs, _file column support, and Date32 type support

Acceleration Snapshots

Acceleration snapshots enable point-in-time recovery and data versioning for accelerated datasets. Snapshots capture the state of accelerated data at specific points, allowing for fast bootstrap recovery and rollback capabilities.

Key Features:

  • Flexible Triggers: Configure when snapshots are created based on time intervals or stream batch counts
  • Automatic Compaction: Reduce storage overhead by compacting older snapshots (DuckDB only)
  • Bootstrap Integration: Snapshots can reset cache expiry on load for seamless recovery (DuckDB with Caching refresh mode)
  • Smart Creation Policies: Only create snapshots when data has actually changed

Example configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      snapshots: enabled
      snapshots_trigger: time_interval
      snapshots_trigger_threshold: 1h
      snapshots_creation_policy: on_changed

Snapshots API and CLI: New API endpoints and CLI commands for managing snapshots programmatically.

CLI Commands:

# List all snapshots for a dataset
spice acceleration snapshots taxi_trips

# Get details of a specific snapshot
spice acceleration snapshot taxi_trips 3

# Set the current snapshot for rollback (requires runtime restart)
spice acceleration set-snapshot taxi_trips 2

HTTP API Endpoints:

Method | Endpoint | Description
GET | /v1/datasets/{dataset}/acceleration/snapshots | List all snapshots for a dataset
GET | /v1/datasets/{dataset}/acceleration/snapshots/{id} | Get details of a specific snapshot
POST | /v1/datasets/{dataset}/acceleration/snapshots/current | Set the current snapshot for rollback

For more details, refer to the Acceleration Snapshots Documentation.

Caching Acceleration Mode Improvements

The Caching Acceleration Mode introduced in v1.10.0 has received significant performance optimizations and reliability fixes in this release.

Performance Optimizations:

  • Non-blocking Cache Writes: Cache misses no longer block query responses. Data is written to the cache asynchronously after the query returns, reducing query latency for cache miss scenarios.
  • Batch Cache Writes: Multiple cache entries are now written in batches rather than individually, significantly improving write throughput for high-volume cache operations.

Reliability Fixes:

  • Correct SWR Refresh Behavior: The stale-while-revalidate (SWR) pattern now correctly refreshes only the specific entries that were accessed instead of refreshing all stale rows in the dataset. This prevents unnecessary source queries and reduces load on upstream data sources.
  • Deduplicated Refresh Requests: Fixed an issue where JSON array responses could trigger multiple redundant refresh operations. Refresh requests are now properly deduplicated.
  • Fixed Cache Hit Detection: Resolved an issue where queries that didn't include fetched_at in their projection would always result in cache misses, even when cached data was available.
  • Unfiltered Query Optimization: SELECT * queries without filters now return cached data directly without unnecessary filtering overhead.

For more details, refer to the Caching Acceleration Mode Documentation.

Prepared Statements

Improved Query Performance and Security: Spice now supports prepared statements, enabling parameterized queries that improve both performance through query plan caching and security by preventing SQL injection attacks.

Key Features:

  • Query Plan Caching: Prepared statements cache query plans, reducing planning overhead for repeated queries
  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Arrow Flight SQL Support: Full prepared statement support via Arrow Flight SQL protocol

SDK Support:

SDK | Support | Min Version | Method
gospice (Go) | ✅ Full | v8.0.0+ | SqlWithParams() with typed constructors (Int32Param, StringParam, TimestampParam, etc.)
spice-rs (Rust) | ✅ Full | v3.0.0+ | query_with_params() with RecordBatch parameters
spice-dotnet (.NET) | ✅ Full | v0.3.0+ | QueryWithParams() with typed parameter builders
spice-java (Java) | ✅ Full | v0.5.0+ | queryWithParams() with typed Param constructors (Param.int64(), Param.string(), etc.)
spice.js (JavaScript) | ✅ Full | v3.1.0+ | query() with parameterized query support
spicepy (Python) | ✅ Full | v3.1.0+ | query() with parameterized query support

Example (Go):

import "github.com/spiceai/gospice/v8"

client, _ := spice.NewClient()
defer client.Close()

// Parameterized query with typed parameters
results, _ := client.SqlWithParams(ctx,
"SELECT * FROM products WHERE price > $1 AND category = $2",
spice.Float64Param(10.0),
spice.StringParam("electronics"),
)

Example (Java):

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader inferred = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        10.0, "electronics");

    // With explicit typed parameters
    ArrowReader typed = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        Param.float64(10.0),
        Param.string("electronics"));
}

For more details, refer to the Parameterized Queries Documentation.

Spice Java SDK v0.5.0

Parameterized Query Support for Java: The Spice Java SDK v0.5.0 introduces parameterized queries using ADBC (Arrow Database Connectivity), providing a safer and more efficient way to execute queries with dynamic parameters.

Key Features:

  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Automatic Type Inference: Java types are automatically mapped to Arrow types (e.g., double → Float64, String → Utf8)
  • Explicit Type Control: Use the new Param class with typed factory methods (Param.int64(), Param.string(), Param.decimal128(), etc.) for precise control over Arrow types
  • Updated Dependencies: Apache Arrow Flight SQL upgraded to 18.3.0, plus new ADBC driver support

Example:

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;
import java.math.BigDecimal;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader inferred = client.queryWithParams(
        "SELECT * FROM taxi_trips WHERE trip_distance > $1 LIMIT 10",
        5.0);

    // With explicit typed parameters for precise control
    ArrowReader typed = client.queryWithParams(
        "SELECT * FROM orders WHERE order_id = $1 AND amount >= $2",
        Param.int64(12345),
        Param.decimal128(new BigDecimal("99.99"), 10, 2));
}

Maven:

<dependency>
  <groupId>ai.spice</groupId>
  <artifactId>spiceai</artifactId>
  <version>0.5.0</version>
</dependency>

For more details, refer to the Spice Java SDK Repository.

Google LLM Support

Expanded AI Provider Support: Spice now supports Google embedding and chat models via the Google AI provider, expanding the available LLM options for AI inference workloads alongside existing providers like OpenAI, Anthropic, and AWS Bedrock.

Key Features:

  • Google Chat Models: Access Google's Gemini models for chat completions
  • Google Embeddings: Generate embeddings using Google's text embedding models
  • Unified API: Use the same OpenAI-compatible API endpoints for all LLM providers

Example spicepod.yaml configuration:

models:
  - from: google:gemini-2.0-flash
    name: gemini
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

embeddings:
  - from: google:text-embedding-004
    name: google_embeddings
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

For more details, refer to the Google LLM Documentation (see docs PR #1286).

URL Tables

Query data sources directly via URL in SQL without prior dataset registration. Supports S3, Azure Blob Storage, and HTTP/HTTPS URLs with automatic format detection and partition inference.

Supported Patterns:

  • Single files: SELECT * FROM 's3://bucket/data.parquet'
  • Directories/prefixes: SELECT * FROM 's3://bucket/data/'
  • Glob patterns: SELECT * FROM 's3://bucket/year=*/month=*/data.parquet'

Key Features:

  • Automatic file format detection (Parquet, CSV, JSON, etc.)
  • Hive-style partition inference with filter pushdown
  • Schema inference from files
  • Works with both SQL and DataFrame APIs

Example with hive partitioning:

-- Partitions are automatically inferred from paths
SELECT * FROM 's3://bucket/data/' WHERE year = '2024' AND month = '01'

Enable via spicepod.yml:

runtime:
  params:
    url_tables: enabled

Cluster Mode Async Query APIs (experimental)

New asynchronous query APIs for long-running queries in cluster mode:

  • /v1/queries endpoint: Submit queries and retrieve results asynchronously

OpenTelemetry Improvements

Unified Telemetry Endpoint: OTel metrics ingestion has been consolidated to the Flight port (50051), simplifying deployment by removing the separate OTel port (50052). The push-based metrics exporter continues to support integration with OpenTelemetry collectors.

Note: This is a breaking change. Update your configurations if you were using the dedicated OTel port 50052. Internal cluster communication now uses port 50052 exclusively.

Observability Improvements

Enhanced Dashboards: Updated Grafana and Datadog example dashboards with:

  • Snapshot monitoring widgets
  • Improved accelerated datasets section
  • Renamed ingestion lag charts for clarity

Additional Histogram Buckets: Added more buckets to histogram metrics for better latency distribution visibility.

For more details, refer to the Monitoring Documentation.

Hash Indexing for Arrow Acceleration (experimental)

Arrow-based accelerations now support hash indexing for faster point lookups on equality predicates. Hash indexes provide O(1) average-case lookup performance for columns with high cardinality.

Features:

  • Primary key hash index support
  • Secondary index support for non-primary key columns
  • Composite key support with proper null value handling

Example configuration:

datasets:
  - from: postgres:users
    name: users
    acceleration:
      enabled: true
      engine: arrow
      primary_key: user_id
      indexes:
        '(tenant_id, user_id)': unique # Composite hash index

For more details, refer to the Hash Index Documentation.

SMB and NFS Data Connectors

Network-Attached Storage Connectors: New data connectors for SMB (Server Message Block) and NFS (Network File System) protocols enable direct federated queries against network-attached storage without requiring data movement to cloud object stores.

Key Features:

  • SMB Protocol Support: Connect to Windows file shares and Samba servers with authentication support
  • NFS Protocol Support: Connect to Unix/Linux NFS exports for direct data access
  • Federated Queries: Query Parquet, CSV, JSON, and other file formats directly from network storage with full SQL support
  • Acceleration Support: Accelerate data from SMB/NFS sources using DuckDB, Spice Cayenne, or other accelerators

Example spicepod.yaml configuration:

datasets:
  # SMB share
  - from: smb://fileserver/share/data.parquet
    name: smb_data
    params:
      smb_username: ${secrets:SMB_USER}
      smb_password: ${secrets:SMB_PASS}

  # NFS export
  - from: nfs://nfsserver/export/data.parquet
    name: nfs_data

For more details, refer to the Data Connectors Documentation.

ScyllaDB Data Connector

A new data connector for ScyllaDB, the high-performance NoSQL database compatible with Apache Cassandra. Query ScyllaDB tables directly or accelerate them for faster analytics.

Example configuration:

datasets:
  - from: scylladb:my_keyspace.my_table
    name: scylla_data
    acceleration:
      enabled: true
      engine: duckdb

For more details, refer to the ScyllaDB Data Connector Documentation.

Flight SQL TLS Connection Fixes

TLS Connection Support: Fixed TLS connection issues when using grpc+tls:// scheme with Flight SQL endpoints. Added support for custom CA certificate files via the new flightsql_tls_ca_certificate_file parameter.
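Configuration might look like this (a sketch: only flightsql_tls_ca_certificate_file is confirmed by these notes; the endpoint parameter name and values are assumptions):

datasets:
  - from: flightsql:my_table
    name: remote_table
    params:
      flightsql_endpoint: grpc+tls://flight.example.com:443
      flightsql_tls_ca_certificate_file: /etc/ssl/certs/internal-ca.pem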

Developer Experience Improvements

  • Turso v0.3.2 Upgrade: Upgraded Turso accelerator for improved performance and reliability
  • Rust 1.91 Upgrade: Updated to Rust 1.91 for latest language features and performance improvements
  • Spice Cloud CLI: Added spice cloud CLI commands for cloud deployment management
  • Improved Spicepod Schema: Improved JSON schema generation for better IDE support and validation
  • Acceleration Snapshots: Added configurable snapshots_create_interval for periodic acceleration snapshots independent of refresh cycles
  • Tiered Caching with Localpod: The Localpod connector now supports caching refresh mode, enabling multi-layer acceleration where a persistent cache feeds a fast in-memory cache
  • GitHub Data Connector: Added workflows and workflow runs support for GitHub repositories
  • NDJSON/LDJSON Support: Added support for Newline Delimited JSON and Line Delimited JSON file formats

Additional Improvements & Bug Fixes

  • Model Listing: New functionality to list available models across multiple AI providers
  • DuckDB Partitioned Tables: Primary key constraints now supported in partitioned DuckDB table mode
  • Post-refresh Sorting: New on_refresh_sort_columns parameter for DuckDB enables data ordering after writes
  • Improved Install Scripts: Removed jq dependency and improved cross-platform compatibility
  • Better Error Messages: Improved error messaging for bucket UDF arguments and deprecated OpenAI parameters
  • Reliability: Fixed DynamoDB IAM role authentication with new dynamodb_auth: iam_role parameter
  • Reliability: Fixed cluster executors to use scheduler's temp_directory parameter for shuffle files
  • Reliability: Initialize secrets before object stores in cluster executor mode
  • Reliability: Added page-level retry with backoff for transient GitHub GraphQL errors
  • Performance: Improved statistics for rewritten DistributeFileScanOptimizer plans
  • Developer Experience: Added max_message_size configuration for Flight service

Contributors

Breaking Changes

OTel Ingestion Port Change

OTel ingestion has been moved to the Flight port (50051), removing the separate OTel port 50052. Port 50052 is now used exclusively for internal cluster communication. Update your configurations if you were using the dedicated OTel port.

Distributed Query Cluster Mode Requires mTLS

Distributed query cluster mode now requires mTLS for secure communication between cluster nodes. This is a security enhancement to prevent unauthorized nodes from joining the cluster and accessing secrets.

Migration Steps:

  1. Generate certificates using spice cluster tls init and spice cluster tls add
  2. Update scheduler and executor startup commands with --node-mtls-* arguments
  3. For development/testing, use --allow-insecure-connections to opt out of mTLS
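
For example (node names are illustrative; the full flow is shown in the v1.11.0-rc.1 Quick Start below):

# 1. Generate a CA and per-node certificates
spice cluster tls init
spice cluster tls add scheduler1
spice cluster tls add executor1

# 2. Start nodes with the renamed --node-mtls-* arguments
spiced --role scheduler \
  --node-mtls-ca-certificate-file ca.crt \
  --node-mtls-certificate-file scheduler1.crt \
  --node-mtls-key-file scheduler1.key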

Renamed CLI Arguments:

Old Name → New Name
--cluster-mode → --role
--cluster-ca-certificate-file → --node-mtls-ca-certificate-file
--cluster-certificate-file → --node-mtls-certificate-file
--cluster-key-file → --node-mtls-key-file
--cluster-address → --node-bind-address
--cluster-advertise-address → --node-advertise-address
--cluster-scheduler-url → --scheduler-address

Removed CLI Arguments:

  • --cluster-api-key: Replaced by mTLS authentication

Cookbook Updates

New ScyllaDB Data Connector Recipe: New recipe demonstrating how to use the ScyllaDB Data Connector. See ScyllaDB Data Connector Recipe for details.

New SMB Data Connector Recipe: New recipe demonstrating how to use the SMB Data Connector. See SMB Data Connector Recipe for details.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.0 image:

docker pull spiceai/spiceai:1.11.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.0

AWS Marketplace:

Spice is available in the AWS Marketplace.

Dependencies

What's Changed

Changelog

Spice v1.11.0-rc.2 (Jan 22, 2026)

· 24 min read
Viktor Yershov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-rc.2!

v1.11.0-rc.2 is the second release candidate for advanced testing of v1.11. It brings Spice Cayenne to Beta status with acceleration snapshot support, adds a new ScyllaDB Data Connector, and upgrades to DataFusion v51, Arrow 57.2, and iceberg-rust v0.8.0. It also includes significant improvements to distributed query, caching, and observability.

What's New in v1.11.0-rc.2

Spice Cayenne Accelerator Reaches Beta

Spice Cayenne has been promoted to Beta status with acceleration snapshots support and numerous stability improvements.

Improved Reliability:

  • Fixed timezone database issues in Docker images that caused acceleration panics
  • Resolved FuturesUnordered reentrant drop crashes
  • Fixed memory growth issues related to Vortex metrics allocation
  • Metadata catalog now properly respects cayenne_file_path location
  • Added warnings for unparseable configuration values

Example configuration with snapshots:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file

DataFusion v51 Upgrade

Apache DataFusion has been upgraded to v51, bringing significant performance improvements, new SQL features, and enhanced observability.

DataFusion v51 ClickBench Performance

Performance Improvements:

  • Faster CASE Expression Evaluation: Expressions now short-circuit earlier, reuse partial results, and avoid unnecessary scattering, speeding up common ETL patterns
  • Better Defaults for Remote Parquet Reads: DataFusion now fetches the last 512KB of Parquet files by default, typically avoiding 2 I/O requests per file
  • Faster Parquet Metadata Parsing: Leverages Arrow 57's new thrift metadata parser for up to 4x faster metadata parsing

New SQL Features:

  • SQL Pipe Operators: Support for |> syntax for inline transforms
  • DESCRIBE <query>: Returns the schema of any query without executing it
  • Named Arguments in SQL Functions: PostgreSQL-style param => value syntax for scalar, aggregate, and window functions
  • Decimal32/Decimal64 Support: New Arrow types supported including aggregations like SUM, AVG, and MIN/MAX

Example pipe operator:

SELECT * FROM t
|> WHERE a > 10
|> ORDER BY b
|> LIMIT 5;
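
DESCRIBE <query> and named arguments use a similarly inline style. A short sketch (my_udf is a hypothetical function, shown only to illustrate the param => value syntax):

-- Returns the schema of the query without executing it
DESCRIBE SELECT a, b FROM t;

-- PostgreSQL-style named arguments
SELECT my_udf(input => a, scale => 2) FROM t;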

Improved Observability:

  • Improved EXPLAIN ANALYZE Metrics: New metrics including output_bytes, selectivity for filters, reduction_factor for aggregates, and detailed timing breakdowns

Arrow 57.2 Upgrade

Spice has been upgraded to Apache Arrow Rust 57.2.0, bringing major performance improvements and new capabilities.

Arrow 57 Parquet Metadata Parsing Performance

Key Features:

  • 4x Faster Parquet Metadata Parsing: A rewritten thrift metadata parser delivers up to 4x faster metadata parsing, especially beneficial for low-latency use cases and files with large amounts of metadata
  • Parquet Variant Support: Experimental support for reading and writing the new Parquet Variant type for semi-structured data, including shredded variant values
  • Parquet Geometry Support: Read and write support for Parquet Geometry types (GEOMETRY and GEOGRAPHY) with GeospatialStatistics
  • New arrow-avro Crate: Efficient conversion between Apache Avro and Arrow RecordBatches with projection pushdown and vectorized execution support

iceberg-rust v0.8.0 Upgrade

Spice has been upgraded to iceberg-rust v0.8.0, bringing improved Iceberg table support.

Key Features:

  • V3 Metadata Support: Full support for Iceberg V3 table metadata format
  • INSERT INTO Partitioned Tables: DataFusion integration now supports inserting data into partitioned Iceberg tables
  • Improved Delete File Handling: Better support for position and equality delete files, including shared delete file loading and caching
  • SQL Catalog Updates: Implement update_table and register_table for SQL catalog
  • S3 Tables Catalog: Implement update_table for S3 Tables catalog
  • Enhanced Arrow Integration: Convert Arrow schema to Iceberg schema with auto-assigned field IDs, _file column support, and Date32 type support

Acceleration Snapshots

Acceleration snapshots enable point-in-time recovery and data versioning for accelerated datasets. Snapshots capture the state of accelerated data at specific points, allowing for fast bootstrap recovery and rollback capabilities.

Key Feature Improvements in v1.11:

  • Flexible Triggers: Configure when snapshots are created based on time intervals or stream batch counts
  • Automatic Compaction: Reduce storage overhead by compacting older snapshots (DuckDB only)
  • Bootstrap Integration: Snapshots can reset cache expiry on load for seamless recovery (DuckDB with Caching refresh mode)
  • Smart Creation Policies: Only create snapshots when data has actually changed

Example configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      snapshots: enabled
      snapshots_trigger: time_interval
      snapshots_trigger_threshold: 1h
      snapshots_creation_policy: on_changed

Snapshots API and CLI: New API endpoints and CLI commands for managing snapshots programmatically. List, create, and restore snapshots directly from the command line or via HTTP.

For more details, refer to the Acceleration Snapshots Documentation.

ScyllaDB Data Connector

A new data connector for ScyllaDB, the high-performance NoSQL database compatible with Apache Cassandra. Query ScyllaDB tables directly or accelerate them for faster analytics.

Example configuration:

datasets:
  - from: scylladb:my_keyspace.my_table
    name: scylla_data
    acceleration:
      enabled: true
      engine: duckdb

For more details, refer to the ScyllaDB Data Connector Documentation.

Distributed Query Improvements

mTLS Verification: Cluster communication between scheduler and executors now supports mutual TLS verification for enhanced security.

Credential Propagation: Azure and GCS credentials are now automatically propagated to executors in cluster mode, enabling access to cloud storage across the distributed query cluster.

Improved Resilience:

  • Exponential backoff for scheduler disconnection recovery
  • Increased gRPC message size limit from 16MB to 100MB for large query plans
  • HTTP health endpoint for cluster executors
  • Automatic executor role inference when --scheduler-address is provided

For more details, refer to the Distributed Query Documentation.

Caching Acceleration Mode Improvements

The Caching Acceleration Mode introduced in v1.10.0 has received significant performance optimizations and reliability fixes in this release.

Performance Optimizations:

  • Non-blocking Cache Writes: Cache misses no longer block query responses. Data is written to the cache asynchronously after the query returns, reducing query latency for cache miss scenarios.
  • Batch Cache Writes: Multiple cache entries are now written in batches rather than individually, significantly improving write throughput for high-volume cache operations.

Reliability Fixes:

  • Correct SWR Refresh Behavior: The stale-while-revalidate (SWR) pattern now correctly refreshes only the specific entries that were accessed instead of refreshing all stale rows in the dataset. This prevents unnecessary source queries and reduces load on upstream data sources.
  • Deduplicated Refresh Requests: Fixed an issue where JSON array responses could trigger multiple redundant refresh operations. Refresh requests are now properly deduplicated.
  • Fixed Cache Hit Detection: Resolved an issue where queries that didn't include fetched_at in their projection would always result in cache misses, even when cached data was available.
  • Unfiltered Query Optimization: SELECT * queries without filters now return cached data directly without unnecessary filtering overhead.

For more details, refer to the Caching Acceleration Mode Documentation.

DynamoDB Connector Enhancements

  • Added JSON nesting for DynamoDB Streams
  • Proper batch deletion handling

URL Tables

Query data sources directly via URL in SQL without prior dataset registration. Supports S3, Azure Blob Storage, and HTTP/HTTPS URLs with automatic format detection and partition inference.

Supported Patterns:

  • Single files: SELECT * FROM 's3://bucket/data.parquet'
  • Directories/prefixes: SELECT * FROM 's3://bucket/data/'
  • Glob patterns: SELECT * FROM 's3://bucket/year=*/month=*/data.parquet'

Key Features:

  • Automatic file format detection (Parquet, CSV, JSON, etc.)
  • Hive-style partition inference with filter pushdown
  • Schema inference from files
  • Works with both SQL and DataFrame APIs

Example with hive partitioning:

-- Partitions are automatically inferred from paths
SELECT * FROM 's3://bucket/data/' WHERE year = '2024' AND month = '01'

Enable via spicepod.yml:

runtime:
  params:
    url_tables: enabled

Cluster Mode Async Query APIs (experimental)

New asynchronous query APIs for long-running queries in cluster mode:

  • /v1/queries endpoint: Submit queries and retrieve results asynchronously
  • Arrow Flight async support: Non-blocking query execution via Arrow Flight protocol
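
A hypothetical interaction with the async endpoint. Only the /v1/queries path is confirmed above; the request and response shapes are assumptions (port 8090 is the HTTP API port noted elsewhere in this post):

# Submit a query for asynchronous execution (returns a query ID)
curl -X POST http://localhost:8090/v1/queries \
  -H 'Content-Type: text/plain' \
  -d 'SELECT COUNT(*) FROM my_dataset'

# Retrieve results later by ID (path shape is an assumption)
curl http://localhost:8090/v1/queries/<query-id>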

Observability Improvements

Enhanced Dashboards: Updated Grafana and Datadog example dashboards with:

  • Snapshot monitoring widgets
  • Improved accelerated datasets section
  • Renamed ingestion lag charts for clarity

Additional Histogram Buckets: Added more buckets to histogram metrics for better latency distribution visibility.

For more details, refer to the Monitoring Documentation.

Additional Improvements

  • Model Listing: New functionality to list available models across multiple AI providers
  • DuckDB Partitioned Tables: Primary key constraints now supported in partitioned DuckDB table mode
  • Post-refresh Sorting: New on_refresh_sort_columns parameter for DuckDB enables data ordering after writes
  • Improved Install Scripts: Removed jq dependency and improved cross-platform compatibility
  • Better Error Messages: Improved error messaging for bucket UDF arguments and deprecated OpenAI parameters

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

New ScyllaDB Data Connector Recipe: New recipe demonstrating how to use the ScyllaDB Data Connector. See ScyllaDB Data Connector Recipe for details.

New SMB Data Connector Recipe: New recipe demonstrating how to use the SMB Data Connector. See SMB Data Connector Recipe for details.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.0-rc.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:v1.11.0-rc.2 image:

docker pull spiceai/spiceai:v1.11.0-rc.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

Spice is available in the AWS Marketplace.

Dependencies

Changelog

Spice v1.11.0-rc.1 (Jan 6, 2026)

· 17 min read
Evgenii Khramkov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-rc.1!

v1.11.0-rc.1 is the first release candidate for early testing of v1.11 features. It includes Distributed Query with mTLS for enterprise-grade secure cluster communication, new SMB and NFS Data Connectors for direct network-attached storage access, Prepared Statements for improved query performance and security, Cayenne Accelerator enhancements with key-based deletion vectors and Amazon S3 Express One Zone support, Google LLM support for expanded AI inference capabilities, and Spice Java SDK v0.5.0 with parameterized query support.

What's New in v1.11.0-rc.1

Distributed Query with mTLS

Enterprise-Grade Secure Cluster Communication: Distributed query cluster mode now enables mutual TLS (mTLS) by default for secure communication between schedulers and executors. Internal cluster communication includes highly privileged RPC calls like fetching Spicepod configuration and expanding secrets. mTLS ensures only authenticated nodes can join the cluster and access sensitive data.

Key Features:

  • Mutual TLS Authentication: All executor-to-scheduler and executor-to-executor gRPC connections on the internal cluster port (50052) are secured with mTLS, preventing unauthorized nodes from joining the cluster
  • Certificate Management CLI: New spice cluster tls init and spice cluster tls add developer commands for generating CA certificates and node certificates with proper SANs (Subject Alternative Names)
  • Simplified CLI Arguments: Renamed cluster arguments for clarity (--role, --scheduler-address, --node-mtls-*) with --scheduler-address implying --role executor
  • Port Separation: Public services (Flight queries, HTTP API, Prometheus metrics) remain on ports 50051, 8090, and 9090 respectively, while internal cluster services (SchedulerGrpcServer, ClusterService) are isolated on port 50052 with mTLS enforced
  • Development Mode: Use --allow-insecure-connections flag to disable mTLS requirement for local development and testing

Quick Start:

# Generate certificates for development
spice cluster tls init
spice cluster tls add scheduler1
spice cluster tls add executor1

# Start scheduler
spiced --role scheduler \
  --node-mtls-ca-certificate-file ca.crt \
  --node-mtls-certificate-file scheduler1.crt \
  --node-mtls-key-file scheduler1.key

# Start executor
spiced --role executor \
  --scheduler-address https://scheduler1:50052 \
  --node-mtls-ca-certificate-file ca.crt \
  --node-mtls-certificate-file executor1.crt \
  --node-mtls-key-file executor1.key

For more details, refer to the Distributed Query Documentation.

SMB and NFS Data Connectors

Network-Attached Storage Connectors: New data connectors for SMB (Server Message Block) and NFS (Network File System) protocols enable direct federated queries against network-attached storage without requiring data movement to cloud object stores.

Key Features:

  • SMB Protocol Support: Connect to Windows file shares and Samba servers with authentication support
  • NFS Protocol Support: Connect to Unix/Linux NFS exports for direct data access
  • Federated Queries: Query Parquet, CSV, JSON, and other file formats directly from network storage with full SQL support
  • Acceleration Support: Accelerate data from SMB/NFS sources using DuckDB, Spice Cayenne, or other accelerators

Example spicepod.yaml configuration:

datasets:
  # SMB share
  - from: smb://fileserver/share/data.parquet
    name: smb_data
    params:
      smb_username: ${secrets:SMB_USER}
      smb_password: ${secrets:SMB_PASS}

  # NFS export
  - from: nfs://nfsserver/export/data.parquet
    name: nfs_data

For more details, refer to the Data Connectors Documentation.

Prepared Statements

Improved Query Performance and Security: Spice now supports prepared statements, enabling parameterized queries that improve both performance through query plan caching and security by preventing SQL injection attacks.

Key Features:

  • Query Plan Caching: Prepared statements cache query plans, reducing planning overhead for repeated queries
  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Arrow Flight SQL Support: Full prepared statement support via Arrow Flight SQL protocol

SDK Support:

  • gospice (Go): ✅ Full support since v8.0.0, via SqlWithParams() with typed constructors (Int32Param, StringParam, TimestampParam, etc.)
  • spice-rs (Rust): ✅ Full support since v3.0.0, via query_with_params() with RecordBatch parameters
  • spice-dotnet (.NET): ❌ Not yet supported; coming soon
  • spice-java (Java): ✅ Full support since v0.5.0, via queryWithParams() with typed Param constructors (Param.int64(), Param.string(), etc.)
  • spice.js (JavaScript): ❌ Not yet supported; coming soon
  • spicepy (Python): ❌ Not yet supported; coming soon

Example (Go):

import "github.com/spiceai/gospice/v8"

client, _ := spice.NewClient()
defer client.Close()

// Parameterized query with typed parameters
results, _ := client.SqlWithParams(ctx,
"SELECT * FROM products WHERE price > $1 AND category = $2",
spice.Float64Param(10.0),
spice.StringParam("electronics"),
)

Example (Java):

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader reader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        10.0, "electronics");

    // With explicit typed parameters (reassigns the same reader variable)
    reader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        Param.float64(10.0),
        Param.string("electronics"));
}

For more details, refer to the Parameterized Queries Documentation.

Spice Cayenne Accelerator Enhancements

The Spice Cayenne data accelerator has been improved with several key enhancements:

  • Key-Based Deletion Vectors: Improved deletion vector support using key-based lookups for more efficient data management and faster delete operations. Key-based deletion vectors are more memory-efficient than positional vectors for sparse deletions.
  • S3 Express One Zone Support: Store Cayenne data files in S3 Express One Zone for single-digit millisecond latency, ideal for latency-sensitive query workloads that require persistence.

Example spicepod.yaml configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: fast_data
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        # Use S3 Express One Zone for data files
        cayenne_s3express_bucket: my-express-bucket--usw2-az1--x-s3

For more details, refer to the Cayenne Documentation.

Google LLM Support

Expanded AI Provider Support: Spice now supports Google embedding and chat models via the Google AI provider, expanding the available LLM options for AI inference workloads alongside existing providers like OpenAI, Anthropic, and AWS Bedrock.

Key Features:

  • Google Chat Models: Access Google's Gemini models for chat completions
  • Google Embeddings: Generate embeddings using Google's text embedding models
  • Unified API: Use the same OpenAI-compatible API endpoints for all LLM providers

Example spicepod.yaml configuration:

models:
  - from: google:gemini-2.0-flash
    name: gemini
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

embeddings:
  - from: google:text-embedding-004
    name: google_embeddings
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

For more details, refer to the Google LLM Documentation (see docs PR #1286).

Spice Java SDK v0.5.0

Parameterized Query Support for Java: The Spice Java SDK v0.5.0 introduces parameterized queries using ADBC (Arrow Database Connectivity), providing a safer and more efficient way to execute queries with dynamic parameters.

Key Features:

  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Automatic Type Inference: Java types are automatically mapped to Arrow types (e.g., double → Float64, String → Utf8)
  • Explicit Type Control: Use the new Param class with typed factory methods (Param.int64(), Param.string(), Param.decimal128(), etc.) for precise control over Arrow types
  • Updated Dependencies: Apache Arrow Flight SQL upgraded to 18.3.0, plus new ADBC driver support

Example:

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;
import java.math.BigDecimal;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader reader = client.queryWithParams(
        "SELECT * FROM taxi_trips WHERE trip_distance > $1 LIMIT 10",
        5.0);

    // With explicit typed parameters for precise control (reassigns reader)
    reader = client.queryWithParams(
        "SELECT * FROM orders WHERE order_id = $1 AND amount >= $2",
        Param.int64(12345),
        Param.decimal128(new BigDecimal("99.99"), 10, 2));
}

Maven:

<dependency>
    <groupId>ai.spice</groupId>
    <artifactId>spiceai</artifactId>
    <version>0.5.0</version>
</dependency>

For more details, refer to the Spice Java SDK Repository.

OpenTelemetry Improvements

Unified Telemetry Endpoint: OTel metrics ingestion has been consolidated to the Flight port (50051), simplifying deployment by removing the separate OTel port (50052). The push-based metrics exporter continues to support integration with OpenTelemetry collectors.

Note: This is a breaking change. Update your configurations if you were using the dedicated OTel port 50052. Internal cluster communication now uses port 50052 exclusively.

Developer Experience Improvements

  • Turso v0.3.2 Upgrade: Upgraded Turso accelerator for improved performance and reliability
  • Rust 1.91 Upgrade: Updated to Rust 1.91 for latest language features and performance improvements
  • Spice Cloud CLI: Added spice cloud CLI commands for cloud deployment management
  • Improved Spicepod Schema: Enhanced JSON schema generation for better IDE support and validation
  • Acceleration Snapshots: Added configurable snapshots_create_interval for periodic acceleration snapshots independent of refresh cycles
  • Tiered Caching with Localpod: The Localpod connector now supports caching refresh mode, enabling multi-layer acceleration where a persistent cache feeds a fast in-memory cache (see the sketch after this list)
  • GitHub Data Connector: Added workflows and workflow runs support for GitHub repositories
  • NDJSON/LDJSON Support: Added support for Newline Delimited JSON and Line Delimited JSON file formats
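
A minimal sketch of tiered caching with Localpod referenced above. Dataset names are illustrative, and the localpod:<dataset> form follows the Localpod connector convention:

datasets:
  # Layer 1: persistent on-disk cache of the remote source
  - from: s3://my-bucket/data.parquet
    name: base_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      refresh_mode: caching

  # Layer 2: fast in-memory cache fed from the persistent layer
  - from: localpod:base_data
    name: hot_data
    acceleration:
      enabled: true
      engine: arrow
      refresh_mode: caching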

Additional Improvements & Bug Fixes

  • Reliability: Fixed DynamoDB IAM role authentication with new dynamodb_auth: iam_role parameter
  • Reliability: Fixed cluster executors to use scheduler's temp_directory parameter for shuffle files
  • Reliability: Initialize secrets before object stores in cluster executor mode
  • Reliability: Added page-level retry with backoff for transient GitHub GraphQL errors
  • Performance: Improved statistics for rewritten DistributeFileScanOptimizer plans
  • Developer Experience: Added max_message_size configuration for Flight service

Contributors

Breaking Changes

OTel Ingestion Port Change

OTel ingestion has been moved to the Flight port (50051), removing the separate OTel port 50052. Port 50052 is now used exclusively for internal cluster communication. Update your configurations if you were using the dedicated OTel port.

Distributed Query Cluster Mode Requires mTLS

Distributed query cluster mode now requires mTLS for secure communication between cluster nodes. This is a security enhancement to prevent unauthorized nodes from joining the cluster and accessing secrets.

Migration Steps:

  1. Generate certificates using spice cluster tls init and spice cluster tls add
  2. Update scheduler and executor startup commands with --node-mtls-* arguments
  3. For development/testing, use --allow-insecure-connections to opt out of mTLS

Renamed CLI Arguments:

Old Name → New Name
--cluster-mode → --role
--cluster-ca-certificate-file → --node-mtls-ca-certificate-file
--cluster-certificate-file → --node-mtls-certificate-file
--cluster-key-file → --node-mtls-key-file
--cluster-address → --node-bind-address
--cluster-advertise-address → --node-advertise-address
--cluster-scheduler-url → --scheduler-address

Removed CLI Arguments:

  • --cluster-api-key: Replaced by mTLS authentication

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.

Upgrading

To try v1.11.0-rc.1, use one of the following methods:

CLI:

spice upgrade --version 1.11.0-rc.1

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.0-rc.1 image:

docker pull spiceai/spiceai:1.11.0-rc.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.0-rc.1

AWS Marketplace:

🎉 Spice is available in the AWS Marketplace!

What's Changed

Changelog

Spice v1.10.0 (Dec 9, 2025)

· 18 min read
William Croxson
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.10.0! ⚡

Spice v1.10.0 introduces a new Caching Acceleration Mode with stale-while-revalidate (SWR) semantics for disk-persisted, low-latency queries with background refresh. This release also adds the TinyLFU eviction policy for the SQL results cache, a preview of the DynamoDB Streams connector for real-time CDC, S3 location predicate pruning for faster partitioned queries, improved distributed query execution, and multiple security hardening improvements.

What's New in v1.10.0

Caching Acceleration Mode

Low-Latency Queries with Background Refresh: This release introduces a new caching acceleration mode that implements the stale-while-revalidate (SWR) pattern. Queries return cached results immediately while data refreshes asynchronously in the background, eliminating query latency spikes during refresh cycles. Cached data persists to disk using DuckDB, SQLite, or Cayenne file modes.

Key Features:

  • Stale-While-Revalidate (SWR): Returns cached data immediately while refreshing in the background, reducing query latency
  • Disk Persistence: Cached results persist across restarts using DuckDB, SQLite, or Cayenne file modes
  • Configurable Refresh: Control refresh intervals with refresh_check_interval to balance freshness and source load

Recommendation: Use retention configuration with caching acceleration to ensure stale data is cleaned up over time.

Example spicepod.yaml configuration:

datasets:
  - from: http://localhost:7400
    name: cached_data
    time_column: fetched_at
    acceleration:
      enabled: true
      engine: duckdb
      mode: file # Persist cache to disk
      refresh_mode: caching
      refresh_check_interval: 10m
      retention_check_enabled: true
      retention_period: 24h
      retention_check_interval: 1h

For more details, refer to the Data Acceleration Documentation.

TinyLFU Cache Eviction Policy

Higher Cache Hit Rates for SQL Results Cache: A new TinyLFU cache eviction policy is now available for the SQL results cache. TinyLFU is a probabilistic cache admission policy that maintains higher hit rates than LRU while keeping memory usage predictable, making it ideal for workloads with varying query frequency patterns.

Example spicepod.yaml configuration:

runtime:
  caching:
    sql_results:
      enabled: true
      eviction_policy: tiny_lfu # default: lru

For more details, refer to the Caching Documentation and the Moka TinyLFU Documentation for details of the algorithm.

DynamoDB Streams Data Connector (Preview)

Real-Time Change Data Capture for DynamoDB: The DynamoDB connector now integrates with DynamoDB Streams for real-time change data capture (CDC). This enables continuous synchronization of DynamoDB table changes into Spice for real-time query, search, and LLM-inference.

Key Features:

  • Real-Time CDC: Automatically captures inserts, updates, and deletes from DynamoDB tables as they occur
  • Table Bootstrapping: Performs an initial full table scan before streaming changes, ensuring complete data consistency
  • Acceleration Integration: Works with refresh_mode: changes to incrementally update accelerated datasets

Note: DynamoDB Streams must be enabled on your DynamoDB table. This feature is in preview.

Example spicepod.yaml configuration:

datasets:
  - from: dynamodb:my_table
    name: orders_stream
    acceleration:
      enabled: true
      refresh_mode: changes # Enable Streams capture

For more details, refer to the DynamoDB Connector Documentation.

OpenTelemetry Metrics Exporter

Spice can now push metrics to an OpenTelemetry collector, enabling integration with platforms such as Jaeger, New Relic, Honeycomb, and other OpenTelemetry-compatible backends.

Key Features:

  • Protocol Support: Supports the gRPC (default port 4317) protocol
  • Configurable Push Interval: Control how frequently metrics are pushed to the collector

Example spicepod.yaml configuration for gRPC:

runtime:
  telemetry:
    enabled: true
    otel_exporter:
      endpoint: 'localhost:4317'
      push_interval: '30s'

For more details, refer to the Observability & Monitoring Documentation.

S3 Connector Improvements

S3 Location Predicate Pruning: The S3 data connector now supports location-based predicate pruning, dramatically reducing data scanned by pushing down location filter predicates to S3 listing operations. For partitioned datasets (e.g., year=2025/month=12/), Spice now skips listing irrelevant partitions entirely, significantly reducing query latency and S3 API costs.
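
For example, with data laid out as s3://my-bucket/sales/year=2025/month=12/..., a query like the following (table and column names are illustrative) now lists only the matching partition prefixes instead of the entire bucket:

SELECT * FROM sales
WHERE year = '2025' AND month = '12';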

AWS S3 Tables Write Support: Full read/write capability for AWS S3 Tables, enabling direct integration with AWS's managed table format for S3. Use standard SQL INSERT INTO to write data.
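
For example (table names are illustrative):

INSERT INTO my_s3_table
SELECT * FROM staging_orders;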

For more details, refer to the S3 Data Connector Documentation and Glue Data Connector Documentation.

Faster Distributed Query Execution

Distributed query planning and execution have been significantly improved:

  • Fixed executor registration in cluster mode for more reliable distributed deployments
  • Improved hostname resolution for Flight server binding, enabling better executor discovery
  • Distributed accelerator registration: Data accelerators now properly register in distributed mode
  • Optimized query planning: DistributeFileScanOptimizer improvements for faster planning with large datasets

For more details, refer to the Distributed Query Documentation.

Search Improvements

Search capabilities have been improved with several performance and reliability enhancements:

  • Fixed FTS query blocking: Full-text search queries no longer block unnecessarily, improving query responsiveness
  • Optimized vector index operations: Eliminated unnecessary list_vectors calls for better performance
  • Improved limit pushdown: IndexerExec now properly handles limit pushdown for more efficient searches

For more details, refer to the Search Documentation.

Security Hardening

Multiple security improvements have been implemented:

  • SQL Identifier Quoting: Hardened SQL identifier quoting across all database connectors (PostgreSQL, MySQL, DuckDB, etc.) to prevent SQL injection attacks through table or column names
  • Token Redaction: Sensitive authentication tokens are now fully redacted in debug and error output, preventing accidental credential exposure in logs
  • Path Traversal Prevention: Fixed tar extraction operations to prevent directory traversal vulnerabilities when processing archived files
  • Input Sanitization: Added strict validation for top_n_sample order_by clause parsing to prevent injection attacks
  • Glue Credential Handling: Prevented automatic loading of AWS credentials from environment in Glue connector, ensuring explicit credential configuration

Developer Experience Improvements

  • Health probe metrics: Added health probe latency metrics for better observability
  • CLI improvements: Fixed .clear history command in the REPL to fully clear persisted history

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 82 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.10.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.0 image:

docker pull spiceai/spiceai:1.10.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Changelog

Spice v1.10.0-rc.1 (Dec 2, 2025)

· 11 min read
David Stancu
Principal Software Engineer at Spice AI

Announcing the release of Spice v1.10.0-rc.1! ⚡

v1.10.0-rc.1 is a release candidate for early testing of v1.10 features, including an all-new caching acceleration mode, the tiny_lfu caching policy, a new DynamoDB Streams connector (Preview), improvements to the DynamoDB connector, faster distributed query execution, S3 connector improvements, and security hardening ahead of v1.10.0-stable.

What's New in v1.10.0-rc.1

Caching Acceleration Mode with SWR and TinyLFU

This release introduces a new caching acceleration mode that implements the stale-while-revalidate (SWR) pattern using Data Accelerators such as DuckDB or Cayenne, enabling queries to return file-persisted cached results immediately while asynchronously refreshing data in the background. Combined with the new TinyLFU cache eviction policy, Spice can now maintain higher cache hit rates while keeping memory usage predictable.

Key Features:

  • Stale-While-Revalidate (SWR): Returns cached data immediately while refreshing in the background
  • Data Accelerator Support: Cached accelerators can persist data to disk using DuckDB, SQLite, or Cayenne file modes.
  • TinyLFU Cache Policy: Probabilistic cache admission policy that maintains high hit rates with minimal overhead
  • Predictable Memory Usage: Configurable memory limits with automatic eviction of less frequently used entries

Example Spicepod.yml configuration:

runtime:
  caching:
    sql_results:
      enabled: true
      eviction_policy: tiny_lfu # default lru

datasets:
  - from: s3://my-bucket/data.parquet
    name: cached_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file # Persist cache to disk
      refresh_mode: caching
      refresh_check_interval: 10m

For more details, refer to the Data Acceleration Documentation and Caching Documentation.

DynamoDB Streams Data Connector in Preview

The DynamoDB Connector now integrates with DynamoDB Streams, enabling real-time streaming with support for both table bootstrapping and continuous change data capture (CDC). The connector automatically detects changes in DynamoDB tables and streams them into Spice for real-time query, search, and LLM inference.

Key Features:

  • Real-Time CDC: Automatically captures inserts, updates, and deletes from DynamoDB tables
  • Table Bootstrapping: Initial full table load before streaming changes

Example Spicepod.yml configuration:

datasets:
  - from: dynamodb:my_table
    name: orders_stream
    acceleration:
      enabled: true
      refresh_mode: changes

For more details, refer to the DynamoDB Connector Documentation.

Cayenne Accelerator Enhancements

The Cayenne data accelerator now supports:

  • Sort Columns Configuration: Optimize inserts by pre-sorting data on specified columns for improved query performance

Example Spicepod.yml configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: sorted_data
    acceleration:
      enabled: true
      engine: cayenne
      mode: file_create
      params:
        sort_columns: timestamp,region

For more details, refer to the Cayenne Documentation.

S3 Connector Improvements

S3 Location Predicate Pruning: The S3 data connector now supports location-based predicate pruning, dramatically reducing data scanned by pushing down predicates to S3 listing operations. This optimization is especially effective for partitioned datasets stored in S3.

AWS S3 Tables Write Support: Full read/write capability for AWS S3 Tables, enabling fast integration with AWS's table format for S3.

For more details, refer to the S3 Tables Data Connector Documentation and the Glue Data Connector Documentation.

Faster Distributed Query Execution

Distributed query planning and execution have been significantly improved:

  • Fixed executor registration in cluster mode for more reliable distributed deployments
  • Improved hostname resolution for Flight server binding, enabling better executor discovery
  • Distributed accelerator registration: Data accelerators now properly register in distributed mode
  • Optimized query planning: DistributeFileScanOptimizer improvements for faster planning with large datasets

For more details, refer to the Distributed Query Documentation.

Search Improvements

Search capabilities have been improved with several performance and reliability enhancements:

  • Fixed FTS query blocking: Full-text search queries no longer block unnecessarily, improving query responsiveness
  • Optimized vector index operations: Eliminated unnecessary list_vectors calls for better performance
  • Improved limit pushdown: IndexerExec now properly handles limit pushdown for more efficient searches

For more details, refer to the Search Documentation.

Security Hardening

Multiple security improvements have been implemented:

  • SQL identifier quoting: Hardened SQL identifier quoting across all connectors to prevent injection attacks
  • Token redaction: Sensitive tokens are now fully redacted in debug output to prevent credential leakage
  • Path traversal prevention: Fixed tar extraction to prevent path traversal vulnerabilities
  • Input sanitization: Added validation for top_n_sample order_by parsing
  • Improved credential handling: Improved credential management in Glue connector

Developer Experience Improvements

  • Health probe metrics: Added health probe latency metrics for better observability
  • CLI improvements: Fixed .clear history command in the REPL to fully clear persisted history

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates. The Spice Cookbook still offers 82+ recipes to help you prototype quickly.

Upgrading

To try v1.10.0-rc.1, use one of the following methods:

CLI:

spice upgrade --version 1.10.0-rc1

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.0-rc1 image:

docker pull spiceai/spiceai:1.10.0-rc1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.10.0-rc1

AWS Marketplace:

🎉 Spice is available in the AWS Marketplace.

What's Changed

Changelog

Spice v1.9.1 (Nov 24, 2025)

· 7 min read
Viktor Yershov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.9.1! 🔥

v1.9.1 introduces Amazon Bedrock Nova 2 Multimodal embeddings support, with high-dimensional vectors of up to 3,072 dimensions and purpose-optimized embeddings for semantic search and retrieval operations. It also adds DynamoDB timestamp filter pushdown for more efficient append-mode acceleration with configurable time formatting, HTTP Data Connector health probe configuration for improved endpoint validation reliability, and Spice .NET SDK v0.2 with expanded .NET version support and updated gRPC libraries. This release focuses on bug fixes, stability, and performance improvements.

Amazon Bedrock Nova 2 Multimodal embeddings

Spice now supports the Amazon Nova 2 Multimodal embeddings models via the Bedrock models provider, enabling high-quality text embeddings for semantic search and vector similarity operations. The Nova embeddings model offers configurable dimensions and advanced features like truncation modes and embedding purpose optimization.

Key Features:

  • High-Dimensional Embeddings: Support for up to 3,072 dimensions for rich semantic representations
  • Configurable Truncation: Control how input text is truncated when exceeding token limits (START, END, or NONE)
  • Purpose Optimization: Optimize embeddings for specific use cases (GENERIC_INDEX, GENERIC_RETRIEVAL, or CLASSIFICATION)
  • Multimodal Model: Leverages Amazon's Nova 2 multimodal architecture for consistent embeddings across different content types

Example spicepod.yml configuration:

embeddings:
  - from: bedrock:amazon.nova-2-multimodal-embeddings-v1:0
    name: nova_embeddings
    params:
      dimensions: '3072' # Required: Output dimensions
      truncation_mode: START # Optional: START, END, or NONE (default: NONE)
      embedding_purpose: GENERIC_RETRIEVAL # Optional. GENERIC_INDEX is default

For more details on the embedding parameters and configuration options, refer to the Amazon Nova Embeddings Documentation and the Spice Embeddings Documentation.

DynamoDB Timestamp Filter Pushdown

The DynamoDB Data Connector now supports timestamp filter pushdown, enabling more efficient append-mode acceleration refreshes by pushing timestamp filters directly to DynamoDB queries. Since DynamoDB stores timestamps as strings rather than native datetime types, this feature includes configurable timestamp formatting to ensure correct parsing and filtering.

Key Features:

  • Filters on timestamp columns are now pushed down to DynamoDB, reducing data transfer and improving query performance
  • Support for Go-style datetime formatting patterns to handle various timestamp string formats
  • Uses ISO 8601 format by default when no custom format is specified

Example spicepod.yml configuration:

datasets:
  - from: dynamodb:sales
    name: sales
    time_column: created_at
    time_format: timestamptz
    params:
      time_format: 2006-01-02T15:04:05.000Z07:00
    acceleration:
      enabled: true
      engine: duckdb
      refresh_mode: append

For more details, refer to the DynamoDB Data Connector Documentation.

HTTP Data Connector Health Probe Configuration

The HTTP Data Connector now supports configurable health probe paths for endpoint validation. Instead of using a random non-existent path, the system can now validate endpoints using a user-specified path, improving flexibility and reliability for health checks.

Example spicepod.yml configuration:

datasets:
  - from: https://api.tvmaze.com
    name: tvmaze
    params:
      file_format: json
      health_probe: /health-check

For more details, refer to the HTTP Data Connector Documentation.

Spice .NET SDK v0.2

The Spice .NET SDK has been upgraded with expanded .NET version support, custom User-Agent configuration, and updated gRPC libraries: spice-dotnet v0.2.0. The SDK is available on NuGet.

Key Features:

  • Expanded .NET Support: Now supports .NET Standard 2.0 and .NET 8.0, 9.0, and 10.0.
  • Custom User-Agent: Configure custom User-Agent headers for client identification and telemetry.
  • Updated gRPC Libraries: Upgraded gRPC dependencies and netstandard for improved performance and reliability

Upgrade Example:

dotnet add package SpiceAI --version 0.2.0

For more details, refer to the .NET SDK Documentation.

Additional Improvements & Bug Fixes

  • Reliability: Fixed view loading to respect topological order, preventing dependency resolution errors.
  • Reliability: Migrated from deprecated trust_dns_resolver to hickory_resolver for improved DNS resolution reliability.
  • Security: Fixed arbitrary file access vulnerability during archive extraction ("Zip Slip") to prevent potential security exploits.
  • Distributed Query: Fixed object store initialization across scheduler/executor gap, improving reliability for distributed query execution.
  • Distributed Query: Optimized query routing by preventing runtime.* schema queries from being sent to the scheduler, improving performance for metadata queries.
  • Performance: Added Blake3 and xxHash support with xxh3_64 as the default caching hashing algorithm for improved cache and query performance.
  • Performance: Optimized default Zstd compression level to 6 for better balance between compression ratio and speed.
  • UX: Improved dataset loading output with clearer progress indicators and status messages.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 82 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.9.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.9.1 image:

docker pull spiceai/spiceai:1.9.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed

Changelog

  • fix integration tests: order by the query to make snapshots deterministic by @phillipleblanc in #8198
  • Add health probe override by @lukekim in #8236
  • Use Moka optionally_get_with for SWR single-in-flight semantics by @lukekim in #8231
  • fix: Arbitrary file access during archive extraction ("Zip Slip") by @phillipleblanc in #8242
  • Migrate trust_dns_resolver to hickory_resolver by @phillipleblanc in #8243
  • fix: Deny assert macros in non-test code by @peasee in #8223
  • Distributed query: Object store initialization across scheduler/executor gap, misc bugfixes & improvements by @mach-kernel in #8009
  • Add Blake3, enable xxHash, set xxh3_64 as default, add bench by @lukekim in #8157
  • Make cache zstd default compression level 6 by @lukekim in #8234
  • Use seed for xxh3 by @lukekim in #8232
  • DynamoDB Timestamp Filter Pushdown by @krinart in #8235
  • Add ready_wait for mongo-arrow benchmarks by @krinart in #8246
  • Add support for amazon.nova-2-multimodal-embeddings-v1:0 by @Jeadie in #8225
  • Improve the output of dataset loading by @lukekim in #8256
  • Load views in topological order by @lukekim in #8255
  • Distributed query: Do not send runtime.* schema queries to scheduler by @mach-kernel in #8271
  • Remove input length check for Nova model. by @Jeadie in #8270