
Spice v2.0-rc.1 (Mar 4, 2026)

· 23 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v2.0-rc.1! 🚀

v2.0.0-rc.1 is the first release candidate for early testing of v2.0.

Highlights in this release candidate include:

  • Active-Active Highly-Available Distributed Query that is object-store-native and built on Apache Ballista, with dynamic cluster sizing, distributed ingestion, and cluster observability
  • Spice Cayenne RC with staged append writes, file-based retention deletes, composite partitioning, and distributed ingestion
  • DataFusion v52.2.0 Upgrade with sort pushdown, a new merge join, and dynamic filters
  • DDL Support for CREATE TABLE and DROP TABLE via SQL for Iceberg and Cayenne catalogs
  • DuckLake Catalog & Data Connector for lakehouse-style data management
  • GCS Data Connector (Alpha) for Google Cloud Storage
  • Rust CLI Rewrite for a unified single-binary experience
  • Dependency upgrades including DuckDB v1.4.4, delta_kernel v0.18.2, and mistral.rs

Spice v2.0 includes several breaking changes. Review the breaking changes section before upgrading.

Distribution Changes

AI/ML support, including local LLM/ML model inference and hosted LLM inference, is now included in the default Spice build and image. The separate models build variant has been removed.

With models now included by default, the data-only distribution (without AI/ML support) is only published in nightly builds. Official production-ready data-only distributions are available exclusively through Spice Cloud and the Enterprise release.

A new Network Attached Storage (NAS) distribution with built-in SMB and NFS data connector support is also now available in nightly builds and with Spice.ai Enterprise.

| Distribution / Variant | Open Source | Spice Cloud | Enterprise |
|---|---|---|---|
| Default | ✅ | ✅ | ✅ |
| Data | Nightly only | ✅ | ✅ |
| NAS (SMB + NFS) | Nightly only | ❌ | ✅ |
| Metal (macOS) | ✅ | ✅ | ✅ |
| CUDA (Linux) | Nightly only | ✅ | ✅ |
| Allocator variants | Nightly only | ✅ | ✅ |
| ODBC connector | Local build only | ✅ | ✅ |

For more details, see the Distributions documentation.

What's New in v2.0.0-rc.1

Active-Active HA Distributed Query

Distributed Query exits Beta with active-active, highly available, object-store-based query execution.

Distributed query supports two execution modes:

  • Synchronous: Queries for accelerated datasets are distributed across executors and results are streamed back in real time. Non-accelerated datasets execute only on the scheduler. Best for interactive queries where low latency is critical.
  • Asynchronous: Queries are submitted via the new HTTP-only /v1/queries API and results are materialized to object storage for later retrieval. Best for long-running analytical workloads, batch processing, and non-accelerated datasets in distributed mode (see the sketch after this list).
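
For illustration, a minimal asynchronous flow might look like the following. The request payload shape and the local HTTP endpoint shown here are assumptions, not the documented contract:

curl -X POST http://localhost:8090/v1/queries \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT COUNT(*) FROM large_dataset"}'   # payload shape is an assumption

# The new interactive REPL (spice query) submits and tracks async queries via the same API
spice query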

Key improvements:

  • Dynamic Cluster Sizing: The query planner automatically adjusts parallelism based on the number of active executors in the cluster, ensuring optimal resource utilization as nodes are added or removed.
  • Distributed Ingestion: Data ingestion for partitioned accelerated tables is now distributed across executor nodes, enabling higher throughput and parallel data loading in cluster mode. Regular (non-partitioned) accelerated tables do not distribute ingestion loads.
  • Synchronous Execution on Scheduler: /v1/sql and FlightSQL queries now execute synchronously on the scheduler when appropriate, reducing inter-node overhead for queries that don't benefit from distribution.
  • Faster Failure Detection: Executor heartbeat timeout reduced from 180s to 30s, enabling the cluster to quickly detect and respond to executor failures.
  • Cluster Observability: New metrics and Grafana dashboard for monitoring distributed query clusters.

Spice Cayenne Improvements

The Spice Cayenne data accelerator exits Beta with significant reliability and performance improvements:

  • Staged Append Writes: WAL-based staged append writes prevent partial writes and data loss on stream errors. Batches are written to a WAL file before being committed, ensuring atomicity.
  • File-Based Retention Deletes: Time-based retention now supports file-level deletes for both position-based and primary-key tables, reducing I/O overhead compared to row-level deletion.
  • Multiple Partition Expressions: Support for composite partitioning with partition_by: [col1, col2] using hierarchical path-like keys (e.g., 2025/10/15), as shown in the sketch after this list.
  • Distributed Ingestion: Cayenne catalog now supports distributed ingestion across executor nodes in cluster mode, including UPDATE operations.
  • Improved Robustness: Fixed CDC edge case where DELETE + UPSERT sequences could produce duplicate primary keys across protected snapshots. Improved upsert handling during runtime restarts.
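
As a sketch of composite partitioning on a Cayenne-accelerated dataset (the dataset name, source path, and column names are illustrative; the list form of partition_by is as described above):

datasets:
  - from: s3://my-bucket/events/
    name: events
    acceleration:
      enabled: true
      engine: cayenne
      partition_by: [col1, col2]  # stored under hierarchical path-like keys, e.g. 2025/10/15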

DataFusion v52.2.0 Upgrade

Apache DataFusion has been upgraded to v52.2.0, bringing significant performance improvements, new query features, and enhanced extensibility.

Performance Improvements:

  • Faster CASE Expressions: Lookup-table-based evaluation for certain CASE expressions avoids repeated evaluation, accelerating common ETL patterns
  • MIN/MAX Aggregate Dynamic Filters: Queries with MIN/MAX aggregates now create dynamic filters during scan to prune files and rows as tighter bounds are discovered during execution
  • New Merge Join: Rewritten sort-merge join (SMJ) operator with speedups of three orders of magnitude in pathological cases (e.g., TPC-H Q21: minutes → milliseconds)
  • Caching Improvements: New statistics cache for file metadata avoids repeatedly recalculating statistics, significantly improving planning time. A prefix-aware list-files cache accelerates evaluating partition predicates for Hive partitioned tables
  • Improved Hash Join Filter Pushdown: Build-side hash map contents are now passed dynamically to probe-side scans for pruning files, row groups, and individual rows

Major Features:

  • Sort Pushdown to Scans: Sorts are pushed into data sources, enabling ~30x performance improvement on pre-sorted data with top-K queries. Parquet scans now reverse row group order for DESC queries on ASC-sorted files
  • TableProvider supports DELETE and UPDATE: New hooks for DELETE and UPDATE statements in the TableProvider trait, enabling Iceberg and Cayenne connectors to implement SQL DELETE and UPDATE operations
  • More Extensible SQL Planning: New RelationPlanner API for extending SQL planning for FROM clauses, enabling support for vendor-specific SQL dialects

DDL Support for Iceberg and Cayenne

SQL Schema Management: Spice now supports CREATE TABLE and DROP TABLE DDL operations for Iceberg and Cayenne catalogs via FlightSQL and the /v1/sql API. DML validation has been updated for catalog-level writability.
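
As an illustrative sketch (the catalog, table, and column names are hypothetical; the target catalog must permit DDL, for example via the new access: read_write_create mode):

CREATE TABLE my_iceberg_catalog.sales.orders (
    order_id   BIGINT,
    amount     DOUBLE,
    created_at TIMESTAMP
);

DROP TABLE my_iceberg_catalog.sales.orders;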

DuckLake Catalog & Data Connector

Lakehouse-Style Data Management: New DuckLake catalog and data connector enable lakehouse-style data management with DuckDB as the metadata catalog and object storage for data files. DuckLake provides ACID transactions, time travel, and schema evolution on top of Parquet files.

GCS Data Connector (Alpha)

Google Cloud Storage Support: New Google Cloud Storage data connector enables federated queries against data stored in GCS buckets, with Iceberg table support.

Rust CLI Rewrite

Unified Single-Binary Experience: The Spice CLI has been completely rewritten from Go to Rust, eliminating the Go dependency and providing a single spice binary built from the same codebase as spiced. This improves startup performance, reduces distribution size, and ensures consistent behavior between CLI and runtime.

Key Features:

  • Full Feature Parity: All 27+ CLI commands re-implemented in Rust with identical behavior
  • New spice query Command: Interactive REPL for async queries via the /v1/queries API with multi-line SQL input, spinner progress indicator, Ctrl+C cancellation, and partial query ID matching (see examples after this list)
  • --output=json Flag: Machine-readable JSON output for CLI commands, enabling scripting and automation
  • spice login --output: New output modes (env, json, keychain) for flexible credential management
  • spice cloud metrics: New command for Spice Cloud deployment metrics
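
A few illustrative invocations of the commands and flags listed above (application names and output choices are placeholders):

spice datasets --output=json        # machine-readable output for scripting
spice query                         # interactive REPL for async queries via /v1/queries
spice login --output env            # emit credentials in the env output mode
spice cloud metrics --app my-app    # Spice Cloud deployment metrics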

Models Included by Default

Local LLM/ML model inference (via mistral.rs) is now included in the default Spice build. The separate models build variant has been removed. This simplifies installation and ensures all users have access to local AI inference capabilities.

Error Propagation for Dataset and Model Status APIs

The /v1/datasets and /v1/models APIs now return structured error information when a component is in an Error state. The ?status=true query parameter must be passed to retrieve the real-time component status, including the error state and details. Previously, the status field only indicated Error with no further detail. Now, two new fields are included when ?status=true is specified:

  • error: A structured object with category, type, and code fields for programmatic error handling (e.g. { "category": "dataset", "type": "auth", "code": "dataset.auth" }).
  • error_message: A human-readable description of why the component entered an error state.

These fields are only present when ?status=true is passed and the component is in an error state.

Example /v1/datasets?status=true response:

[
  {
    "from": "postgres:syncs",
    "name": "daily_journal",
    "replication_enabled": false,
    "acceleration_enabled": true,
    "status": "Ready"
  },
  {
    "from": "databricks:hive_metastore.default.messages",
    "name": "messages",
    "replication_enabled": false,
    "acceleration_enabled": true,
    "status": "Error",
    "error": {
      "category": "dataset",
      "type": "auth",
      "code": "dataset.auth"
    },
    "error_message": "Unable to authenticate with datasource credentials"
  }
]

The spice datasets and spice models CLI commands now include an ERROR column that displays the error message for any component in an error state.

Additional Dependency Upgrades

| Dependency | Version |
|---|---|
| Ballista | v52.0.0 |
| DuckDB | v1.4.4 |
| delta_kernel | v0.18.2 |
| mistral.rs | v0.7.0 (candle fork removed, now uses candle 0.9.2 from crates.io) |
| Turso (libsql) | v0.4.4 |
| Vortex | Upgraded with CASE-WHEN support |
| AWS SDK | Multiple crates updated + APN user-agent support |

Other Improvements

  • Spicepod v2 Support: Spicepods now support version v2, and spice init generates spicepod.yaml files with version: v2 by default while maintaining backward compatibility for existing v1 spicepods.
  • x.ai Models: x.ai models now exclusively use the /v1/responses endpoint with rate limiting support.
  • HuggingFace Chat Templates: Added support for chat templates in HuggingFace model configurations.
  • Databricks SQL Dialect: Added Databricks SQL dialect for DataFusion unparser, improving federation query generation.
  • Snowflake: Added snowflake_private_key parameter for key-pair authentication.
  • Acceleration Metrics: New rows_written, bytes_written, and dataset_acceleration_size_bytes metrics for acceleration refresh ingestion.
  • Refresh SQL UDFs: Core scalar UDFs are now enabled in refresh SQL expressions.
  • FlightSQL: Fixed TLS connection handling for grpc+tls:// endpoints with custom CA certificate support.
  • FlightSQL: Fixed schema consistency by expanding view types and verifying field names.
  • Hash Index: Fixed query correctness when hash index is used with additional filters.
  • Results Cache: Fixed schema preservation for empty query results.
  • Query Nullability: Reconciled execution stream nullability with logical plan schema.
  • Schema Evolution: Graceful handling of schema evolution mismatch errors during data refresh.
  • Internal YAML Parser: Replaced deprecated serde_yaml with an internal YAML implementation.

Spicepod v1 to v2 Changes

Spicepod v2 introduces configuration improvements while maintaining backward compatibility with v1. Existing v1 spicepods continue to work; deprecated fields are automatically migrated at load time.

Version support:

| Version | Status |
|---|---|
| v2 | Default. Used by spice init. |
| v1 | Supported. Deprecated fields auto-migrate. |
| v1beta1 | Removed. No longer accepted. |

Configuration changes:

| v1 (deprecated) | v2 (preferred) | Notes |
|---|---|---|
| runtime.results_cache | runtime.caching.sql_results | All fields migrate automatically. cache_max_size → max_size. |
| runtime.memory_limit | runtime.query.memory_limit | Auto-migrated. query.memory_limit takes priority if both set. |
| runtime.temp_directory | runtime.query.temp_directory | Auto-migrated. query.temp_directory takes priority if both set. |
| dataset.invalid_type_action | dataset.unsupported_type_action | Auto-migrated. v2 adds a new string variant. |
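
For example, the results-cache migration in the first row maps roughly as follows (the field values shown are placeholders):

# v1 (deprecated)
runtime:
  results_cache:
    enabled: true
    cache_max_size: 128MiB

# v2 (preferred)
runtime:
  caching:
    sql_results:
      enabled: true
      max_size: 128MiB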

New v2 fields:

  • runtime.ready_state: Controls when the runtime reports ready (on_load default, or on_registration).
  • runtime.flight.do_put_rate_limit_enabled: Enable/disable FlightSQL DoPut rate limiting (default: true).
  • runtime.query.spill_compression: Compression for query spill files (e.g., lz4_frame).
  • runtime.scheduler.partition_management: Configure partition assignment interval, limits, and timeouts for distributed mode.
  • runtime.caching.sql_results.stale_while_revalidate_ttl: Serve stale cached results while revalidating in the background.
  • runtime.caching.sql_results.encoding: Cache entry compression (e.g., zstd).
  • catalog.access: read_write_create is a new access mode for catalogs that support DDL operations.
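
A sketch combining a few of these new fields in a v2 spicepod (the app name and TTL value are illustrative):

version: v2
kind: Spicepod
name: my_app

runtime:
  ready_state: on_registration
  caching:
    sql_results:
      enabled: true
      encoding: zstd
      stale_while_revalidate_ttl: 30s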

Migration note: When both the deprecated v1 field and its v2 equivalent are set, the v2 field takes priority.

Contributors

Breaking Changes

  • Cayenne and Distributed Query exit Beta: Beta warnings have been removed from documentation and code. Both features are now considered GA-ready.
  • Models included by default: The separate models build variant has been removed. Local LLM inference is now always included.
  • Spicepod version defaults to v2: New spicepods created with spice init now default to version: v2. Existing v1 spicepods remain supported, and v1beta1 is no longer accepted.
  • Windows native builds removed: Native Windows builds are no longer provided. Use WSL for local development instead.
  • Metric renames: accelerated_refresh metrics renamed to acceleration_refresh for consistency. last_refresh_time gauge renamed to include milliseconds unit.
  • Caching config renamed: ResultsCache replaced with SQLResultsCacheConfig in configuration.
  • DuckDB parameter rename: partitioned_write_flush_threshold renamed to partitioned_write_flush_threshold_rows.
  • v1/search API: The /v1/search API now always returns an array in matches, even for single results.
  • x.ai model endpoint: x.ai models now exclusively use the /v1/responses endpoint.
  • Error messages: Error messages across S3 Vectors, ScyllaDB, Snowflake, ClickHouse, and other components have been refactored for clarity and consistency.

Cookbook Updates

New and updated Spice Cookbook recipes:

  • Async Queries: Submit long-running queries asynchronously and retrieve results later.
  • DuckLake Catalog Connector: Use DuckLake for lakehouse-style data management with ACID transactions and time travel.

The Spice Cookbook includes 88 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v2.0.0-rc.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:2.0.0-rc.1 image:

docker pull spiceai/spiceai:2.0.0-rc.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 2.0.0-rc.1

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • Add TPC-DS integration tests with S3 source and PostgreSQL acceleration by @phillipleblanc in #9006
  • fix(tests): fix flaky/slow/failing unit tests by @phillipleblanc in #9009
  • fix: Update benchmark snapshots for DF51 upgrade by @app/github-actions in #9008
  • fix: add feature gate to rrf TEST_EMBEDDING_MODEL by @phillipleblanc in #9017
  • fix: features check by @phillipleblanc in #9014
  • fix: Enable Cayenne acceleration snapshots by @lukekim in #9020
  • URL table support by @lukekim in #9018
  • ScyllaDB key filter by @lukekim in #8997
  • fix: Schema mismatch when using column projection with HTTP caching by @phillipleblanc in #9021
  • Add more tests for HTTP caching with columns selection by @sgrebnov in #9025
  • HTTP cache snapshots: default to time_interval and fix snapshots_creation_policy: on_change by @sgrebnov in #9026
  • Fix duplicate snapshot creation on startup by @sgrebnov in #9029
  • Add ScyllaDB and SMB to the README table by @krinart in #9034
  • Remove waiting for runtime to be ready before creating snapshot by @krinart in #9033
  • Fix snapshot on_change policy to skip when no writes occurred by @sgrebnov in #9028
  • Release notes for release release/1.11.0-rc.2 by @krinart in #9016
  • ci: use arduino/setup-protoc for official protobuf compiler by @phillipleblanc in #9036
  • ci: install unzip on aarch64 runner for arduino/setup-protoc by @phillipleblanc in #9038
  • fix: don't fail release if upload to minio fails by @phillipleblanc in #9039
  • Add missing protoc step to setup-cc action by @krinart in #9041
  • fix: Update Search integration test snapshots by @app/github-actions in #9013
  • Fix formula_1 and codebase_community in bird-bench by @Jeadie in #9000
  • Cayenne S3 Express One Zone improvements by @lukekim in #9015
  • Add zlib1g-dev to CI by @lukekim in #9052
  • Improve validation and logging for hash indexes by @lukekim in #9047
  • Upgrade Vortex with CASE-WHEN by @lukekim in #9051
  • x.ai models now exclusively use /v1/responses endpoint by @lukekim in #9400
  • Improvements for snapshot schema comparison by @krinart in #9401
  • v2.0 breaking changes by @lukekim in #9233
  • Create PartitionManagementTask for scheduler to update accelerated table partition assignments by @Jeadie in #9378
  • refactor(Cayenne): route all write orchestration through CayenneDataSink by @sgrebnov in #9402
  • Refactor benchmark to use QueryExecutor trait by @Jeadie in #9418
  • feat: Add spidapter build and release workflow by @peasee in #9427
  • Testoperator: add support for api-key when connecting to external spice instance by @sgrebnov in #9421
  • Initial implementation of Ducklake catalog & data connectors by @lukekim in #9083
  • Require aws_lc_rs since jsonwebtoken upgrade by @Jeadie in #9426
  • feat: Add spidapter tool by @peasee in #9425
  • Add release notes for 1.11.2 patch release by @sgrebnov in #9430
  • feat(spidapter): integrate system-adapter-protocol with SCP provisioning by @phillipleblanc in #9434
  • Add DuckLake TPCH E2E workflow and federated Spicepod configuration by @lukekim in #9431
  • fix(spidapter): use Flight handshake auth instead of x-api-key header by @phillipleblanc in #9435
  • [spidapter] Keep only what sparks joy by @Jeadie in #9439
  • Refactor binary operator balancing by @Jeadie in #9424
  • feat: Add Iceberg DDL support (CREATE TABLE / DROP TABLE) for default catalog override by @phillipleblanc in #9440
  • Fix Flight SQL schema consistency: expand view types and verify field names by @sgrebnov in #9438
  • Update spidapter for new system-adapter-protocol by @sgrebnov in #9442
  • docs: fix typos and syntax errors in style guide and error handling docs by @cluster2600 in #9445
  • Add acceleration refresh ingestion metrics (rows_written, bytes_written) by @phillipleblanc in #9461
  • Refactor(Cayenne): Replace CatalogError and string based errors with Snafu errors by @sgrebnov in #9403
  • Replace deprecated claude-3-5-haiku-latest with claude-haiku-4-5 by @Jeadie in #9492
  • Fix #9481: Preserve schema in results cache for empty query results by @phillipleblanc in #9485
  • Fix partition by serializing by @Jeadie in #9474
  • query: reconcile execution stream nullability with logical plan schema by @phillipleblanc in #9486
  • initial spice-cloud-client crate and spice cloud metrics --app <app-name>. by @Jeadie in #9480
  • feat: Return dataset error message in datasets API by @peasee in #9487
  • Spicebench by @lukekim in #9447
  • build(deps): consolidate dependabot dependency updates by @phillipleblanc in #9504
  • fix(cluster): route non-partitioned accelerated tables in distributed mode by @phillipleblanc in #9508
  • Enable core scalar UDFs in refresh SQL by @sgrebnov in #9502
  • Fix metrics in Spidapter again by @Jeadie in #9497
  • fix(cluster): tolerate Completed->status propagation race in distributed query handle by @phillipleblanc in #9510
  • feat: Support distributed ingestion in cayenne catalog by @peasee in #9506
  • Fix Cayenne duplicate primary keys after DELETE + UPSERT CDC sequences by @krinart in #9494
  • fix(cluster): rewrite table scans inside subqueries for distributed execution by @phillipleblanc in #9518
  • fix: Set catalog mode to readwritecreate in spidapter by @peasee in #9519
  • Upgrade AWS SDK crates & set APN user-agent in AWS SDK credential bridge by @lukekim in #8328
  • feat(runtime): add runtime ready_state on_registration semantics by @lukekim in #9522
  • fix: Add spidapter post-setup retries by @peasee in #9526
  • Make partition discovery more robust and make initialization non-blocking by @sgrebnov in #9499
  • Make lint-rust-fix support targeted packages and features by @Jeadie in #9511
  • Handle new Cloud SCP API by @Jeadie in #9532
  • Refactor and simplify streaming benchmarks by @krinart in #9405
  • fix: ensure spidapter only increments attempts on failures by @peasee in #9534
  • feat: Support specifying app resources in spidapter by @peasee in #9536
  • test(runtime): Spice Cayenne DDL integration test by @lukekim in #9535
  • fix: Handle schema evolution mismatch errors during data refresh by @lukekim in #9527
  • fix: resolve clippy lint warnings by @phillipleblanc in #9547
  • pr-builds --tag <TAG> for build_and_release.yml by @Jeadie in #9507
  • Add --output flag to spice login with env/json/keychain modes by @Jeadie in #9541
  • Don't use 'PartitionedTableScanRewrite' in async distributed query by @Jeadie in #9548
  • feat(spidapter): add local backend mode with single executor by @phillipleblanc in #9531
  • support chat template in HF by @Jeadie in #9543
  • fix(cayenne): stream PK retention deletes and run OOM regression in CI by @phillipleblanc in #9533
  • cayenne: Staged append writes to prevent partial writes and data loss on stream error by @sgrebnov in #9491
  • AcceleratedTable::scan use FederatedTable::scan when ClusterRole::Scheduler by @Jeadie in #9550
  • Upgrade to delta-kernel-rs v0.18.2 by @lukekim in #9528
  • Run cayenne tests as part of PR CI by @sgrebnov in #9554
  • Upgrade to DataFusion v52.2.0 by @lukekim in #9419
  • Remove Snapshot Compaction + Add snapshot existence check by @krinart in #9523
  • Update dependencies by @lukekim in #9566
  • fix: Update benchmark snapshots by @app/github-actions in #9565
  • fix: Compare Cayenne table configuration on startup by @peasee in #9529
  • Make Refresh::refresh_sql more robust to alterations over time. by @Jeadie in #9549
  • fix: Update datafusion-table-providers dependency to latest revision by @lukekim in #9574
  • Unset AWS_ENDPOINT_URL when empty by @krinart in #9575
  • fix: allow BytesProcessedExec repartitioning for unordered input by @lukekim in #9540
  • Sanitize DataFusion errors by @lukekim in #9530
  • Add conditional logging for partition assignments by @Jeadie in #9577
  • use 'properly early exit on SIGTERM' by @Jeadie in #9573
  • Update datafusion to 52.2.0 by @phillipleblanc in #9582
  • Ensure we query one and only one partition per request by @Jeadie in #9416
  • feat: Add support for Spicepod version v2 by @lukekim in #9583
  • [SpiceDQ] Improve error messages; Avoid race condition on allocate_initial_partitions. by @Jeadie in #9579
  • Update ballista dependencies to latest 52.0.0 revision by @lukekim in #9581
  • Fix Databricks spark_connect mode always disabled by @phillipleblanc in #9586
  • Support partitioning in Arrow accelerator by @Jeadie in #9571
  • Fix spice query CLI response deserialization by @phillipleblanc in #9588
  • fix: Update benchmark snapshots by @app/github-actions in #9584
  • fix: Share RuntimeEnv across Cayenne read/write/delete paths for targeted list_files_cache invalidation by @sgrebnov in #9589
  • feat: Add file:// state_location support for async queries scheduler by @phillipleblanc in #9590
  • Update endgame links by @krinart in #9598

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.11.2...v2.0.0-rc.1

Spice v1.11.1 (Feb 10, 2026)

· 4 min read
Jack Eadie
Token Plumber at Spice AI

Announcing the release of Spice v1.11.1! 🛠️

v1.11.1 is a patch release improving Spice Cayenne accelerator reliability and performance, enhancing DynamoDB Streams and HTTP data connectors, and fixing issues in Federated Task History and FlightSQL.

What's New in v1.11.1

Spice Cayenne Accelerator Improvements

This release includes stability and performance fixes for the Spice Cayenne accelerator:

  • Row-based Deletion Logic: Refactored row-based delete operations to use per-file deletion vectors with RoaringBitmap. Deletion scans now use Vortex-native streaming with filter pushdown and project only row indices, achieving zero data I/O for delete operations.
  • Constraints & On Conflict: constraints and on_conflict configurations are now automatically inferred from federated table metadata, enabling datasets like DynamoDB to work without explicitly defining primary_key in the Spicepod.
  • Partitioned Table Deletion: Fixed an issue where DELETE operations on partitioned Cayenne tables failed.
  • Data Integrity: Fixed two issues with acceleration snapshot handling: protected snapshots are now included in conflict detection keyset scans (preventing duplicate key creation during append refresh), and snapshot cleanup no longer deletes protected snapshots.

Data Connector Improvements

  • DynamoDB Streams: Added automatic re-bootstrapping when the stream lag exceeds DynamoDB shard retention (24h). Configurable via the new lag_exceeds_shard_retention_behavior parameter with values error (default), ready_before_load, or ready_after_load (see the sketch after this list).
  • HTTP Connector: HTTP responses now include a response_status column (UInt16). 4xx responses (e.g., 404 Not Found) are treated as valid queryable data and cached normally. 5xx responses are retried with backoff, returned to the user, but excluded from the cache to prevent transient server errors from polluting cached results.
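
As a sketch of the new DynamoDB Streams parameter (the dataset definition around it is illustrative; only the parameter name and its values come from the note above):

datasets:
  - from: dynamodb:my_table          # dataset reference is illustrative
    name: my_table
    params:
      lag_exceeds_shard_retention_behavior: ready_before_load  # error (default) | ready_before_load | ready_after_load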

Other Improvements

  • Reliability: Added retries for SnapshotManager operations and general snapshot reliability improvements.
  • Reliability: Fixed handling of timestamp precision mismatches in query result caching.
  • Reliability: Fixed a double projection issue in federated task history queries that caused Schema error: project index out of bounds errors in cluster mode.
  • Developer Experience: Added cookie middleware support to the FlightSQL data connector.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates. The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.1 image:

docker pull spiceai/spiceai:1.11.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.1

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • Cayenne: row-based delete logic improvements by @sgrebnov in #9237
  • Proper support for constraints/on_conflict in Cayenne Acceleration by @krinart in #9335
  • Retries for SnapshotManager by @krinart in #9334
  • fix(cayenne): Include protected snapshots in conflict detection keyset scan by @sgrebnov in #9176
  • fix(cayenne): Fix data loss by preserving protected snapshots during cleanup by @sgrebnov in #9182
  • Simplify retention filter expressions before pushdown by @sgrebnov in #9244
  • Fix test_retention_complex_sql by @sgrebnov in #9270
  • runtime: avoid double projection in federated task history by @phillipleblanc in #9326
  • feat(http): Return all HTTP responses as data, skip caching 5xx by @sgrebnov in #9313
  • Snapshots Improvements by @krinart in #9318
  • fix(caching): Handle timestamp precision mismatch and add more tests by @sgrebnov in #9315
  • DynamoDB Streams Table Rebootstrapping by @krinart in #9305
  • Fix Cayenne partitioned table deletion support by @sgrebnov in #9267
  • FlightSQL: add cookie middleware support by @phillipleblanc in #9282
  • Apply SchemaCastScanExec before applying changes in process_upsert_batch by @krinart in #9297

Spice v1.11.0 (Jan 28, 2026)

· 58 min read
William Croxson
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-stable! ⚡

In Spice v1.11.0, Spice Cayenne reaches Beta status with acceleration snapshots, key-based deletion vectors, and Amazon S3 Express One Zone support. DataFusion has been upgraded to v51, along with Arrow v57.2 and iceberg-rust v0.8.0. v1.11 adds several DynamoDB & DynamoDB Streams improvements, such as JSON nesting, and brings significant improvements to Distributed Query with active-active schedulers and mTLS for enterprise-grade high availability and secure cluster communication.

This release also adds new SMB, NFS, and ScyllaDB Data Connectors (Alpha), Prepared Statements with full SDK support (gospice, spice-rs, spice-dotnet, spice-java, spice.js, and spicepy), Google LLM Support for expanded AI inference capabilities, and significant improvements to caching, observability, and Hash Indexing for Arrow Acceleration.

What's New in v1.11.0

Spice Cayenne Accelerator Reaches Beta

Spice Cayenne has been promoted to Beta status with acceleration snapshots support and numerous performance and stability improvements.

Key Enhancements:

  • Key-based Deletion Vectors: Improved deletion vector support using key-based lookups for more efficient data management and faster delete operations. Key-based deletion vectors are more memory-efficient than positional vectors for sparse deletions.
  • S3 Express One Zone Support: Store Cayenne data files in S3 Express One Zone for single-digit millisecond latency, ideal for latency-sensitive query workloads that require persistence.

Improved Reliability:

  • Resolved FuturesUnordered reentrant drop crashes
  • Fixed memory growth issues related to Vortex metrics allocation
  • Metadata catalog now properly respects cayenne_file_path location
  • Added warnings for unparseable configuration values

For more details, refer to the Cayenne Documentation.

DataFusion v51 Upgrade

Apache DataFusion has been upgraded to v51, bringing significant performance improvements, new SQL features, and enhanced observability.

DataFusion v51 ClickBench Performance

Performance Improvements:

  • Faster CASE Expression Evaluation: Expressions now short-circuit earlier, reuse partial results, and avoid unnecessary scattering, speeding up common ETL patterns
  • Better Defaults for Remote Parquet Reads: DataFusion now fetches the last 512KB of Parquet files by default, typically avoiding 2 I/O requests per file
  • Faster Parquet Metadata Parsing: Leverages Arrow 57's new thrift metadata parser for up to 4x faster metadata parsing

New SQL Features:

  • SQL Pipe Operators: Support for |> syntax for inline transforms
  • DESCRIBE <query>: Returns the schema of any query without executing it
  • Named Arguments in SQL Functions: PostgreSQL-style param => value syntax for scalar, aggregate, and window functions
  • Decimal32/Decimal64 Support: New Arrow types supported including aggregations like SUM, AVG, and MIN/MAX

Example pipe operator:

SELECT * FROM t
|> WHERE a > 10
|> ORDER BY b
|> LIMIT 5;

Improved Observability:

  • Improved EXPLAIN ANALYZE Metrics: New metrics including output_bytes, selectivity for filters, reduction_factor for aggregates, and detailed timing breakdowns

Arrow 57.2 Upgrade

Apache Arrow has been upgraded to v57.2, bringing major performance improvements and new capabilities.

Arrow 57 Parquet Metadata Parsing Performance

Key Features:

  • 4x Faster Parquet Metadata Parsing: A rewritten thrift metadata parser delivers up to 4x faster metadata parsing, especially beneficial for low-latency use cases and files with large amounts of metadata
  • Parquet Variant Support: Experimental support for reading and writing the new Parquet Variant type for semi-structured data, including shredded variant values
  • Parquet Geometry Support: Read and write support for Parquet Geometry types (GEOMETRY and GEOGRAPHY) with GeospatialStatistics
  • New arrow-avro Crate: Efficient conversion between Apache Avro and Arrow RecordBatches with projection pushdown and vectorized execution support

DynamoDB Connector Enhancements

  • Added JSON nesting for DynamoDB Streams
  • Improved batch deletion handling

Distributed Query Improvements

High Availability Clusters: Spice now supports running multiple active schedulers in an active/active configuration for production deployments. This eliminates the scheduler as a single point of failure and enables graceful handling of node failures.

  • Multiple schedulers run simultaneously, each capable of accepting queries
  • Schedulers coordinate via a shared S3-compatible object store
  • Executors discover all schedulers automatically
  • A load balancer distributes client queries across schedulers

Example HA configuration:

runtime:
  scheduler:
    state_location: s3://my-bucket/spice-cluster
    params:
      region: us-east-1

mTLS Verification: Cluster communication between scheduler and executors now supports mutual TLS verification for enhanced security.

Credential Propagation: S3, ABFS, and GCS credentials are now automatically propagated to executors in cluster mode, enabling access to cloud storage across the distributed query cluster.

Improved Resilience:

  • Exponential backoff for scheduler disconnection recovery
  • Increased gRPC message size limit from 16MB to 100MB for large query plans
  • HTTP health endpoint for cluster executors
  • Automatic executor role inference when --scheduler-address is provided

For more details, refer to the Distributed Query Documentation.

iceberg-rust v0.8.0 Upgrade

Spice has been upgraded to iceberg-rust v0.8.0, bringing improved Iceberg table support.

Key Features:

  • V3 Metadata Support: Full support for Iceberg V3 table metadata format
  • INSERT INTO Partitioned Tables: DataFusion integration now supports inserting data into partitioned Iceberg tables
  • Improved Delete File Handling: Better support for position and equality delete files, including shared delete file loading and caching
  • SQL Catalog Updates: Implement update_table and register_table for SQL catalog
  • S3 Tables Catalog: Implement update_table for S3 Tables catalog
  • Enhanced Arrow Integration: Convert Arrow schema to Iceberg schema with auto-assigned field IDs, _file column support, and Date32 type support

Acceleration Snapshots

Acceleration snapshots enable point-in-time recovery and data versioning for accelerated datasets. Snapshots capture the state of accelerated data at specific points, allowing for fast bootstrap recovery and rollback capabilities.

Key Features:

  • Flexible Triggers: Configure when snapshots are created based on time intervals or stream batch counts
  • Automatic Compaction: Reduce storage overhead by compacting older snapshots (DuckDB only)
  • Bootstrap Integration: Snapshots can reset cache expiry on load for seamless recovery (DuckDB with Caching refresh mode)
  • Smart Creation Policies: Only create snapshots when data has actually changed

Example configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      snapshots: enabled
      snapshots_trigger: time_interval
      snapshots_trigger_threshold: 1h
      snapshots_creation_policy: on_changed

Snapshots API and CLI: New API endpoints and CLI commands for managing snapshots programmatically.

CLI Commands:

# List all snapshots for a dataset
spice acceleration snapshots taxi_trips

# Get details of a specific snapshot
spice acceleration snapshot taxi_trips 3

# Set the current snapshot for rollback (requires runtime restart)
spice acceleration set-snapshot taxi_trips 2

HTTP API Endpoints:

| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/datasets/{dataset}/acceleration/snapshots | List all snapshots for a dataset |
| GET | /v1/datasets/{dataset}/acceleration/snapshots/{id} | Get details of a specific snapshot |
| POST | /v1/datasets/{dataset}/acceleration/snapshots/current | Set the current snapshot for rollback |
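
For example, using the endpoints above (the host, port, dataset name, and POST body shape are assumptions):

# List all snapshots for the taxi_trips dataset
curl http://localhost:8090/v1/datasets/taxi_trips/acceleration/snapshots

# Roll back to a specific snapshot
curl -X POST http://localhost:8090/v1/datasets/taxi_trips/acceleration/snapshots/current \
  -d '{"snapshot_id": 2}'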

For more details, refer to the Acceleration Snapshots Documentation.

Caching Acceleration Mode Improvements

The Caching Acceleration Mode introduced in v1.10.0 has received significant performance optimizations and reliability fixes in this release.

Performance Optimizations:

  • Non-blocking Cache Writes: Cache misses no longer block query responses. Data is written to the cache asynchronously after the query returns, reducing query latency for cache miss scenarios.
  • Batch Cache Writes: Multiple cache entries are now written in batches rather than individually, significantly improving write throughput for high-volume cache operations.

Reliability Fixes:

  • Correct SWR Refresh Behavior: The stale-while-revalidate (SWR) pattern now correctly refreshes only the specific entries that were accessed instead of refreshing all stale rows in the dataset. This prevents unnecessary source queries and reduces load on upstream data sources.
  • Deduplicated Refresh Requests: Fixed an issue where JSON array responses could trigger multiple redundant refresh operations. Refresh requests are now properly deduplicated.
  • Fixed Cache Hit Detection: Resolved an issue where queries that didn't include fetched_at in their projection would always result in cache misses, even when cached data was available.
  • Unfiltered Query Optimization: SELECT * queries without filters now return cached data directly without unnecessary filtering overhead.

For more details, refer to the Caching Acceleration Mode Documentation.

Prepared Statements

Improved Query Performance and Security: Spice now supports prepared statements, enabling parameterized queries that improve both performance through query plan caching and security by preventing SQL injection attacks.

Key Features:

  • Query Plan Caching: Prepared statements cache query plans, reducing planning overhead for repeated queries
  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Arrow Flight SQL Support: Full prepared statement support via Arrow Flight SQL protocol

SDK Support:

| SDK | Support | Min Version | Method |
|---|---|---|---|
| gospice (Go) | ✅ Full | v8.0.0+ | SqlWithParams() with typed constructors (Int32Param, StringParam, TimestampParam, etc.) |
| spice-rs (Rust) | ✅ Full | v3.0.0+ | query_with_params() with RecordBatch parameters |
| spice-dotnet (.NET) | ✅ Full | v0.3.0+ | QueryWithParams() with typed parameter builders |
| spice-java (Java) | ✅ Full | v0.5.0+ | queryWithParams() with typed Param constructors (Param.int64(), Param.string(), etc.) |
| spice.js (JavaScript) | ✅ Full | v3.1.0+ | query() with parameterized query support |
| spicepy (Python) | ✅ Full | v3.1.0+ | query() with parameterized query support |

Example (Go):

import "github.com/spiceai/gospice/v8"

client, _ := spice.NewClient()
defer client.Close()

// Parameterized query with typed parameters
results, _ := client.SqlWithParams(ctx,
    "SELECT * FROM products WHERE price > $1 AND category = $2",
    spice.Float64Param(10.0),
    spice.StringParam("electronics"),
)

Example (Java):

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader inferredReader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        10.0, "electronics");

    // With explicit typed parameters
    ArrowReader typedReader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        Param.float64(10.0),
        Param.string("electronics"));
}

For more details, refer to the Parameterized Queries Documentation.

Spice Java SDK v0.5.0

Parameterized Query Support for Java: The Spice Java SDK v0.5.0 introduces parameterized queries using ADBC (Arrow Database Connectivity), providing a safer and more efficient way to execute queries with dynamic parameters.

Key Features:

  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Automatic Type Inference: Java types are automatically mapped to Arrow types (e.g., double → Float64, String → Utf8)
  • Explicit Type Control: Use the new Param class with typed factory methods (Param.int64(), Param.string(), Param.decimal128(), etc.) for precise control over Arrow types
  • Updated Dependencies: Apache Arrow Flight SQL upgraded to 18.3.0, plus new ADBC driver support

Example:

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;

import java.math.BigDecimal;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader inferredReader = client.queryWithParams(
        "SELECT * FROM taxi_trips WHERE trip_distance > $1 LIMIT 10",
        5.0);

    // With explicit typed parameters for precise control
    ArrowReader typedReader = client.queryWithParams(
        "SELECT * FROM orders WHERE order_id = $1 AND amount >= $2",
        Param.int64(12345),
        Param.decimal128(new BigDecimal("99.99"), 10, 2));
}

Maven:

<dependency>
    <groupId>ai.spice</groupId>
    <artifactId>spiceai</artifactId>
    <version>0.5.0</version>
</dependency>

For more details, refer to the Spice Java SDK Repository.

Google LLM Support

Expanded AI Provider Support: Spice now supports Google embedding and chat models via the Google AI provider, expanding the available LLM options for AI inference workloads alongside existing providers like OpenAI, Anthropic, and AWS Bedrock.

Key Features:

  • Google Chat Models: Access Google's Gemini models for chat completions
  • Google Embeddings: Generate embeddings using Google's text embedding models
  • Unified API: Use the same OpenAI-compatible API endpoints for all LLM providers

Example spicepod.yaml configuration:

models:
  - from: google:gemini-2.0-flash
    name: gemini
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

embeddings:
  - from: google:text-embedding-004
    name: google_embeddings
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

For more details, refer to the Google LLM Documentation (see docs PR #1286).

URL Tables

Query data sources directly via URL in SQL without prior dataset registration. Supports S3, Azure Blob Storage, and HTTP/HTTPS URLs with automatic format detection and partition inference.

Supported Patterns:

  • Single files: SELECT * FROM 's3://bucket/data.parquet'
  • Directories/prefixes: SELECT * FROM 's3://bucket/data/'
  • Glob patterns: SELECT * FROM 's3://bucket/year=*/month=*/data.parquet'

Key Features:

  • Automatic file format detection (Parquet, CSV, JSON, etc.)
  • Hive-style partition inference with filter pushdown
  • Schema inference from files
  • Works with both SQL and DataFrame APIs

Example with hive partitioning:

-- Partitions are automatically inferred from paths
SELECT * FROM 's3://bucket/data/' WHERE year = '2024' AND month = '01'

Enable via spicepod.yml:

runtime:
  params:
    url_tables: enabled

Cluster Mode Async Query APIs (experimental)

New asynchronous query APIs for long-running queries in cluster mode:

  • /v1/queries endpoint: Submit queries and retrieve results asynchronously

OpenTelemetry Improvements

Unified Telemetry Endpoint: OTel metrics ingestion has been consolidated to the Flight port (50051), simplifying deployment by removing the separate OTel port (50052). The push-based metrics exporter continues to support integration with OpenTelemetry collectors.

Note: This is a breaking change. Update your configurations if you were using the dedicated OTel port 50052. Internal cluster communication now uses port 50052 exclusively.

Observability Improvements

Enhanced Dashboards: Updated Grafana and Datadog example dashboards with:

  • Snapshot monitoring widgets
  • Improved accelerated datasets section
  • Renamed ingestion lag charts for clarity

Additional Histogram Buckets: Added more buckets to histogram metrics for better latency distribution visibility.

For more details, refer to the Monitoring Documentation.

Hash Indexing for Arrow Acceleration (experimental)

Arrow-based accelerations now support hash indexing for faster point lookups on equality predicates. Hash indexes provide O(1) average-case lookup performance for columns with high cardinality.

Features:

  • Primary key hash index support
  • Secondary index support for non-primary key columns
  • Composite key support with proper null value handling

Example configuration:

datasets:
  - from: postgres:users
    name: users
    acceleration:
      enabled: true
      engine: arrow
      primary_key: user_id
      indexes:
        '(tenant_id, user_id)': unique # Composite hash index

For more details, refer to the Hash Index Documentation.

SMB and NFS Data Connectors

Network-Attached Storage Connectors: New data connectors for SMB (Server Message Block) and NFS (Network File System) protocols enable direct federated queries against network-attached storage without requiring data movement to cloud object stores.

Key Features:

  • SMB Protocol Support: Connect to Windows file shares and Samba servers with authentication support
  • NFS Protocol Support: Connect to Unix/Linux NFS exports for direct data access
  • Federated Queries: Query Parquet, CSV, JSON, and other file formats directly from network storage with full SQL support
  • Acceleration Support: Accelerate data from SMB/NFS sources using DuckDB, Spice Cayenne, or other accelerators

Example spicepod.yaml configuration:

datasets:
  # SMB share
  - from: smb://fileserver/share/data.parquet
    name: smb_data
    params:
      smb_username: ${secrets:SMB_USER}
      smb_password: ${secrets:SMB_PASS}

  # NFS export
  - from: nfs://nfsserver/export/data.parquet
    name: nfs_data

For more details, refer to the Data Connectors Documentation.

ScyllaDB Data Connector

A new data connector for ScyllaDB, the high-performance NoSQL database compatible with Apache Cassandra. Query ScyllaDB tables directly or accelerate them for faster analytics.

Example configuration:

datasets:
  - from: scylladb:my_keyspace.my_table
    name: scylla_data
    acceleration:
      enabled: true
      engine: duckdb

For more details, refer to the ScyllaDB Data Connector Documentation.

Flight SQL TLS Connection Fixes

TLS Connection Support: Fixed TLS connection issues when using grpc+tls:// scheme with Flight SQL endpoints. Added support for custom CA certificate files via the new flightsql_tls_ca_certificate_file parameter.
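
A sketch of the new parameter in a Spicepod dataset (the table reference and the endpoint parameter name are illustrative assumptions; only flightsql_tls_ca_certificate_file comes from this note):

datasets:
  - from: flightsql:my_table           # table reference is illustrative
    name: remote_table
    params:
      flightsql_endpoint: grpc+tls://flight.example.com:443   # assumed parameter name
      flightsql_tls_ca_certificate_file: /etc/ssl/certs/internal-ca.pem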

Developer Experience Improvements

  • Turso v0.3.2 Upgrade: Upgraded Turso accelerator for improved performance and reliability
  • Rust 1.91 Upgrade: Updated to Rust 1.91 for latest language features and performance improvements
  • Spice Cloud CLI: Added spice cloud CLI commands for cloud deployment management
  • Improved Spicepod Schema: Improved JSON schema generation for better IDE support and validation
  • Acceleration Snapshots: Added configurable snapshots_create_interval for periodic acceleration snapshots independent of refresh cycles
  • Tiered Caching with Localpod: The Localpod connector now supports caching refresh mode, enabling multi-layer acceleration where a persistent cache feeds a fast in-memory cache
  • GitHub Data Connector: Added workflows and workflow runs support for GitHub repositories
  • NDJSON/LDJSON Support: Added support for Newline Delimited JSON and Line Delimited JSON file formats

Additional Improvements & Bug Fixes

  • Model Listing: New functionality to list available models across multiple AI providers
  • DuckDB Partitioned Tables: Primary key constraints now supported in partitioned DuckDB table mode
  • Post-refresh Sorting: New on_refresh_sort_columns parameter for DuckDB enables data ordering after writes
  • Improved Install Scripts: Removed jq dependency and improved cross-platform compatibility
  • Better Error Messages: Improved error messaging for bucket UDF arguments and deprecated OpenAI parameters
  • Reliability: Fixed DynamoDB IAM role authentication with new dynamodb_auth: iam_role parameter
  • Reliability: Fixed cluster executors to use scheduler's temp_directory parameter for shuffle files
  • Reliability: Initialize secrets before object stores in cluster executor mode
  • Reliability: Added page-level retry with backoff for transient GitHub GraphQL errors
  • Performance: Improved statistics for rewritten DistributeFileScanOptimizer plans
  • Developer Experience: Added max_message_size configuration for Flight service

Contributors

Breaking Changes

OTel Ingestion Port Change

OTel ingestion has been moved to the Flight port (50051), removing the separate OTel port 50052. Port 50052 is now used exclusively for internal cluster communication. Update your configurations if you were using the dedicated OTel port.

Distributed Query Cluster Mode Requires mTLS

Distributed query cluster mode now requires mTLS for secure communication between cluster nodes. This is a security enhancement to prevent unauthorized nodes from joining the cluster and accessing secrets.

Migration Steps:

  1. Generate certificates using spice cluster tls init and spice cluster tls add
  2. Update scheduler and executor startup commands with --node-mtls-* arguments
  3. For development/testing, use --allow-insecure-connections to opt out of mTLS
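
For example, following steps 1 and 2 above (file paths, addresses, and the executor role value are illustrative):

# Generate and register cluster certificates
spice cluster tls init
spice cluster tls add

# Start an executor with mTLS enabled
spiced --role executor \
  --scheduler-address scheduler.internal:50052 \
  --node-mtls-ca-certificate-file ./certs/ca.pem \
  --node-mtls-certificate-file ./certs/executor.pem \
  --node-mtls-key-file ./certs/executor.key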

Renamed CLI Arguments:

| Old Name | New Name |
|---|---|
| --cluster-mode | --role |
| --cluster-ca-certificate-file | --node-mtls-ca-certificate-file |
| --cluster-certificate-file | --node-mtls-certificate-file |
| --cluster-key-file | --node-mtls-key-file |
| --cluster-address | --node-bind-address |
| --cluster-advertise-address | --node-advertise-address |
| --cluster-scheduler-url | --scheduler-address |

Removed CLI Arguments:

  • --cluster-api-key: Replaced by mTLS authentication

Cookbook Updates

New ScyllaDB Data Connector Recipe: New recipe demonstrating how to use the ScyllaDB Data Connector. See ScyllaDB Data Connector Recipe for details.

New SMB Data Connector Recipe: New recipe demonstrating how to use the SMB Data Connector. See SMB Data Connector Recipe for details.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.0 image:

docker pull spiceai/spiceai:1.11.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.0

AWS Marketplace:

Spice is available in the AWS Marketplace.

Dependencies

What's Changed

Changelog

Spice v1.11.0-rc.3 (Jan 23, 2026)

· 2 min read
Viktor Yershov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-rc.3! ⭐

v1.11.0-rc.3 is a patch release that includes improvements to Hash Indexing for Arrow Acceleration and fixes for TLS connections with Flight SQL endpoints.

What's New in v1.11.0-rc.3

Hash Indexing for Arrow Acceleration (experimental)

Arrow-based accelerations now support hash indexing for faster point lookups on equality predicates. Hash indexes provide O(1) average-case lookup performance for columns with high cardinality.

Features:

  • Primary key hash index support
  • Secondary index support for non-primary key columns
  • Composite key support with proper null value handling

Example configuration:

datasets:
  - from: postgres:users
    name: users
    acceleration:
      enabled: true
      engine: arrow
      primary_key: user_id
      indexes:
        '(tenant_id, user_id)': unique # Composite hash index

For more details, refer to the Hash Index Documentation.

Flight SQL TLS Connection Fixes

TLS Connection Support: Fixed TLS connection issues when using grpc+tls:// scheme with Flight SQL endpoints. Added support for custom CA certificate files via the new flightsql_tls_ca_certificate_file parameter.

Contributors

Breaking Changes

No breaking changes.

Cookbook Updates

No major cookbook updates.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgrading

To upgrade to v1.11.0-rc.3, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:v1.11.0-rc.3 image:

docker pull spiceai/spiceai:v1.11.0-rc.3

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.0-rc.3

AWS Marketplace:

Spice is available in the AWS Marketplace.

What's Changed

Changelog

  • Hash indexing for Arrow Acceleration by @lukekim in #8924
  • Improve validation and logging for hash indexes @lukekim in #9047
  • Fix TLS connection for grpc+tls:// Flight SQL endpoints and add custom CA certificate support @phillipleblanc in #9073

Spice v1.11.0-rc.2 (Jan 22, 2026)

· 24 min read
Viktor Yershov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-rc.2! ⭐

v1.11.0-rc.2 is the second release candidate for advanced testing of v1.11. It brings Spice Cayenne to Beta status with acceleration snapshots support, adds a new ScyllaDB Data Connector, and upgrades to DataFusion v51, Arrow 57.2, and iceberg-rust v0.8.0. It also includes significant improvements to distributed query, caching, and observability.

What's New in v1.11.0-rc.2

Spice Cayenne Accelerator Reaches Beta

Spice Cayenne has been promoted to Beta status with acceleration snapshots support and numerous stability improvements.

Improved Reliability:

  • Fixed timezone database issues in Docker images that caused acceleration panics
  • Resolved FuturesUnordered reentrant drop crashes
  • Fixed memory growth issues related to Vortex metrics allocation
  • Metadata catalog now properly respects cayenne_file_path location
  • Added warnings for unparseable configuration values

Example configuration with snapshots:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      snapshots: enabled

DataFusion v51 Upgrade

Apache DataFusion has been upgraded to v51, bringing significant performance improvements, new SQL features, and enhanced observability.

DataFusion v51 ClickBench Performance

Performance Improvements:

  • Faster CASE Expression Evaluation: Expressions now short-circuit earlier, reuse partial results, and avoid unnecessary scattering, speeding up common ETL patterns
  • Better Defaults for Remote Parquet Reads: DataFusion now fetches the last 512KB of Parquet files by default, typically avoiding 2 I/O requests per file
  • Faster Parquet Metadata Parsing: Leverages Arrow 57's new thrift metadata parser for up to 4x faster metadata parsing

New SQL Features:

  • SQL Pipe Operators: Support for |> syntax for inline transforms
  • DESCRIBE <query>: Returns the schema of any query without executing it
  • Named Arguments in SQL Functions: PostgreSQL-style param => value syntax for scalar, aggregate, and window functions
  • Decimal32/Decimal64 Support: New Arrow types supported including aggregations like SUM, AVG, and MIN/MAX

Example pipe operator:

SELECT * FROM t
|> WHERE a > 10
|> ORDER BY b
|> LIMIT 5;

Improved Observability:

  • Improved EXPLAIN ANALYZE Metrics: New metrics including output_bytes, selectivity for filters, reduction_factor for aggregates, and detailed timing breakdowns

Arrow 57.2 Upgrade

Spice has been upgraded to Apache Arrow Rust 57.2.0, bringing major performance improvements and new capabilities.

Arrow 57 Parquet Metadata Parsing Performance

Key Features:

  • 4x Faster Parquet Metadata Parsing: A rewritten thrift metadata parser delivers up to 4x faster metadata parsing, especially beneficial for low-latency use cases and files with large amounts of metadata
  • Parquet Variant Support: Experimental support for reading and writing the new Parquet Variant type for semi-structured data, including shredded variant values
  • Parquet Geometry Support: Read and write support for Parquet Geometry types (GEOMETRY and GEOGRAPHY) with GeospatialStatistics
  • New arrow-avro Crate: Efficient conversion between Apache Avro and Arrow RecordBatches with projection pushdown and vectorized execution support

iceberg-rust v0.8.0 Upgrade

Spice has been upgraded to iceberg-rust v0.8.0, bringing improved Iceberg table support.

Key Features:

  • V3 Metadata Support: Full support for Iceberg V3 table metadata format
  • INSERT INTO Partitioned Tables: DataFusion integration now supports inserting data into partitioned Iceberg tables
  • Improved Delete File Handling: Better support for position and equality delete files, including shared delete file loading and caching
  • SQL Catalog Updates: Implement update_table and register_table for SQL catalog
  • S3 Tables Catalog: Implement update_table for S3 Tables catalog
  • Enhanced Arrow Integration: Convert Arrow schema to Iceberg schema with auto-assigned field IDs, _file column support, and Date32 type support

Acceleration Snapshotsโ€‹

Acceleration snapshots enable point-in-time recovery and data versioning for accelerated datasets. Snapshots capture the state of accelerated data at specific points, allowing for fast bootstrap recovery and rollback capabilities.

Key Feature Improvements in v1.11:

  • Flexible Triggers: Configure when snapshots are created based on time intervals or stream batch counts
  • Automatic Compaction: Reduce storage overhead by compacting older snapshots (DuckDB only)
  • Bootstrap Integration: Snapshots can reset cache expiry on load for seamless recovery (DuckDB with Caching refresh mode)
  • Smart Creation Policies: Only create snapshots when data has actually changed

Example configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      snapshots: enabled
      snapshots_trigger: time_interval
      snapshots_trigger_threshold: 1h
      snapshots_creation_policy: on_changed

Snapshots API and CLI: New API endpoints and CLI commands for managing snapshots programmatically. List, create, and restore snapshots directly from the command line or via HTTP.

For more details, refer to the Acceleration Snapshots Documentation.

ScyllaDB Data Connectorโ€‹

A new data connector for ScyllaDB, the high-performance NoSQL database compatible with Apache Cassandra. Query ScyllaDB tables directly or accelerate them for faster analytics.

Example configuration:

datasets:
  - from: scylladb:my_keyspace.my_table
    name: scylla_data
    acceleration:
      enabled: true
      engine: duckdb

For more details, refer to the ScyllaDB Data Connector Documentation.

Distributed Query Improvementsโ€‹

mTLS Verification: Cluster communication between scheduler and executors now supports mutual TLS verification for enhanced security.

Credential Propagation: Azure and GCS credentials are now automatically propagated to executors in cluster mode, enabling access to cloud storage across the distributed query cluster.

Improved Resilience:

  • Exponential backoff for scheduler disconnection recovery
  • Increased gRPC message size limit from 16MB to 100MB for large query plans
  • HTTP health endpoint for cluster executors
  • Automatic executor role inference when --scheduler-address is provided

For more details, refer to the Distributed Query Documentation.

Caching Acceleration Mode Improvementsโ€‹

The Caching Acceleration Mode introduced in v1.10.0 has received significant performance optimizations and reliability fixes in this release.

Performance Optimizations:

  • Non-blocking Cache Writes: Cache misses no longer block query responses. Data is written to the cache asynchronously after the query returns, reducing query latency for cache miss scenarios.
  • Batch Cache Writes: Multiple cache entries are now written in batches rather than individually, significantly improving write throughput for high-volume cache operations.

Reliability Fixes:

  • Correct SWR Refresh Behavior: The stale-while-revalidate (SWR) pattern now correctly refreshes only the specific entries that were accessed instead of refreshing all stale rows in the dataset. This prevents unnecessary source queries and reduces load on upstream data sources.
  • Deduplicated Refresh Requests: Fixed an issue where JSON array responses could trigger multiple redundant refresh operations. Refresh requests are now properly deduplicated.
  • Fixed Cache Hit Detection: Resolved an issue where queries that didn't include fetched_at in their projection would always result in cache misses, even when cached data was available.
  • Unfiltered Query Optimization: SELECT * queries without filters now return cached data directly without unnecessary filtering overhead.

For more details, refer to the Caching Acceleration Mode Documentation.

DynamoDB Connector Enhancementsโ€‹

  • Added JSON nesting for DynamoDB Streams
  • Proper batch deletion handling

URL Tablesโ€‹

Query data sources directly via URL in SQL without prior dataset registration. Supports S3, Azure Blob Storage, and HTTP/HTTPS URLs with automatic format detection and partition inference.

Supported Patterns:

  • Single files: SELECT * FROM 's3://bucket/data.parquet'
  • Directories/prefixes: SELECT * FROM 's3://bucket/data/'
  • Glob patterns: SELECT * FROM 's3://bucket/year=*/month=*/data.parquet'

Key Features:

  • Automatic file format detection (Parquet, CSV, JSON, etc.)
  • Hive-style partition inference with filter pushdown
  • Schema inference from files
  • Works with both SQL and DataFrame APIs

Example with hive partitioning:

-- Partitions are automatically inferred from paths
SELECT * FROM 's3://bucket/data/' WHERE year = '2024' AND month = '01'

Enable via spicepod.yml:

runtime:
  params:
    url_tables: enabled

Cluster Mode Async Query APIs (experimental)โ€‹

New asynchronous query APIs for long-running queries in cluster mode:

  • /v1/queries endpoint: Submit queries and retrieve results asynchronously
  • Arrow Flight async support: Non-blocking query execution via Arrow Flight protocol
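
As an illustrative sketch only, a long-running query might be submitted to the experimental /v1/queries endpoint over HTTP on the default API port (8090); the JSON body shape shown here is an assumption, not the documented schema:

# Submit a query asynchronously (request body shape is assumed for illustration)
curl -X POST http://localhost:8090/v1/queries \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT COUNT(*) FROM my_dataset"}'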

Observability Improvementsโ€‹

Enhanced Dashboards: Updated Grafana and Datadog example dashboards with:

  • Snapshot monitoring widgets
  • Improved accelerated datasets section
  • Renamed ingestion lag charts for clarity

Additional Histogram Buckets: Added more buckets to histogram metrics for better latency distribution visibility.

For more details, refer to the Monitoring Documentation.

Additional Improvementsโ€‹

  • Model Listing: New functionality to list available models across multiple AI providers
  • DuckDB Partitioned Tables: Primary key constraints now supported in partitioned DuckDB table mode
  • Post-refresh Sorting: New on_refresh_sort_columns parameter for DuckDB enables data ordering after writes (see the sketch after this list)
  • Improved Install Scripts: Removed jq dependency and improved cross-platform compatibility
  • Better Error Messages: Improved error messaging for bucket UDF arguments and deprecated OpenAI parameters
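
A minimal sketch of the new on_refresh_sort_columns parameter for a DuckDB acceleration (the placement under acceleration params and the column value are assumptions):

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      params:
        on_refresh_sort_columns: created_at # assumed: column(s) to order data by after refresh writes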

Contributorsโ€‹

Breaking Changesโ€‹

No breaking changes.

Cookbook Updatesโ€‹

New ScyllaDB Data Connector Recipe: New recipe demonstrating how to use the ScyllaDB Data Connector. See ScyllaDB Data Connector Recipe for details.

New SMB Data Connector Recipe: New recipe demonstrating how to use the SMB Data Connector. See SMB Data Connector Recipe for details.

The Spice Cookbook includes 86 recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To upgrade to v1.11.0-rc.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:v1.11.0-rc.2 image:

docker pull spiceai/spiceai:v1.11.0-rc.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

Spice is available in the AWS Marketplace.

Dependenciesโ€‹

Changelogโ€‹

Spice v1.11.0-rc.1 (Jan 6, 2026)

· 17 min read
Evgenii Khramkov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.11.0-rc.1! ⭐

v1.11.0-rc.1 is the first release candidate for early testing of v1.11. It includes Distributed Query with mTLS for enterprise-grade secure cluster communication, new SMB and NFS Data Connectors for direct network-attached storage access, Prepared Statements for improved query performance and security, Cayenne Accelerator enhancements with key-based deletion vectors and Amazon S3 Express One Zone support, Google LLM support for expanded AI inference capabilities, and the Spice Java SDK v0.5.0 with parameterized query support.

What's New in v1.11.0-rc.1โ€‹

Distributed Query with mTLSโ€‹

Enterprise-Grade Secure Cluster Communication: Distributed query cluster mode now enables mutual TLS (mTLS) by default for secure communication between schedulers and executors. Internal cluster communication includes highly privileged RPC calls like fetching Spicepod configuration and expanding secrets. mTLS ensures only authenticated nodes can join the cluster and access sensitive data.

Key Features:

  • Mutual TLS Authentication: All executor-to-scheduler and executor-to-executor gRPC connections on the internal cluster port (50052) are secured with mTLS, preventing unauthorized nodes from joining the cluster
  • Certificate Management CLI: New spice cluster tls init and spice cluster tls add developer commands for generating CA certificates and node certificates with proper SANs (Subject Alternative Names)
  • Simplified CLI Arguments: Renamed cluster arguments for clarity (--role, --scheduler-address, --node-mtls-*), with --scheduler-address implying --role executor
  • Port Separation: Public services (Flight queries, HTTP API, Prometheus metrics) remain on ports 50051, 8090, and 9090 respectively, while internal cluster services (SchedulerGrpcServer, ClusterService) are isolated on port 50052 with mTLS enforced
  • Development Mode: Use --allow-insecure-connections flag to disable mTLS requirement for local development and testing

Quick Start:

# Generate certificates for development
spice cluster tls init
spice cluster tls add scheduler1
spice cluster tls add executor1

# Start scheduler
spiced --role scheduler \
--node-mtls-ca-certificate-file ca.crt \
--node-mtls-certificate-file scheduler1.crt \
--node-mtls-key-file scheduler1.key

# Start executor
spiced --role executor \
--scheduler-address https://scheduler1:50052 \
--node-mtls-ca-certificate-file ca.crt \
--node-mtls-certificate-file executor1.crt \
--node-mtls-key-file executor1.key

For more details, refer to the Distributed Query Documentation.

SMB and NFS Data Connectorsโ€‹

Network-Attached Storage Connectors: New data connectors for SMB (Server Message Block) and NFS (Network File System) protocols enable direct federated queries against network-attached storage without requiring data movement to cloud object stores.

Key Features:

  • SMB Protocol Support: Connect to Windows file shares and Samba servers with authentication support
  • NFS Protocol Support: Connect to Unix/Linux NFS exports for direct data access
  • Federated Queries: Query Parquet, CSV, JSON, and other file formats directly from network storage with full SQL support
  • Acceleration Support: Accelerate data from SMB/NFS sources using DuckDB, Spice Cayenne, or other accelerators

Example spicepod.yaml configuration:

datasets:
  # SMB share
  - from: smb://fileserver/share/data.parquet
    name: smb_data
    params:
      smb_username: ${secrets:SMB_USER}
      smb_password: ${secrets:SMB_PASS}

  # NFS export
  - from: nfs://nfsserver/export/data.parquet
    name: nfs_data
For more details, refer to the Data Connectors Documentation.

Prepared Statementsโ€‹

Improved Query Performance and Security: Spice now supports prepared statements, enabling parameterized queries that improve both performance through query plan caching and security by preventing SQL injection attacks.

Key Features:

  • Query Plan Caching: Prepared statements cache query plans, reducing planning overhead for repeated queries
  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Arrow Flight SQL Support: Full prepared statement support via Arrow Flight SQL protocol

SDK Support:

| SDK | Support | Min Version | Method |
| --- | --- | --- | --- |
| gospice (Go) | ✅ Full | v8.0.0+ | SqlWithParams() with typed constructors (Int32Param, StringParam, TimestampParam, etc.) |
| spice-rs (Rust) | ✅ Full | v3.0.0+ | query_with_params() with RecordBatch parameters |
| spice-dotnet (.NET) | ❌ Not yet | - | Coming soon |
| spice-java (Java) | ✅ Full | v0.5.0+ | queryWithParams() with typed Param constructors (Param.int64(), Param.string(), etc.) |
| spice.js (JavaScript) | ❌ Not yet | - | Coming soon |
| spicepy (Python) | ❌ Not yet | - | Coming soon |

Example (Go):

import "github.com/spiceai/gospice/v8"

client, _ := spice.NewClient()
defer client.Close()

// Parameterized query with typed parameters
results, _ := client.SqlWithParams(ctx,
"SELECT * FROM products WHERE price > $1 AND category = $2",
spice.Float64Param(10.0),
spice.StringParam("electronics"),
)

Example (Java):

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader reader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        10.0, "electronics");

    // With explicit typed parameters (reassigning avoids redeclaring the variable)
    reader = client.queryWithParams(
        "SELECT * FROM products WHERE price > $1 AND category = $2",
        Param.float64(10.0),
        Param.string("electronics"));
}

For more details, refer to the Parameterized Queries Documentation.

Spice Cayenne Accelerator Enhancementsโ€‹

The Spice Cayenne data accelerator has been improved with several key enhancements:

  • KeyBased Deletion Vectors: Improved deletion vector support using key-based lookups for more efficient data management and faster delete operations. KeyBased deletion vectors are more memory-efficient than positional vectors for sparse deletions.
  • S3 Express One Zone Support: Store Cayenne data files in S3 Express One Zone for single-digit millisecond latency, ideal for latency-sensitive query workloads that require persistence.

Example spicepod.yaml configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: fast_data
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        # Use S3 Express One Zone for data files
        cayenne_s3express_bucket: my-express-bucket--usw2-az1--x-s3

For more details, refer to the Cayenne Documentation.

Google LLM Supportโ€‹

Expanded AI Provider Support: Spice now supports Google embedding and chat models via the Google AI provider, expanding the available LLM options for AI inference workloads alongside existing providers like OpenAI, Anthropic, and AWS Bedrock.

Key Features:

  • Google Chat Models: Access Google's Gemini models for chat completions
  • Google Embeddings: Generate embeddings using Google's text embedding models
  • Unified API: Use the same OpenAI-compatible API endpoints for all LLM providers

Example spicepod.yaml configuration:

models:
  - from: google:gemini-2.0-flash
    name: gemini
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

embeddings:
  - from: google:text-embedding-004
    name: google_embeddings
    params:
      google_api_key: ${secrets:GOOGLE_API_KEY}

For more details, refer to the Google LLM Documentation (see docs PR #1286).

Spice Java SDK v0.5.0โ€‹

Parameterized Query Support for Java: The Spice Java SDK v0.5.0 introduces parameterized queries using ADBC (Arrow Database Connectivity), providing a safer and more efficient way to execute queries with dynamic parameters.

Key Features:

  • SQL Injection Prevention: Parameters are safely bound, preventing SQL injection vulnerabilities
  • Automatic Type Inference: Java types are automatically mapped to Arrow types (e.g., double → Float64, String → Utf8)
  • Explicit Type Control: Use the new Param class with typed factory methods (Param.int64(), Param.string(), Param.decimal128(), etc.) for precise control over Arrow types
  • Updated Dependencies: Apache Arrow Flight SQL upgraded to 18.3.0, plus new ADBC driver support

Example:

import ai.spice.SpiceClient;
import ai.spice.Param;
import org.apache.arrow.adbc.core.ArrowReader;
import java.math.BigDecimal;

try (SpiceClient client = new SpiceClient()) {
    // With automatic type inference
    ArrowReader reader = client.queryWithParams(
        "SELECT * FROM taxi_trips WHERE trip_distance > $1 LIMIT 10",
        5.0);

    // With explicit typed parameters for precise control (reassigning avoids redeclaring the variable)
    reader = client.queryWithParams(
        "SELECT * FROM orders WHERE order_id = $1 AND amount >= $2",
        Param.int64(12345),
        Param.decimal128(new BigDecimal("99.99"), 10, 2));
}

Maven:

<dependency>
    <groupId>ai.spice</groupId>
    <artifactId>spiceai</artifactId>
    <version>0.5.0</version>
</dependency>

For more details, refer to the Spice Java SDK Repository.

OpenTelemetry Improvementsโ€‹

Unified Telemetry Endpoint: OTel metrics ingestion has been consolidated to the Flight port (50051), simplifying deployment by removing the separate OTel port (50052). The push-based metrics exporter continues to support integration with OpenTelemetry collectors.

Note: This is a breaking change. Update your configurations if you were using the dedicated OTel port 50052. Internal cluster communication now uses port 50052 exclusively.

Developer Experience Improvementsโ€‹

  • Turso v0.3.2 Upgrade: Upgraded Turso accelerator for improved performance and reliability
  • Rust 1.91 Upgrade: Updated to Rust 1.91 for latest language features and performance improvements
  • Spice Cloud CLI: Added spice cloud CLI commands for cloud deployment management
  • Improved Spicepod Schema: Enhanced JSON schema generation for better IDE support and validation
  • Acceleration Snapshots: Added configurable snapshots_create_interval for periodic acceleration snapshots independent of refresh cycles
  • Tiered Caching with Localpod: The Localpod connector now supports caching refresh mode, enabling multi-layer acceleration where a persistent cache feeds a fast in-memory cache
  • GitHub Data Connector: Added workflows and workflow runs support for GitHub repositories
  • NDJSON/LDJSON Support: Added support for Newline Delimited JSON and Line Delimited JSON file formats

Additional Improvements & Bug Fixesโ€‹

  • Reliability: Fixed DynamoDB IAM role authentication with the new dynamodb_auth: iam_role parameter (see the sketch after this list)
  • Reliability: Fixed cluster executors to use scheduler's temp_directory parameter for shuffle files
  • Reliability: Initialize secrets before object stores in cluster executor mode
  • Reliability: Added page-level retry with backoff for transient GitHub GraphQL errors
  • Performance: Improved statistics for rewritten DistributeFileScanOptimizer plans
  • Developer Experience: Added max_message_size configuration for Flight service
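
A minimal sketch of the dynamodb_auth parameter noted above (the table and dataset names are hypothetical):

datasets:
  - from: dynamodb:my_table
    name: my_table
    params:
      dynamodb_auth: iam_role # authenticate using the attached IAM role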

Contributorsโ€‹

Breaking Changesโ€‹

OTel Ingestion Port Changeโ€‹

OTel ingestion has been moved to the Flight port (50051), removing the separate OTel port 50052. Port 50052 is now used exclusively for internal cluster communication. Update your configurations if you were using the dedicated OTel port.

Distributed Query Cluster Mode Requires mTLSโ€‹

Distributed query cluster mode now requires mTLS for secure communication between cluster nodes. This is a security enhancement to prevent unauthorized nodes from joining the cluster and accessing secrets.

Migration Steps:

  1. Generate certificates using spice cluster tls init and spice cluster tls add
  2. Update scheduler and executor startup commands with --node-mtls-* arguments
  3. For development/testing, use --allow-insecure-connections to opt out of mTLS

Renamed CLI Arguments:

| Old Name | New Name |
| --- | --- |
| --cluster-mode | --role |
| --cluster-ca-certificate-file | --node-mtls-ca-certificate-file |
| --cluster-certificate-file | --node-mtls-certificate-file |
| --cluster-key-file | --node-mtls-key-file |
| --cluster-address | --node-bind-address |
| --cluster-advertise-address | --node-advertise-address |
| --cluster-scheduler-url | --scheduler-address |

Removed CLI Arguments:

  • --cluster-api-key: Replaced by mTLS authentication

Cookbook Updatesโ€‹

No major cookbook updates.

The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To try v1.11.0-rc.1, use one of the following methods:

CLI:

spice upgrade --version 1.11.0-rc.1

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.11.0-rc.1 image:

docker pull spiceai/spiceai:1.11.0-rc.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai --version 1.11.0-rc.1

AWS Marketplace:

🎉 Spice is available in the AWS Marketplace!

What's Changedโ€‹

Changelogโ€‹

Spice v1.10.4 (Jan 5, 2026)

· 2 min read
Phillip LeBlanc
Co-Founder and CTO of Spice AI

Announcing the release of Spice v1.10.4! 🛠️

v1.10.4 is a patch release with fixes for Kafka/Debezium batch commits, ABFSS URL support for Azure Data Lake Storage Gen2, and improved column projection handling for location metadata columns.

What's New in v1.10.4โ€‹

Additional Improvements & Bug Fixesโ€‹

  • Reliability: Fixed Kafka and Debezium batch commit handling to properly commit offsets across all partitions. Previously, only the last message's offset was committed, which could cause message loss when batches contained messages from multiple partitions.
  • Reliability: Added support for abfss:// URL prefix for Azure Data Lake Storage Gen2, in addition to the existing abfs:// prefix. The abfss scheme indicates secure (TLS) connections to ADLS Gen2.
  • Reliability: Fixed column projection order mismatch when querying datasets with location metadata columns (e.g., SELECT location, day, size FROM dataset). Queries that specified columns in a different order than the schema would fail with "column types must match schema types" errors.
  • Developer Experience: Added detailed diagnostic logging for union projection pushdown optimization failures in cluster mode. When projection pushdown cannot be applied, debug-level logs now provide additional context to help identify the root cause.

Contributorsโ€‹

Breaking Changesโ€‹

No breaking changes.

Cookbook Updatesโ€‹

No major cookbook updates.

The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To upgrade to v1.10.4, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.4 image:

docker pull spiceai/spiceai:1.10.4

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changedโ€‹

Changelogโ€‹

Spice v1.10.3 (Dec 29, 2025)

· 2 min read
Phillip LeBlanc
Co-Founder and CTO of Spice AI

Announcing the release of Spice v1.10.3! 🚀

v1.10.3 is a patch release with improved startup reliability, fixes for Azure BlobFS versioned containers, S3 custom endpoint query resolution, and a fix for the OpenAI Responses API.

What's New in v1.10.3โ€‹

Additional Improvements & Bug Fixesโ€‹

  • Reliability: Telemetry exporter initialization now runs asynchronously, preventing blocked startup in environments with network restrictions (e.g., Kubernetes with restrictive network policies).
  • Reliability: Fixed an issue where queries on Azure Blob containers with versioning enabled would fail with "Azure does not support suffix range requests" error in distributed query mode.
  • Reliability: Fixed S3 location-based queries against custom S3 endpoints (e.g., MinIO, LocalStack). Queries with location predicates on datasets using s3_endpoint and s3_region parameters now correctly route to the configured endpoint instead of defaulting to AWS S3 (see the sketch after this list).
  • Reliability: Fixed "project index out of bounds" errors in the query optimizer when union children have mismatched schemas. The optimizer now validates schema compatibility before applying projection pushdown.
  • Reliability: Fixed an issue where the OpenAI Responses API (/v1/responses) was not working correctly.
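
A minimal sketch of the s3_endpoint and s3_region parameters referenced above, assuming a local MinIO endpoint (endpoint and bucket values are hypothetical):

datasets:
  - from: s3://my-bucket/data/
    name: minio_data
    params:
      s3_endpoint: http://localhost:9000 # custom S3-compatible endpoint (e.g., MinIO)
      s3_region: us-east-1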

Contributorsโ€‹

Breaking Changesโ€‹

No breaking changes.

Cookbook Updatesโ€‹

No major cookbook updates.

The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To upgrade to v1.10.3, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.3 image:

docker pull spiceai/spiceai:1.10.3

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changedโ€‹

Changelogโ€‹

Spice v1.10.2 (Dec 22, 2025)

· 5 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.10.2! 🔥

v1.10.2 introduces Tiered Caching Acceleration with Localpod for multi-layer acceleration architectures, Periodic Acceleration Snapshots with configurable intervals, DynamoDB JSON Nesting for column consolidation, and Kafka/Debezium Batching for faster data ingestion. This release also includes fixes for SQLite accelerator decimal/date handling and real-time status reporting for the /v1/datasets and /v1/models API endpoints.

What's New in v1.10.2โ€‹

Tiered Caching with Localpodโ€‹

Multi-Layer Acceleration Architecture: The Localpod connector now supports caching refresh mode, enabling tiered acceleration where a persistent cache (e.g., file-mode DuckDB) feeds a fast in-memory cache (e.g., Arrow, memory-mode DuckDB).

Key Features:

  • Automatic Cache Propagation: New cache entries automatically propagate from parent to child accelerators
  • Warm Startup: Child accelerators initialize from existing parent data on startup, eliminating cold-start latency
  • Flexible Tiering: Combine any accelerator engines (DuckDB, SQLite, Cayenne) across tiers

Example spicepod.yaml configuration:

datasets:
  # Parent: persistent file-mode cache
  - from: https://api.example.com
    name: api_cache
    acceleration:
      enabled: true
      refresh_mode: caching
      engine: duckdb
      mode: file

  # Child: fast in-memory cache fed by parent
  - from: localpod:api_cache
    name: api_cache_memory
    acceleration:
      enabled: true
      refresh_mode: caching
      engine: arrow
      mode: memory

For more details, refer to the Localpod Data Connector Documentation.

Periodic Acceleration Snapshotsโ€‹

Configurable Snapshot Intervals: A new snapshots_create_interval parameter enables periodic snapshot creation for accelerated datasets across all refresh modes. This provides better control over snapshot frequency and ensures consistent recovery points for accelerated data.

Example spicepod.yaml configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_data
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      refresh_mode: caching
      snapshots: enabled
      params:
        snapshots_create_interval: 60s # Write a snapshot every 60 seconds

For more details, refer to the Data Acceleration Documentation.

DynamoDB JSON Nestingโ€‹

Consolidate Columns into JSON: The DynamoDB Data Connector now supports consolidating columns into a single JSON column using the json_object: "*" metadata option. This is useful when only a few columns are needed as discrete fields while the rest can be accessed as nested JSON.

Example spicepod.yaml configuration:

datasets:
  - from: dynamodb:my_table
    name: my_table
    columns:
      - name: PK
      - name: SK
      - name: data_json
        metadata:
          json_object: '*' # Captures all other columns as JSON

Example Output: Given a DynamoDB table with columns PK, SK, name, email, and status, the resulting table schema consolidates all non-specified columns into the data_json column:

| PK | SK | data_json |
| --- | --- | --- |
| pk_1 | sort_1 | {"name": "Alice", "email": "[email protected]", "status": "active"} |
| pk_2 | sort_2 | {"name": "Bob", "email": "[email protected]", "status": "inactive"} |

For more details, refer to the DynamoDB JSON Nesting Documentation.

Kafka/Debezium Batchingโ€‹

Faster Data Ingestion: Configure message batching for Kafka and Debezium connectors to improve data ingestion throughput. Batching reduces processing overhead by grouping multiple messages together before insertion.

Key Features:

  • Configurable Batch Size: Control the maximum number of records per batch (default: 10,000)
  • Configurable Batch Duration: Set the maximum wait time before flushing a partial batch (default: 1s)

Example spicepod.yaml configuration:

datasets:
  - from: debezium:kafka-server.public.my_table
    name: my_table
    params:
      batch_max_size: 10000 # Max records per batch (default: 10000)
      batch_max_duration: 1s # Max wait time per batch (default: 1s)

For more details, refer to the Kafka Data Connector Documentation and Debezium Data Connector Documentation.

Additional Improvements & Bug Fixesโ€‹

  • Reliability: Fixed SQLite accelerator decimal and date type handling for improved data type accuracy.
  • Reliability: Fixed real-time status reporting for /v1/datasets and /v1/models API endpoints.
  • Reliability: Fixed Kafka warning when security.protocol is set to PLAINTEXT.

Contributorsโ€‹

Breaking Changesโ€‹

No breaking changes.

Cookbook Updatesโ€‹

New Cayenne Data Accelerator Recipe: New recipe demonstrating how to accelerate a local copy of the taxi trips dataset using Cayenne as the data accelerator engine. See Cayenne Data Accelerator Recipe for details.

New Dataset Partitioning Recipe: New recipe demonstrating how to partition accelerated datasets to improve query performance. See Dataset Partitioning for details.

The Spice Cookbook includes 84 recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To upgrade to v1.10.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.2 image:

docker pull spiceai/spiceai:1.10.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changedโ€‹

Changelogโ€‹

Spice v1.10.1 (Dec 15, 2025)

· 5 min read
Jack Eadie
Token Plumber at Spice AI

Announcing the release of Spice v1.10.1! 🚀

v1.10.1 is a patch release with Cayenne accelerator improvements, including configurable compression strategies and improved partition ID handling, an isolated refresh runtime for better query API responsiveness, and security hardening. In addition, the Go SDK gospice v8 has been released.

What's New in v1.10.1โ€‹

Cayenne Accelerator Improvementsโ€‹

Several improvements and bug fixes for the Cayenne data accelerator:

  • Compression Strategies: The new cayenne_compression_strategy parameter enables choosing between zstd for compact storage or btrblocks for encoding-efficient compression.
  • Improved Vortex Defaults: Aligned Cayenne to Vortex footer configuration for better compatibility.
  • Partition ID Handling: Improved partition ID generation to avoid potential locking race conditions.

Example spicepod.yaml configuration:

datasets:
  - from: s3://my-bucket/data.parquet
    name: my_dataset
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_compression_strategy: zstd # or btrblocks (default)

For more details, refer to the Cayenne Data Accelerator Documentation.

Isolated Refresh Runtimeโ€‹

Refresh tasks now run on a separate Tokio runtime isolated from the main query API. This prevents long-running or resource-intensive refresh operations from impacting query latency and ensures the /health endpoint remains responsive during heavy refresh workloads.

Security Hardeningโ€‹

Multiple security improvements have been implemented:

  • Recursion Depth Limits: Added limits to DynamoDB and S3 Vectors integrations to prevent stack overflow from deeply nested structures, mitigating potential DoS attacks.
  • Spicepod Summary API: The GET /v1/spicepods endpoint now returns summarized information instead of full spicepod.yaml representations, preventing potential sensitive information leakage.

Additional Improvements & Bug Fixesโ€‹

  • Performance: Fixed double hashing of user supplied cache keys, improving cache lookup efficiency.
  • Reliability: Fixed idle DynamoDB Stream handling for more stable CDC operations.
  • Reliability: Added warnings when multiple partitions are defined for the same table.
  • Performance: Eagerly drop cached records for results larger than max cache size.

Spice Go SDK v8โ€‹

The Spice Go SDK has been upgraded to v8 with a cleaner API, parameterized queries, and health check methods: gospice v8.0.0.

Key Features:

  • Cleaner API: New Sql() and SqlWithParams() methods with more intuitive naming.
  • Parameterized Queries: Safe, SQL-injection-resistant queries with automatic Go-to-Arrow type inference.
  • Typed Parameters: Explicit type control with constructors like Decimal128Param, TimestampParam, and more.
  • Health Check Methods: New IsSpiceHealthy() and IsSpiceReady() methods for instance monitoring.
  • Upgraded Dependencies: Apache Arrow v18 and ADBC Go driver v1.3.0.

Example usage with a local Spice runtime:

import "github.com/spiceai/gospice/v8"

// Initialize client for local runtime
spice := gospice.NewSpiceClient()
defer spice.Close()

if err := spice.Init(
gospice.WithFlightAddress("grpc://localhost:50051"),
); err != nil {
panic(err)
}

// Parameterized query (safe from SQL injection)
reader, err := spice.SqlWithParams(
ctx,
"SELECT * FROM users WHERE id = $1 AND created_at > $2",
userId,
startTime,
)
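
The new IsSpiceHealthy() and IsSpiceReady() methods can be used to gate queries; a minimal sketch (the boolean-plus-error return signatures are an assumption):

// Verify the Spice runtime before issuing queries (return types assumed)
if healthy, err := spice.IsSpiceHealthy(); err != nil || !healthy {
    panic("Spice runtime is not healthy")
}
if ready, err := spice.IsSpiceReady(); err != nil || !ready {
    panic("Spice runtime is not ready")
}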

Upgrade:

go get github.com/spiceai/gospice/[email protected]

For more details, refer to the Go SDK Documentation.

Contributorsโ€‹

Breaking Changesโ€‹

  • GET /v1/spicepods no longer returns the full spicepod.yaml JSON representation. A summary is returned instead. See #8404.

Cookbook Updatesโ€‹

No major cookbook updates.

The Spice Cookbook includes 82+ recipes to help you get started with Spice quickly and easily.

Upgradingโ€‹

To upgrade to v1.10.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.10.1 image:

docker pull spiceai/spiceai:1.10.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changedโ€‹

Changelogโ€‹

  • Return summarized spicepods from /v1/spicepods by @phillipleblanc in #8404
  • DynamoDB tests and fixes by @lukekim in #8491
  • Use an isolated Tokio runtime for refresh tasks that is separate from the main query API by @phillipleblanc in #8504
  • fix: Avoid double hashing cache key by @peasee in #8511
  • fix: Remove unused Cayenne parameters by @peasee in #8500
  • feat: Support vortex zstd compressor by @peasee in #8515
  • Fix for idle DynamoDB Stream by @krinart in #8506
  • fix: Improve Cayenne errors, ID selection for table/partition creation by @peasee in #8523
  • Update dependencies by @phillipleblanc in #8513
  • Upgrade to gospice v8 by @lukekim in #8524
  • fix: Add recursion depth limits to prevent DoS via deeply nested data (DynamoDB + S3 Vectors) by @phillipleblanc in #8544
  • fix: Add warning when multiple partitions are defined for the same table by @peasee in #8540
  • fix: Eagerly drop cached records for results larger than max by @peasee in #8516
  • DDB Streams Integration Test + Memory Acceleration + Improved Warning by @krinart in #8520
  • fix(cluster): initialize secrets before object stores in executor by @sgrebnov in #8532
  • Show user-friendly error on empty DDB table by @krinart in #8586
  • Move 'test_projection_pushdown' to runtime-datafusion by @Jeadie in #8490
  • Fix stats for rewritten DistributeFileScanOptimizer plans by @mach-kernel in #8581