
5 posts tagged with "iceberg"

Iceberg Catalog Connector related topics and usage


Spice v1.8.0 (Oct 6, 2025)

· 20 min read
Phillip LeBlanc
Co-Founder and CTO of Spice AI

Announcing the release of Spice v1.8.0! 🧊

Spice v1.8.0 delivers major advances in data writes, scalable vector search, and, now in preview, managed acceleration snapshots for fast cold starts. This release introduces write support for Iceberg tables using standard SQL INSERT INTO, partitioned S3 Vector indexes for petabyte-scale vector search, and a preview of the AI SQL function for direct LLM integration in SQL. It also includes reliability improvements and the v3.0.3 release of the Spice.js Node.js SDK.

What's New in v1.8.0​

Iceberg Table Write Support (Preview)​

Append Data to Iceberg Tables with SQL INSERT INTO: Spice now supports writing to Iceberg tables and catalogs using standard SQL INSERT INTO statements. This enables data ingestion, transformation, and pipeline use cases with no Spark or external writer required.

  • Append-only: Initial version targets appends; no overwrite or delete.
  • Schema validation: Inserted data must match the target table schema.
  • Secure by default: Writes are only enabled for datasets or catalogs explicitly marked with access: read_write.

Example Spicepod configuration:

catalogs:
  - from: iceberg:https://glue.ap-northeast-3.amazonaws.com/iceberg/v1/catalogs/111111/namespaces
    name: ice
    access: read_write

datasets:
  - from: iceberg:https://iceberg-catalog-host.com/v1/namespaces/my_namespace/tables/my_table
    name: iceberg_table
    access: read_write

Example SQL usage:

-- Insert from another table
INSERT INTO iceberg_table
SELECT * FROM existing_table;

-- Insert with values
INSERT INTO iceberg_table (id, name, amount)
VALUES (1, 'John', 100.0), (2, 'Jane', 200.0);

-- Insert into catalog table
INSERT INTO ice.sales.transactions
VALUES (1001, '2025-01-15', 299.99, 'completed');

Note: Only Iceberg datasets and catalogs with access: read_write support writes. Internal Spice tables and other connectors remain read-only.

Learn more in the Iceberg Data Connector documentation.

Acceleration Snapshots for Fast Cold Starts (Preview)​

Bootstrap Managed Accelerations from Object Storage: Spice now supports managed acceleration snapshots in preview, enabling datasets accelerated with file-based engines (DuckDB or SQLite) to bootstrap from a snapshot stored in object storage (such as S3) if the local acceleration file does not exist on startup. This dramatically reduces cold start times and enables ephemeral storage for accelerations with persistent recovery.

Key features:

  • Rapid readiness: Datasets can become ready in seconds by downloading a pre-built snapshot, skipping lengthy initial acceleration.
  • Hive-style partitioning: Snapshots are organized by month, day, and dataset for easy retention and management.
  • Flexible bootstrapping: Configurable fallback and retry behavior if a snapshot is missing or corrupted.

Example Spicepod configuration:

snapshots:
  enabled: true
  location: s3://some_bucket/some_folder/ # Folder for storing snapshots
  bootstrap_on_failure_behavior: warn # Options: warn, retry, fallback
  params:
    s3_auth: iam_role # All S3 dataset params accepted here

datasets:
  - from: s3://some_bucket/some_table/
    name: some_table
    params:
      file_format: parquet
      s3_auth: iam_role
    acceleration:
      enabled: true
      snapshots: enabled # Options: enabled, disabled, bootstrap_only, create_only
      engine: duckdb
      mode: file
      params:
        duckdb_file: /nvme/some_table.db

How it works:

  • On startup, if the acceleration file does not exist, Spice checks the snapshot location for the latest snapshot and downloads it.
  • Snapshots are stored as: s3://some_bucket/some_folder/month=2025-09/day=2025-09-30/dataset=some_table/some_table_<timestamp>.db
  • If no snapshot is found, a new acceleration file is created as usual.
  • Snapshots are written after each refresh (unless configured otherwise).

Supported snapshot modes:

  • enabled: Download and write snapshots.
  • bootstrap_only: Only download on startup, do not write new snapshots.
  • create_only: Only write snapshots, do not download on startup.
  • disabled: No snapshotting.

Note: This feature is only supported for file-based accelerations (DuckDB or SQLite) with dedicated files.

Why use acceleration snapshots?

  • Faster cold starts: Skip waiting for full acceleration on startup.
  • Ephemeral storage: Use fast local disks (e.g., NVMe) for acceleration, with persistent recovery from object storage.
  • Disaster recovery: Recover from federated source outages by bootstrapping from the latest snapshot.

Partitioned S3 Vector Indexes​

Efficient, Scalable Vector Search with Partitioning: Spice now supports partitioning Amazon S3 Vector indexes, with scatter-gather queries across partitions, configured via a partition_by expression in the dataset's vector engine configuration. Partitioned indexes enable faster ingestion, lower query latency, and scaling to billions of vectors.

Example Spicepod configuration:

datasets:
  - name: reviews
    vectors:
      enabled: true
      engine: s3_vectors
      params:
        s3_vectors_bucket: my-bucket
        s3_vectors_index: base-embeddings
        partition_by:
          - 'bucket(50, PULocationID)'
    columns:
      - name: body
        embeddings:
          from: bedrock_titan
      - name: title
        embeddings:
          from: bedrock_titan

See the Amazon S3 Vectors documentation for details.
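
Once partitioned, the index is queried like any other vector-enabled dataset. A minimal sketch against the reviews dataset above, using the vector_search table function shown later in these notes (the search text and limit are illustrative):

-- Scatter-gather search across the partitioned S3 Vector index
SELECT title, body, score
FROM vector_search(reviews, 'quiet room away from the elevator', 10)
ORDER BY score DESC
LIMIT 10;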

AI SQL function for LLM Integration (Preview)​

LLMs Directly In SQL: A new asynchronous ai SQL function enables direct calls to LLMs from SQL queries for text generation, translation, classification, and more. This feature is released in preview and supports both default and model-specific invocation.

Example Spicepod model configuration:

models:
  - name: gpt-4o
    from: openai:gpt-4o
    params:
      openai_api_key: ${secrets:openai_key}

Example SQL usage:

-- basic usage with default model
SELECT ai('hi, this prompt is directly from SQL.');
-- basic usage with specified model
SELECT ai('hi, this prompt is directly from SQL.', 'gpt-4o');
-- Using row data as input to the prompt
SELECT ai(concat_ws(' ', 'Categorize the zone', Zone, 'in a single word. Only return the word.')) AS category
FROM taxi_zones
LIMIT 10;

Learn more in the SQL Reference AI documentation.

Remote Endpoint Support for Spice CLI​

Run CLI Commands Remotely: The Spice CLI now supports connecting to remote Spice instances, enabling you to run spice sql, spice search, and spice chat from your local machine against a remote spiced daemon or Spice Cloud. Previously, these commands had to run on the same machine as the runtime. New flags enable remote execution:

  • --cloud: Connect to a Spice Cloud instance (requires --api-key).
  • --endpoint <endpoint>: Connect to a remote Spice instance via HTTP or Arrow Flight SQL (gRPC). Supports http://, https://, grpc://, or grpc+tls:// schemes.

Examples:

# Run SQL queries against a remote Spice instance
spice sql --endpoint http://remote-host:8090

# Use Spice Cloud for chat or search
spice chat --cloud --api-key <your-api-key>
spice search --cloud --api-key <your-api-key>

Supported CLI Commands:

  • spice sql --cloud / spice sql --endpoint <endpoint>
  • spice search --cloud / spice search --endpoint <endpoint>
  • spice chat --cloud / spice chat --endpoint <endpoint>

Additional Flags:

  • --headers: Pass custom HTTP headers to the remote endpoint.
  • --tls-root-certificate-file: Specify a root certificate for TLS verification.
  • --user-agent: Set a custom user agent for requests.
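
For example, the endpoint and TLS flags can be combined when connecting to a secured Flight SQL endpoint. A minimal sketch; the hostname, port, and header value are illustrative, and the exact --headers syntax may differ:

# Query a remote instance over grpc+tls with a custom root CA and an extra header
spice sql --endpoint grpc+tls://remote-host:50051 \
  --tls-root-certificate-file ./ca.pem \
  --headers "x-request-id: example-123"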

For more details, see the Spice CLI Command Reference.

Spice.js v3.0.3 SDK​

Spice.js v3.0.3 Released: The official Spice.ai Node.js/JavaScript SDK has been updated to v3.0.3, bringing cross-platform support, new APIs, and improved reliability for both Node.js and browser environments.

  • Modern Query Methods: Use sql(), sqlJson(), and nsql() for flexible querying, streaming, and natural language to SQL.
  • Browser Support: SDK now works in browsers and web applications, automatically selecting the optimal transport (gRPC or HTTP).
  • Health Checks & Dataset Refresh: Easily monitor Spice runtime health and trigger dataset refreshes on demand.
  • Automatic HTTP Fallback: If gRPC/Flight is unavailable, the SDK falls back to HTTP automatically.
  • Migration Guidance: v3 requires Node.js 20+, uses camelCase parameters, and introduces a new package structure.

Example usage:

import { SpiceClient } from '@spiceai/spice'

const client = new SpiceClient(apiKey)
const table = await client.sql('SELECT * FROM my_table LIMIT 10')
console.table(table.toArray())
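
For JSON-first workflows, sqlJson() from the method list above can be used in place of sql(). A minimal sketch, assuming it resolves to plain JSON rows rather than an Arrow table:

// Assumed usage of sqlJson(); see the SDK documentation for the exact return type
const rows = await client.sqlJson('SELECT 1 AS answer')
console.log(rows)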

See Spice.js SDK documentation for full details, migration tips, and advanced usage.

Additional Improvements​

  • Reliability: Improved logging, error handling, and network readiness checks across connectors (Iceberg, Databricks, etc.).
  • Vector search durability and scale: Refined logging, stricter default limits, safeguards against index-only scans and duplicate results, and always-accessible metadata for robust queryability at scale.
  • Cache behavior: Tightened cache logic for modification queries.
  • Full-Text Search: FTS metadata columns now usable in projections; max search results increased to 1000.
  • RRF Hybrid Search: Reciprocal Rank Fusion (RRF) UDTF enhancements for advanced hybrid search scenarios.

Contributors​

Breaking Changes​

This release introduces two breaking changes related to search observability and tooling.

First, the document_similarity tool has been renamed to search, with an equivalent change to the tracing of these tool calls:

## Old: v1.7.1
>> spice trace tool_use::document_similarity
>> curl -XPOST http://localhost:8090/v1/tools/document_similarity \
  -d '{
    "datasets": ["my_tbl"],
    "text": "Welcome to another Spice release"
  }'

## New: v1.8.0
>> spice trace tool_use::search
>> curl -XPOST http://localhost:8090/v1/tools/search \
  -d '{
    "datasets": ["my_tbl"],
    "text": "Welcome to another Spice release"
  }'

Secondly, the vector_search task in runtime.task_history has been renamed to search.
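
Queries or dashboards built on runtime.task_history should be updated to filter on the new task name. A minimal sketch, assuming a task column as implied above:

-- Previously: WHERE task = 'vector_search'
SELECT *
FROM runtime.task_history
WHERE task = 'search'
LIMIT 10;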

Cookbook Updates​

The Spice Cookbook now includes 80 recipes to help you get started with Spice quickly and easily.


Upgrading​

To upgrade to v1.8.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.8.0 image:

docker pull spiceai/spiceai:1.8.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is now available in the AWS Marketplace!

What's Changed​

Dependencies​

  • iceberg-rust: Upgraded to v0.7.0-rc.1
  • mimalloc: Upgraded from 0.1.47 to 0.1.48
  • azure_core: Upgraded from 0.27.0 to 0.28.0
  • Jimver/cuda-toolkit: Upgraded from 0.2.27 to 0.2.28

Changelog​

Spice v1.5.2 (Aug 11, 2025)

· 7 min read
Kevin Zimmerman
Principal Software Engineer at Spice AI

Announcing the release of Spice v1.5.2! 🛠️

Spice v1.5.2 introduces a new Amazon Bedrock Models Provider for Converse API (Nova)-compatible models, AWS Redshift support using the PostgreSQL data connector, and Hadoop Catalog support for Iceberg tables, along with several bug fixes and improvements.

What's New in v1.5.2​

Amazon Bedrock Models Provider: Adds a new Amazon Bedrock LLM Provider. Models compatible with the Converse API (Nova) are supported.

Amazon Bedrock provides access to a range of foundation models for generative AI. Spice supports using Bedrock-hosted models by specifying the bedrock prefix in the from field and configuring the required parameters.

Supported Model IDs:

  • amazon.nova-lite-v1:0
  • amazon.nova-micro-v1:0
  • amazon.nova-premier-v1:0
  • amazon.nova-pro-v1:0

Refer to the Amazon Bedrock documentation for details on available models and cross-region inference profiles.

Example Spicepod.yaml:

models:
  - from: bedrock:us.amazon.nova-lite-v1:0
    name: novash
    params:
      aws_region: us-east-1
      aws_access_key_id: ${ secrets:AWS_ACCESS_KEY_ID }
      aws_secret_access_key: ${ secrets:AWS_SECRET_ACCESS_KEY }
      bedrock_guardrail_identifier: arn:aws:bedrock:abcdefg012927:0123456789876:guardrail/hello
      bedrock_guardrail_version: DRAFT
      bedrock_trace: enabled
      bedrock_temperature: 42
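
Once defined, the model can be invoked through Spice's OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the default HTTP port (8090) used elsewhere in these notes:

curl -XPOST http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "novash",
    "messages": [{ "role": "user", "content": "Hello from Spice!" }]
  }'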

For more information, see the Amazon Bedrock Documentation.

AWS Redshift Support for Postgres Data Connector: Spice now supports connecting to Amazon Redshift using the PostgreSQL data connector. Redshift is a columnar OLAP database compatible with PostgreSQL, allowing you to use the same connector and configuration parameters.

To connect to Redshift, use the format postgres:schema.table in your Spicepod and set the connection parameters to match your Redshift cluster settings.

Example Spicepod.yaml:

# Example datasets for Redshift TPCH tables
datasets:
  - from: postgres:public.customer
    name: customer
    params:
      pg_host: ${secrets:PG_HOST}
      pg_port: 5439
      pg_sslmode: prefer
      pg_db: dev
      pg_user: ${secrets:PG_USER}
      pg_pass: ${secrets:PG_PASS}
  - from: postgres:public.lineitem
    name: lineitem
    params:
      pg_host: ${secrets:PG_HOST}
      pg_port: 5439
      pg_sslmode: prefer
      pg_db: dev
      pg_user: ${secrets:PG_USER}
      pg_pass: ${secrets:PG_PASS}

Redshift types are mapped to PostgreSQL types. See the PostgreSQL connector documentation for details on supported types and configuration.
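
Once registered, the Redshift-backed datasets are queried like any other Spice dataset. A minimal sketch, assuming standard TPC-H column names:

-- Top customers by account balance from the Redshift-backed dataset
SELECT c_name, c_acctbal
FROM customer
ORDER BY c_acctbal DESC
LIMIT 5;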

Hadoop Catalog Support for Iceberg: The Iceberg Data and Catalog connectors now support connecting to Hadoop catalogs on filesystem (file://) or S3 object storage (s3://, s3a://). This enables connecting to Iceberg catalogs without a separate catalog provider service.

Example Spicepod.yaml:

catalogs:
  - from: iceberg:file:///tmp/hadoop_warehouse/
    name: local_hadoop
  - from: iceberg:s3://my-bucket/hadoop_warehouse/
    name: s3_hadoop

# Example datasets
datasets:
  - from: iceberg:file:///data/hadoop_warehouse/test/my_table_1
    name: local_hadoop
  - from: iceberg:s3://my-bucket/hadoop_warehouse/test/my_table_2
    name: s3_hadoop
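
Tables in a Hadoop catalog are then addressable as catalog.namespace.table. A minimal query sketch, assuming the namespace and table layout follows the warehouse paths above:

-- Query a table through the Hadoop-backed catalog
SELECT * FROM local_hadoop.test.my_table_1 LIMIT 10;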

For more details, see the Iceberg Data Connector documentation and the Iceberg Catalog Connector documentation.

Parquet Reader: Optional Parquet Page Index: Fixed an issue where the Parquet reader, using arrow-rs and DataFusion, errored on files missing page indexes, despite the Parquet spec allowing optional indexes. The Spice team contributed optional page index support to arrow-rs (PR #6) and configurable handling in DataFusion (PR #93). A new runtime parameter, parquet_page_index, makes Parquet Page Indexes configurable in Spice:

runtime:
  params:
    parquet_page_index: required # Options: required, skip, auto

  • required: (Default) Errors if page indexes are absent.
  • skip: Ignores page indexes, potentially reducing query performance.
  • auto: Uses page indexes if available; skips otherwise.

This improves compatibility and query flexibility for Parquet datasets.

Contributors​

Breaking Changes​

Amazon S3 Vectors Vector Engine: Amazon S3 Vectors is currently a preview AWS service. A recent update to the Amazon S3 Vectors service API introduced a breaking change that affects the integration when projecting (selecting) the embedding column. This results in the following error:

Json error: whilst decoding field 'data': expected [ got null
Received only partial JSON payload from QueryVectors

The issue is expected to be resolved in the next release of Spice. A current workaround is to limit queries to non-embedding columns.

i.e. instead of:

SELECT url, title, score, body_embedding
FROM vector_search(pulls, 'bugs in DuckDB', 4)
WHERE state = 'OPEN'
ORDER BY score DESC
LIMIT 4;

Remove the *_embedding column from the projection. E.g.

SELECT url, title, score
FROM vector_search(pulls, 'bugs in DuckDB', 4)
WHERE state = 'OPEN'
ORDER BY score DESC
LIMIT 4;

This issue and workaround also apply to SELECT * FROM vector_search(..). E.g.

SELECT *
FROM vector_search(pulls, 'bugs in DuckDB', 4)
WHERE state = 'OPEN'
ORDER BY score DESC
LIMIT 4;

Cookbook Updates​

The Spice Cookbook includes 75 recipes to help you get started with Spice quickly and easily.

Upgrading​

To upgrade to v1.5.2, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.5.2 image:

docker pull spiceai/spiceai:1.5.2

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

AWS Marketplace:

🎉 Spice is also now available in the AWS Marketplace!

What's Changed​

Dependencies​

No major dependency updates.

Changelog​

  • fixes for databricks OpenAI compatibility (#6629) by @Jeadie in #6629
  • Update spicepod.schema.json (#6632) by @app/github-actions in #6632
  • Remove 'stream_options' from databricks LLMs (#6637) by @Jeadie in #6637
  • Move retry and rate limiting logic for Amazon bedrock out of embeddings. (#6626) by @Jeadie in #6626
  • Disable Metal precomplation in integration_llms.yml (#6649) by @Jeadie in #6649
  • fix: Hadoop integration test (#6660) by @peasee in #6660
  • feat: Add Hadoop Catalog Data Component (#6658) by @peasee in #6658
  • update datafusion-table-providers to latest spiceai tag (#6661) by @mach-kernel in #6661
  • feat: Add Hadoop Catalog connectors for Iceberg (#6659) by @peasee in #6659
  • Make FullTextSearchExec robust to RecordBatch column ordering. (#6675) by @Jeadie in #6675
  • Make 'runtime-object-store' crate (#6674) by @Jeadie in #6674
  • fix: Support include for Iceberg (#6663) by @peasee in #6663
  • feat: Add Hadoop TPCH benchmark (#6678) by @peasee in #6678
  • feat: Add Hadoop metadata_path parameter (#6680) by @peasee in #6680
  • fix: Automatically infer Hadoop warehouse scheme (#6681) by @peasee in #6681
  • Amazon Bedrock, specifically Nova models (#6673) by @Jeadie in #6673
  • fix perplexity_auth_token parameters for web_search (#6685) by @Jeadie in #6685
  • Fix AWS Auth issue (#6699) by @Advayp in #6699
  • Limit Concurrent Requests for GitHub (#6672) by @Advayp in #6672
  • Add runtime parameter to enable more permissive parquet reading when page indexes are missing (#6716) by @phillipleblanc in #6716
  • Improve Flight REPL error messages (#6696) by @lukekim in #6696
  • Fixes from search tests (#6710) by @Jeadie in #6710

Spice v1.0.6 (Mar 17, 2025)

· 4 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.0.6 ⚑

Spice v1.0.6 improves stability for DuckDB acceleration, improves the Iceberg Data/Catalog connectors when using AWS Glue, and fixes an issue with the ready_state: on_registration federation fallback when using DuckDB. In addition, redundant data refreshes on startup are now avoided for accelerations with persistent data.

Highlights in v1.0.6​

  • Iceberg Data/Catalog Connector Improvements: Improves Iceberg data & catalog connector reliability, including bug fixes for AWS Glue API rate-limiting and compatibility, REST API pagination support, explicit AWS credential handling, and support for AWS STS role assumption.

  • Fixes On-Registration Fallback when using DuckDB: Previously, when using DuckDB as a data accelerator and the ready_state: on_registration configuration, queries made during the initial data refresh did not properly fallback to the federated source. This is now fixed.

  • DuckDB downgraded for Stability: DuckDB has been downgraded to v1.1.3 due to a regression in memory handling tracked by duckdb/duckdb issue #16640. Once resolved and validated, Spice will re-upgrade to v1.2.x.

  • Expanded Integration Tests: Additional integration tests covering federated accelerator behavior and graceful shutdown processes have been added.

  • Optimized Data Refresh for Persistent Accelerations: Changed behavior in v1.0.6. When using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. This ensures efficient startup behavior by avoiding unnecessary refreshes. This logic applies only to full refreshes when no refresh interval is specified.

To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always

Contributors​

  • @peasee
  • @phillipleblanc
  • @sgrebnov
  • @lukekim
  • @Sevenannn

Breaking Changes​

Starting from v1.0.6 when using persistent (file-mode) acceleration without a defined refresh interval, Spice performs a full refresh at startup only if no previously accelerated data is available. To maintain the previous behavior and always refresh on every startup, set:

acceleration:
  refresh_on_startup: always

Cookbook Updates​

No new recipes.

Upgrading​

To upgrade to v1.0.6, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.6 image:

docker pull spiceai/spiceai:1.0.6

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed​

Dependencies​

Changelog​

  • Implement proper ready_state: on_registration for federation enabled accelerators by @phillipleblanc in #5019
  • Add indexes and primary keys mismatch detection for DuckDB Acceleration by @sgrebnov in #5045
  • Add comprehensive integration tests for the ready_state behavior by @phillipleblanc in #5042
  • Add test Spicepod for acceleration with constraints by @sgrebnov in #4891
  • Add test Spicepod for DuckDB append acceleration with constraints by @sgrebnov in #4898
  • Add DuckDB graceful shutdown test to E2E CI tests by @sgrebnov in #5047
  • Update duckdb_append_with_pk_and_indexes.yaml (work for duckdb 1.1.x) by @sgrebnov in #5067
  • fix: Downgrade to DuckDB 1.1.3 by @peasee in #5055
  • fix: Acceleration federation integration test by @peasee in #5070
  • Improvements to Iceberg Catalog/Data Connector by @phillipleblanc in #5071
  • Add Results-Cache-Status to indicate query result came from cache by @phillipleblanc in #4809
  • fix: Spice.ai schema inference by @peasee in #4674
  • Add refresh_on_startup Spicepod configuration param by @phillipleblanc and @sgrebnov in #5086
  • Test restart behavior of DuckDB file acceleration against glue iceberg table by @Sevenannn in #5075
  • Run Iceberg Data Connector - DuckDB File mode integration test by @Sevenannn in #5069
  • Integration test for glue iceberg catalog by @Sevenannn in #5077

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.0.5...v1.0.6

Spice v1.0.5 (Mar 11, 2025)

· 4 min read
Sergei Grebnov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.0.5 🧊

Spice v1.0.5 expands Iceberg support with the introduction of the Iceberg Data Connector, in addition to the existing Iceberg Catalog Connector. This new connector enables direct dataset creation and configuration for specific Iceberg objects, supporting federated and accelerated SQL queries on Apache Iceberg tables.

Performance improvements include object-store optimized Parquet pruning in append mode, where object-store metadata is now leveraged alongside Hive partitioning to optimize file pruning. This results in faster and more efficient queries.

DuckDB has been upgraded to v1.2.0, along with additional stability improvements, including improved graceful shutdown and the ability to configure the DuckDB memory limit.

Additional updates include support for the Arrow Map type.

Highlights in v1.0.5​

  • New Iceberg Data Connector: Enables direct dataset creation and querying of Iceberg tables.

    Example usage in spicepod.yaml:

    datasets:
      - from: iceberg:https://iceberg-catalog-host.com/v1/namespaces/my_namespace/tables/my_table
        name: my_table
        params:
          # Same as Iceberg Catalog Connector
        acceleration:
          enabled: true

    For detailed setup instructions, authentication options, and configuration parameters, refer to the Iceberg Data Connector documentation.

  • Improved Parquet pruning in append mode: Uses object-store metadata for more efficient file pruning.

  • DuckDB upgrade to v1.2.0 with improved graceful shutdown: Read the DuckDB v1.2.0 announcement for details, including breaking changes for map and list_reduce. Graceful shutdown of DuckDB has been improved for better stability across restarts.

  • Configurable DuckDB memory limit: Use the duckdb_memory_limit parameter to set the DuckDB acceleration memory limit:

    - from: spice.ai:path.to.my_dataset
      name: my_dataset
      acceleration:
        params:
          duckdb_memory_limit: '2GB'
        enabled: true
        engine: duckdb
        mode: file

Contributors​

  • @peasee
  • @phillipleblanc
  • @sgrebnov
  • @lukekim

Breaking Changes​

Upgrading​

To upgrade to v1.0.5, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.5 image:

docker pull spiceai/spiceai:1.0.5

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

What's Changed​

Dependencies​

Changelog​

  • fix: Update OpenAI model health check by @peasee in #4849
  • fix: Allow metrics endpoint setting in CLI by @peasee in #4939
  • DuckDB acceleration: fix Decimal with zero scale support by @sgrebnov in #4922
  • Introduce runtime shutdown state by @sgrebnov in #4917
  • Add support for Flight and HTTP endpoints configuration to Spice CLI (run and sql) by @sgrebnov and @lukekim in #4913
  • Fix Datafusion resources deallocation during shutdown by @sgrebnov in #4912
  • DuckDB: fix error handling during record batch insertion by @sgrebnov in #4894
  • DuckDB: add support for Map Arrow type for DuckDB acceleration by @sgrebnov in #4887
  • Upgrade to DuckDB v1.2.0 by @sgrebnov in #4842
  • Gracefully shutdown the runtime and deallocate static resources by @sgrebnov in #4879
  • Implement an Iceberg Data Connector by @phillipleblanc in #4941
  • Don't trace canceled dataset refresh during runtime termination by @sgrebnov in #4958
  • Use metadata column last_modified when specified as a time_column by @phillipleblanc in #4970
  • Add duckdb_memory_limit param support for DuckDB acceleration by @sgrebnov in #4971
  • Add Iceberg dataset integration test by @phillipleblanc in #4950

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.0.4...v1.0.5

Spice v1.0.1 (Jan 27, 2025)

· 5 min read
Qianqian Liu
Software Engineer at Spice AI

Spice v1.0.1 focuses on an improved developer experience, with automatic CUDA GPU detection for local models, in addition to bug fixes. Notably, the Iceberg Catalog Connector now supports AWS Glue, including Sig v4 authentication.

Highlights in v1.0.1​

  • AWS Glue Support for Iceberg Catalog Connector: The Iceberg Catalog Connector now supports AWS Glue. Example spicepod.yaml configuration:
catalogs:
  - from: iceberg:https://glue.ap-northeast-2.amazonaws.com/iceberg/v1/catalogs/123456789012/namespaces
    name: glue
  • spice upgrade CLI Command: The spice upgrade CLI command detects more edge cases for a smoother upgrade experience.

  • GPU Acceleration Detection: The Spice CLI now automatically detects and enables CUDA (NVIDIA GPUs) GPU acceleration when supported in addition to Metal (M-Series on macOS).

  • Python SDK: The Python SDK (spicepy) has been updated to v3.0.0, aligning the SDK with the Runtime.

Breaking changes​

No breaking changes.

Dependencies​

No major dependency changes.

Cookbook​

Upgrading​

To upgrade to v1.0.1, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.1 image:

docker pull spiceai/spiceai:1.0.1

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

Contributors​

  • @Jeadie
  • @phillipleblanc
  • @ewgenius
  • @peasee
  • @Sevenannn
  • @sgrebnov
  • @lukekim

What's Changed​

- Update acknowledgements by @github-actions in https://github.com/spiceai/spiceai/pull/4459
- docs: 1.0 release notes by @peasee in https://github.com/spiceai/spiceai/pull/4440
- Create a release-only workflow that uses a previous run's artifacts by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4461
- Add publish-only CUDA workflow by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4462
- Fix the CUDA release workflow by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4463
- docs: Update SECURITY.md for stable by @peasee in https://github.com/spiceai/spiceai/pull/4465
- docs: Update endgame by @peasee in https://github.com/spiceai/spiceai/pull/4460
- docs: Promote HF and File model components by @peasee in https://github.com/spiceai/spiceai/pull/4457
- fix: E2E test release installation by @peasee in https://github.com/spiceai/spiceai/pull/4466
- Fix publish part of CUDA workflow by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4467
- Fix broken docs links in README by @ewgenius in https://github.com/spiceai/spiceai/pull/4468
- Update benchmark snapshots by @github-actions in https://github.com/spiceai/spiceai/pull/4474
- Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4477
- Add instruction to force-install CPU runtime to v1.0 release notes by @sgrebnov in https://github.com/spiceai/spiceai/pull/4469
- feat: Add WIP testoperator dispatch workflow by @peasee in https://github.com/spiceai/spiceai/pull/4478
- Fix Bug: invalid REPL cursor position on Windows by @sgrebnov in https://github.com/spiceai/spiceai/pull/4480
- feat: Download latest spiced commit for testoperators by @peasee in https://github.com/spiceai/spiceai/pull/4483
- Add compute engine image by @lukekim in https://github.com/spiceai/spiceai/pull/4486
- fix: Testoperator git fetch depth by @peasee in https://github.com/spiceai/spiceai/pull/4484
- feat: New spicepods, testoperator improvements, TPCDS Q1 fix by @peasee in https://github.com/spiceai/spiceai/pull/4475
- Add 87 CUDA compatibility to build CI by @Jeadie in https://github.com/spiceai/spiceai/pull/4489
- Use OpenAI golang client in `spice chat` by @Jeadie in https://github.com/spiceai/spiceai/pull/4491
- Verify `search` and `chat` on Windows as part of AI installation tests by @sgrebnov in https://github.com/spiceai/spiceai/pull/4492
- feat: Add testoperator dispatch command by @peasee in https://github.com/spiceai/spiceai/pull/4479
- Run CUDA builds on non-GPU instances by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4496
- Use upgraded spice cli when performing runtime upgrade in spice upgrade by @Sevenannn in https://github.com/spiceai/spiceai/pull/4490
- Revert "Use OpenAI golang client in `spice chat` (#4491)" by @Jeadie in https://github.com/spiceai/spiceai/pull/4532
- Make Anthropic rate limit error message friendlier by @sgrebnov in https://github.com/spiceai/spiceai/pull/4501
- Update supported CUDA targets: add 87(cli), remove 75 by @sgrebnov in https://github.com/spiceai/spiceai/pull/4509
- Support AWS Glue for Iceberg catalog connector by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4517
- Package CUDA runtime libraries into artifact for Windows by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4497

Full Changelog: https://github.com/spiceai/spiceai/compare/v1.0.0...v1.0.1

Resources​

Community​

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.