
Announcing Spice.ai Open Source 1.0-stable: A Portable Compute Engine for Data-Grounded AI — Now Ready for Production

· 12 min read
Luke Kim
Founder and CEO of Spice AI

🎉 Today marks the 1.0-stable release of Spice.ai Open Source—purpose-built to help enterprises ground AI in data. By unifying federated data query, retrieval, and AI inference into a single engine, Spice mitigates AI hallucinations, accelerates data access for mission-critical workloads, and makes it simple for developers to build fast, accurate, data-intensive applications across cloud, edge, and on-prem.

Spice.ai Open Source

Enterprise AI systems are only as good as the context they’re provided. When data is inaccessible, incomplete, or outdated, even the most advanced models can generate outputs that are inaccurate, misleading, or even harmful. In one example, a chatbot was tricked into selling a 2024 Chevy Tahoe for $1 due to a lack of contextual safeguards. For enterprises, errors like these are unacceptable—it’s the difference between success and failure.

Retrieval-Augmented Generation (RAG) is part of the answer — but traditional RAG is only as good as the data it has access to. If data is locked away in disparate, often legacy data systems, or cannot be stitched together for accurate retrieval, you get, as Benioff puts it, "Clippy 2.0".

Benioff Post

And often, after initial Python-scripted pilots, you’re left with a new set of problems: How do you deploy AI that meets enterprise requirements for performance, security, and compliance while being cost efficient? Directly querying large datasets for retrieval is slow and expensive. Building and maintaining complex ETL pipelines requires expensive data teams that most organizations don’t have. And because enterprise data is highly sensitive, you need secure access and auditable observability—something many RAG setups don’t even consider.

Developers need a platform at the intersection of data and AI—one specifically designed to ground AI in data. A solution that unifies data query, search, retrieval, and model inference—ensuring performance, security, and accuracy so you can build AI that you and your customers can trust.

Spice.ai OSS: A portable data, AI, and retrieval engine

In March of 2024, we introduced Spice.ai Open Source, a SQL query engine to materialize and accelerate data from any database, data warehouse, or data lake so that data can be accessed wherever it lives across the enterprise — consistently fast. But that was only the start.

Building on this foundation, Spice.ai OSS unifies data, retrieval, and AI to provide current, relevant context that mitigates AI “hallucinations” and significantly reduces incorrect outputs, just one of the many mission-critical use cases Spice.ai addresses.

Spice is a portable, single-node compute engine built in Rust. It embeds DataFusion, the fastest single-node SQL query engine, to serve secure, virtualized data views to data-intensive apps, AI, and agents. Queries are accelerated locally to sub-second latency using Apache Arrow, DuckDB, or SQLite.

Now at version 1.0-stable, Spice is ready for production. It’s already deployed in enterprise use at Twilio, Barracuda Networks, and NRC Health, and can be deployed anywhere—cloud-hosted, BYOC, edge, on-prem.

A diagram of the Spice.ai OSS compute-engine

Data-grounded AI

Data-grounded AI anchors models in accurate, current, and domain-specific data, rather than relying solely on pre-trained knowledge. By unifying enterprise data—across databases, data lakes, and APIs—and applying advanced ingestion and retrieval techniques, these systems dynamically incorporate real-world context at inference time without leaking sensitive information. This approach helps developers minimize hallucinations, reduce operational risk, and build trust in AI by delivering reliable, relevant outputs.

Comparison of AI without contextual data and with data-grounded AI

How does Spice.ai OSS solve data-grounding?

With Spice, models always have access to low-latency materializations of real-time data for near-instant retrieval, minimizing data movement while enabling AI feedback so apps and agents can learn and adapt over time. For example, you can join customer records from PostgreSQL with sales data in Snowflake and logs stored in S3—all with a single SQL query or LLM function call.
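As an illustration, a federated query of that shape might look like the following, assuming the three sources have been registered in a Spicepod as datasets named customers, sales, and request_logs (all names here are illustrative, not from the original post):

-- Join data that physically lives in PostgreSQL, Snowflake, and S3,
-- referenced by the dataset names registered in the Spicepod.
SELECT
    c.customer_id,
    c.name,
    SUM(s.amount)       AS total_sales,
    COUNT(l.request_id) AS log_events
FROM customers c                                       -- PostgreSQL
JOIN sales s        ON s.customer_id = c.customer_id   -- Snowflake
JOIN request_logs l ON l.customer_id = c.customer_id   -- Parquet in S3
GROUP BY c.customer_id, c.name
ORDER BY total_sales DESC
LIMIT 10;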

Secure Compute Engine for Data-grounded AI

Spice includes an advanced suite of LLM tools including vector and hybrid search, text-to-SQL, SQL query and retrieval, data sampling, and context formatting—all purpose-built for accurate outputs.

The latest research is continually incorporated so that teams can focus on business objectives rather than trying to keep up with the incredibly fast-moving and often overwhelming space of AI.

Spice.ai OSS: The engine that makes AI work

Spice.ai OSS is a lightweight, portable runtime (single ~140 MB binary) with the capabilities of a high-speed cloud data warehouse built into a self-hostable AI inference engine, all in a single, run-anywhere package.

It's designed to be distributed and integrated at the application level, rather than being a bulky, centralized system to manage, and is often deployed as a sidecar. Whether running one Spice instance per service or one for each customer, Spice is flexible enough to fit your application architecture.

Apps and agents integrate with Spice.ai OSS via three industry-standard APIs, so that it can be adopted incrementally with minimal changes to applications.

  1. SQL Query APIs: HTTP, Arrow Flight, Arrow Flight SQL, ODBC, JDBC, and ADBC.

  2. OpenAI-Compatible APIs: HTTP APIs compatible with the OpenAI SDK and AI SDK, with local model serving (CUDA/Metal accelerated) and a gateway to hosted models.

  3. Iceberg Catalog REST APIs: A unified Iceberg Catalog REST API.
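For example, with a Spice instance running locally, the first two APIs can be exercised directly over HTTP. This is a minimal sketch, assuming the default HTTP port (8090), a registered dataset named customers, and a Spicepod-defined model named assistant (the port, dataset, and model names are assumptions for illustration):

# Run SQL over the HTTP query API (the request body is plain SQL).
curl -X POST http://localhost:8090/v1/sql \
  -H "Content-Type: text/plain" \
  -d 'SELECT COUNT(*) AS customers FROM customers'

# Chat with a Spicepod-defined model via the OpenAI-compatible API.
curl -X POST http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "assistant", "messages": [{"role": "user", "content": "How many customers signed up this week?"}]}'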

Spice.ai OSS architecture

Key features of Spice.ai OSS include:

  • Federated SQL Query Across Data Sources: Perform SQL queries across disparate data sources with over 25 open-source data connectors, including catalogs (Unity Catalog, Iceberg Catalog, etc.), databases (PostgreSQL, MySQL, etc.), data warehouses (Snowflake, Databricks, etc.), and data lakes (S3, ABFS, MinIO, etc.).

  • Data Materialization and Acceleration: Locally materialize and accelerate data using Arrow, DuckDB, SQLite, or PostgreSQL, enabling low-latency, high-speed transactional and analytical queries (see the Spicepod sketch after this list). Data can be ingested via Change Data Capture (CDC) using Debezium, through catalog integrations, on an interval, or by trigger.

  • AI Inference, Gateway, and LLM Toolset: Load and serve models like Llama 3 locally, or use Spice as a gateway to hosted AI platforms including OpenAI, Anthropic, xAI, and NVIDIA NIM. Automatically use a purpose-built LLM toolset for data-grounded AI.

  • Enterprise Search and Retrieval: Advanced search capabilities for LLM applications, including vector-based similarity search and hybrid search across structured and unstructured data. Real-time retrieval grounds AI applications in dynamic, contextually relevant information, enabling state-of-the-art RAG.

  • LLM Memory: Enable long-term memory for LLMs by efficiently storing, retrieving, and updating context across interactions. Support real-time contextual continuity and grounding for applications that require persistent and evolving understanding.

  • LLM Evaluations: Test and boost model reliability and accuracy with integrated LLM-powered evaluation tools to assess and refine AI outputs against business objectives and user expectations.

  • Monitoring and Observability: Ensure operational excellence with telemetry, distributed tracing, query/task history, and metrics that provide end-to-end visibility into data flows and model performance in production.

  • Deploy Anywhere; Edge-to-Cloud Flexibility: Deploy Spice as a standalone instance, Kubernetes sidecar, microservice, or scalable cluster, with the flexibility to run distributed across edge, on-premises, or any cloud environment. Spice AI offers managed, cloud-hosted deployments of Spice.ai OSS through the Spice Cloud Platform (SCP).
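As a concrete sketch of how federated query and local acceleration are configured, the Spicepod below registers a PostgreSQL table and materializes it with DuckDB. Connection parameter names follow the PostgreSQL Data Connector documentation, and the dataset, secret, and refresh values are placeholders for illustration:

version: v1
kind: Spicepod
name: grounded_app

datasets:
  - from: postgres:public.customers   # federated source
    name: customers
    params:
      pg_host: ${secrets:PG_HOST}
      pg_db: crm
      pg_user: ${secrets:PG_USER}
      pg_pass: ${secrets:PG_PASS}
    acceleration:                      # local materialization
      enabled: true
      engine: duckdb
      refresh_check_interval: 10s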

Real-world use-cases

Spice delivers data readiness for teams like Twilio and Barracuda, and accelerates time-to-market of data-grounded AI, such as with developers on GitHub and at NRC Health.

Here are some examples of how Spice.ai OSS solves real problems for these teams.


Twilio Logo

CDN for Databases — Twilio

Twilio datalake and database acceleration using Spice.ai Open Source

A core requirement for many applications is consistently fast data access, with or without AI. Twilio uses Spice.ai OSS as a data acceleration framework, or Database CDN, staging data in object storage that's accelerated with Spice for sub-second queries, improving the reliability of critical services in its messaging pipelines. Before Spice, a database outage could result in a service outage.

"Spice opened the door to take these critical control-plane datasets and move them next to our services in the runtime path."

Peter Janovsky
Software Architect at Twilio

With Spice, Twilio has achieved:

  • Significantly Improved Query Performance: Used Spice to co-locate control-plane data in the messaging runtime, accelerated with DuckDB, to send messages with a P99 query time of < 5ms.

  • Low-Latency Multi-Tenancy Controls: Spice is integrated into the message-sending runtime to manage multi-tenancy data controls. Before, data changes required manual triggers and took hours to propagate. Now, they update automatically and reach the messaging front door within five minutes via a resilient data-availability framework.

  • Mission-Critical Reliability: Reduced reliance on queries to databases by using Spice to accelerate data in-memory locally, with automatic failover to query data directly from S3, ensuring uninterrupted service even during database downtime.

"With a simple drop in container, we are able to double our data redundancy by using Spice."

David Blum
Principal Software Engineer at Twilio

By adopting Spice.ai OSS, Twilio strengthened its infrastructure, ensuring reliable services for customers and scalable data access across its growing platform.


Barracuda Logo

Datalake Accelerator — Barracuda

Barracuda Delta Lake acceleration using Spice.ai Open Source

Barracuda uses Spice.ai OSS to modernize data access for their email archiving and audit log systems, solving two big problems: slow query performance and costly queries. Before Spice, customers experienced frustrating delays of up to two minutes when searching email archives, due to the data volume being queried.

"It's just a huge gain in responsiveness for the customer."

David Stancu
Senior Principal Software Engineer at Barracuda

With Spice, Barracuda has achieved:

  • 100x Query Performance Improvement: Accelerated email archive queries from a P99 time of 2 minutes to 100-200 milliseconds.

  • Efficient Audit Logs: Offloaded audit logs to Parquet files in S3, queried directly by Spice.

  • Mission-Critical Reliability: Reduced load on Cassandra, improving overall infrastructure stability.

  • Significant Cost Reduction: Replaced expensive Databricks Spark queries, significantly cutting expenses while improving performance.

"It just kinda spins up and it just works, which is really nice."

Darin Douglass
Principal Software Engineer at Barracuda

NRC Health Logo

Data-Grounded AI apps and agents — NRC Health

NRC Health data-grounded AI using Spice.ai Open Source

NRC Health uses Spice.ai OSS to simplify and accelerate the development of data-grounded AI features, unifying data from multiple platforms, including MySQL, SharePoint, and Salesforce, into secure, AI-ready data. Before Spice, scaling AI expertise across the organization to build complex RAG-based scenarios was a challenge.

"What I like the most about Spice, it's very easy to collect data from different data sources and I am able to chat with this data and do everything in one place."

Dustin Warner
Director of Software Engineering at NRC Health

With Spice OSS, NRC Health has achieved:

  • Developer Productivity: Partnered with Spice in three company-wide AI hackathons to build complete end-to-end data-grounded AI features in hours instead of weeks or months.

  • Accelerated Time-to-Market: Centralized data integration and AI model serving in an enterprise-ready service, accelerating time-to-market.

"I explored AI, embeddings, search algorithms, and features with our own database. I read a lot about this, but it was so much easier to use Spice than doing it from scratch."

Taher Ahmed
Software Engineering Manager at NRC Health

Data-Grounded AI Software Development — Spice.ai GitHub Copilot Extension

When using tools like GitHub Copilot, developers often face the hassle of switching between multiple environments to get the data they need.

The Spice.ai for GitHub Copilot Extension, built on Spice.ai OSS, gives developers the ability to connect data from external sources to Copilot, grounding Copilot in relevant data not generally available on GitHub, like test data stored in a development database.

Developers can simply type @spiceai to interact with connected data, with relevant answers now surfaced directly in Copilot Chat, significantly improving productivity.

Why choose Spice.ai OSS?

Adopting Spice.ai OSS addresses real challenges in modern AI development: it grounds models in accurate, domain-specific, real-time data. With Spice, engineering teams can focus on what matters—delivering innovative, accurate, AI-powered applications and agents that work. Additionally, Spice.ai OSS is open-source under Apache 2.0, ensuring transparency and extensibility so your organization remains free to innovate without vendor lock-in.

Get started in 30 seconds

You can install Spice.ai OSS in less than a minute, on macOS, Linux, and Windows:

curl https://install.spiceai.org | /bin/bash

Or using brew:

brew install spiceai/spiceai/spice

Once installed, follow the Getting Started with Spice.ai guide to ground OpenAI chat with data from S3 in less than 2 minutes.
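For reference, the kind of Spicepod that guide produces looks roughly like the sketch below. The bucket path, dataset name, and secret name are placeholders, and the parameters follow the S3 connector and OpenAI model provider documentation (verify against the docs for your version):

version: v1
kind: Spicepod
name: s3_grounded_chat

datasets:
  - from: s3://my-bucket/reports/
    name: reports
    params:
      file_format: parquet
    acceleration:
      enabled: true

models:
  - from: openai:gpt-4o-mini
    name: assistant
    params:
      openai_api_key: ${secrets:SPICE_OPENAI_API_KEY}

Running spice run in the directory containing this spicepod.yaml starts the runtime, and the model can then answer questions grounded in the accelerated S3 data.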

Looking ahead

The 1.0-stable release of Spice.ai OSS marks a major step toward accurate AI for developers. By combining data, AI, and retrieval into a unified runtime, Spice anchors AI in relevant, real-time data—helping you build apps and agents that work.

A cloud-hosted, fully managed Spice.ai OSS service is available in the Spice Cloud Platform. It’s SOC 2 Type II compliant and makes it easy to operate Spice deployments.

Beyond apps and agents, the vision for Spice is to be the best digital labor platform for building autonomous AI employees and teams. These are exciting times! Stay tuned for some upcoming announcements later in 2025!

The Spice AI Team

Learn more

  • Cookbook: 47+ samples and examples using Spice.ai OSS
  • Documentation: Learn about features, use cases, and advanced configurations
  • X: Follow @spice_ai on X for news and updates
  • Discord: Connect with the team and the community
  • GitHub: Star the repo, contribute, and raise issues

Spice v1.0-stable (Jan 20, 2025)

· 8 min read
William Croxson
Senior Software Engineer at Spice AI

🎉 After 47 releases, Spice.ai OSS has reached production readiness with the 1.0-stable milestone!

The core runtime and features such as query federation, query acceleration, catalog integration, search, and AI inference have all graduated to stable status, along with key component graduations across data connectors, data accelerators, catalog connectors, and AI model providers.

Highlights in v1.0-stable

Breaking Changes

  • Default Runtime Version: The CLI now installs the GPU-accelerated, AI-capable runtime by default (if supported) when running spice install or spice run. To force-install the non-GPU version, run spice install ai --cpu.

  • Default OpenAI Model: The default OpenAI model has been updated to gpt-4o-mini.

  • Identifier Normalization: Unquoted identifiers such as table names are no longer normalized to lowercase. Identifiers will now retain their exact case as provided.

  • Sandboxed Docker Image: The Runtime Docker Image now runs the spiced process as the nobody user in a minimal chroot sandbox.

  • Insecure S3 and ABFS endpoints: The S3 and ABFS connectors now reject insecure (HTTP) endpoints unless allow_http is explicitly enabled. Refer to the documentation for details.
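For teams intentionally using a plain-HTTP object store (for example, a local MinIO instance), the opt-in looks roughly like the sketch below. The endpoint, bucket, and dataset name are placeholders; the parameter names are taken from the release note above and the S3 connector documentation:

datasets:
  - from: s3://local-bucket/data/
    name: local_data
    params:
      file_format: parquet
      s3_endpoint: http://localhost:9000   # plain HTTP endpoint
      allow_http: true                     # explicit opt-in required as of 1.0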

Dependencies

No major dependency changes.

Upgrading

To upgrade to v1.0.0, use one of the following methods:

CLI:

spice upgrade

Homebrew:

brew upgrade spiceai/spiceai/spice

Docker:

Pull the spiceai/spiceai:1.0.0 image:

docker pull spiceai/spiceai:1.0.0

For available tags, see DockerHub.

Helm:

helm repo update
helm upgrade spiceai spiceai/spiceai

Contributors

  • @peasee
  • @ewgenius
  • @Jeadie
  • @Sevenannn
  • @lukekim
  • @phillipleblanc
  • @sgrebnov

What's Changed

- feat: Update load test criteria, testoperator updates by @peasee in <https://github.com/spiceai/spiceai/pull/4311>
- Update helm for v1.0.0-rc.5 by @ewgenius in <https://github.com/spiceai/spiceai/pull/4313>
- Update spicepod.schema.json by @github-actions in <https://github.com/spiceai/spiceai/pull/4318>
- Bump version to v1.0.0, update SECURITY.md by @ewgenius in <https://github.com/spiceai/spiceai/pull/4314>
- Initial criteria for models, embeddings by @Jeadie in <https://github.com/spiceai/spiceai/pull/4223>
- Update benchmark snapshots by @github-actions in <https://github.com/spiceai/spiceai/pull/4321>
- Add dremio param for running load test by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4315>
- Promote Databricks (mode: delta_lake) connector to stable by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4328>
- Handle failed query in load test by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4327>
- feat: Use load test hours for baseline query sets by @peasee in <https://github.com/spiceai/spiceai/pull/4334>
- Fix typo in 1.0.0-rc.5 release notes by @ewgenius in <https://github.com/spiceai/spiceai/pull/4329>
- feat: add testoperator data consistency by @peasee in <https://github.com/spiceai/spiceai/pull/4319>
- docs: Release DuckDB connector stable by @peasee in <https://github.com/spiceai/spiceai/pull/4335>
- Fix DocumentDB -> DynamoDB by @lukekim in <https://github.com/spiceai/spiceai/pull/4339>
- Update benchmark snapshots by @github-actions in <https://github.com/spiceai/spiceai/pull/4337>
- fix: Download hits.parquet from MinIO for benchmark by @peasee in <https://github.com/spiceai/spiceai/pull/4338>
- Update openapi.json by @github-actions in <https://github.com/spiceai/spiceai/pull/4341>
- Remove evil averages by @lukekim in <https://github.com/spiceai/spiceai/pull/4343>
- Don't run builds on non-code changes by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4344>
- Remove streaming requirement from Databricks spark Beta and Spark connector Beta by @ewgenius in <https://github.com/spiceai/spiceai/pull/4345>
- Update s3 tpcds spicepods by @ewgenius in <https://github.com/spiceai/spiceai/pull/4346>
- Explicitly set required scale factor for throughput and load tests by @ewgenius in <https://github.com/spiceai/spiceai/pull/4347>
- Fix s3 tpcds dataset name by @ewgenius in <https://github.com/spiceai/spiceai/pull/4348>
- Promote Iceberg Catalog Connector to Beta by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4350>
- Update s3 clickbench benchmark snapshots by @ewgenius in <https://github.com/spiceai/spiceai/pull/4351>
- fix: DuckDB clickbench on zero results by @peasee in <https://github.com/spiceai/spiceai/pull/4349>
- Add integration test with snapshots for databricks catalog connector by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4353>
- refactor: Remove on zero results from benchmarks, add data consistency workflow by @peasee in <https://github.com/spiceai/spiceai/pull/4354>
- Fix Bug: No field named body_embedding when do vector search with refresh sql containing subset of columns by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4297>
- docs: Update roadmap by @peasee in <https://github.com/spiceai/spiceai/pull/4364>
- feat: Release accelerators stable by @peasee in <https://github.com/spiceai/spiceai/pull/4361>
- Add TPCH/TPCDS test spicepods for MySQL by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4365>
- Catch when an insecure (http) S3 and ABFS data connectors endpoint is used without specifying the `allow_http` parameter by @ewgenius in <https://github.com/spiceai/spiceai/pull/4363>
- Update ROADMAP - Iceberg catalog alpha for v1.0 by @ewgenius in <https://github.com/spiceai/spiceai/pull/4367>
- Promote databricks catalog and databricks (spark_connect) connector to beta by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4369>
- Update Roadmap - Iceberg beta by @ewgenius in <https://github.com/spiceai/spiceai/pull/4373>
- Build CUDA binaries for Linux by @Jeadie in <https://github.com/spiceai/spiceai/pull/4320>
- Promote Nvidia NIM as Alpha by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4380>
- Promote xai to alpha by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4381>
- Update stable criteria for object store based connectors by @ewgenius in <https://github.com/spiceai/spiceai/pull/4383>
- Testoperator: http consistency and overhead tests, fixes and ci by @ewgenius in <https://github.com/spiceai/spiceai/pull/4382>
- Promote S3 Data Connector to Stable by @ewgenius in <https://github.com/spiceai/spiceai/pull/4385>
- Download platform-supported CUDA binary version on Linux by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4356>
- Fix http consistency test workflow, add overhead workflow by @ewgenius in <https://github.com/spiceai/spiceai/pull/4387>
- feat: Add Postgres test spicepods by @peasee in <https://github.com/spiceai/spiceai/pull/4388>
- Fix typos + specific in model criteria; Make explicit alpha/beta tests for LLMS in `crates/llms/tests`. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4377>
- Fix federation bug for correlated subqueries of deeply nested Dremio tables by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4389>
- Fix http overhead workflow by @ewgenius in <https://github.com/spiceai/spiceai/pull/4390>
- Tweak model tests, fix embedding input by @ewgenius in <https://github.com/spiceai/spiceai/pull/4391>
- Promote Dremio to Stable quality by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4392>
- Add beta functionality tests for embedding models. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4352>
- docs: Release postgres connector stable by @peasee in <https://github.com/spiceai/spiceai/pull/4398>
- Increase timeout for model response in E2E tests by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4399>
- Disable ident normalization (i.e. `SELECT MyColumn from table` works) by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4400>
- Preserve schema metadata by @ewgenius in <https://github.com/spiceai/spiceai/pull/4402>
- Make models integration tests tracing less verbose by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4403>
- Fix `cuda` feature build on Windows by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4404>
- Promote MySQL to Stable by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4406>
- docs: Release Delta Lake and Unity catalog by @peasee in <https://github.com/spiceai/spiceai/pull/4405>
- Use `gpt-4o-mini` as a default model for openai provider by @ewgenius in <https://github.com/spiceai/spiceai/pull/4410>
- Fix streaming for Openai and Anthropic by @Jeadie in <https://github.com/spiceai/spiceai/pull/4409>
- Tweak model loading and missing tool errors messages by @ewgenius in <https://github.com/spiceai/spiceai/pull/4412>
- Spice CLI: fallback to CPU build for unsupported GPU Compute Capability by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4407>
- Build Windows CUDA binaries as part of `build_and_release` workflow by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4386>
- Update docs link by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4416>
- feat: Add CPU models install escape hatch by @peasee in <https://github.com/spiceai/spiceai/pull/4419>
- Handle OpenAI API Errors by @ewgenius in <https://github.com/spiceai/spiceai/pull/4417>
- Update spice cli to use `GH_TOKEN` or `GITHUB_TOKEN` env variables when calling releases api by @ewgenius in <https://github.com/spiceai/spiceai/pull/4175>
- Implement secure sandboxing for Docker image by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4411>
- Automatically install supported CUDA binary on Windows by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4420>
- Metrics for LLMs+ embeddings by @Jeadie in <https://github.com/spiceai/spiceai/pull/4418>
- Jeadie/25 01 17/beta perf by @Jeadie in <https://github.com/spiceai/spiceai/pull/4397>
- Pass GitHub token to all CI steps calling spice run by @ewgenius in <https://github.com/spiceai/spiceai/pull/4423>
- Run the models integration tests on PRs by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4421>
- Run CUDA builds in a separate workflow by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4430>
- Promote OpenAI models and embeddings providers to RC by @ewgenius in <https://github.com/spiceai/spiceai/pull/4432>
- Update link to retrieval-augmented generation (RAG) details by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4433>
- Unity catalog should strip parameter prefix before passing parameters to delta lake factory by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4436>
- Update quickstart traces to match current version by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4435>
- Update Supported Embeddings Providers Readme section by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4434>
- Local models can stream tools by @Jeadie in <https://github.com/spiceai/spiceai/pull/4429>
- fix: Use MetricsCollector::show() for HTTP testoperator commands by @peasee in <https://github.com/spiceai/spiceai/pull/4442>
- Fix run query action by @ewgenius in <https://github.com/spiceai/spiceai/pull/4444>
- Default to AI-enabled runtime for `spice run`/`spice install` by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4443>
- Change no spicepod.yaml log to warning by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4447>
- refactor: Update Catalog Connector error messages by @peasee in <https://github.com/spiceai/spiceai/pull/4441>
- Fix panic when converting OTel metrics by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4449>
- refactor: Update model errors by @peasee in <https://github.com/spiceai/spiceai/pull/4446>
- Update spiceai/mistral.rs to silence metadata logs by @ewgenius in <https://github.com/spiceai/spiceai/pull/4452>
- fix xAI; don't use openai defaults by @Jeadie in <https://github.com/spiceai/spiceai/pull/4450>
- Improves the UX of using huggingface models by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4451>
- Add GH Workflow to test `spice ai` runtime installation by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4448>
- fix: Use specific model errors where available by @peasee in <https://github.com/spiceai/spiceai/pull/4454>
- Detect and report unsupported embedding column type during dataset registration by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4456>
- Handle Errors by @Jeadie in <https://github.com/spiceai/spiceai/pull/4455>
- Catch and report negative openai_temperature error by @Sevenannn in <https://github.com/spiceai/spiceai/pull/4453>
- Clarify release check error message if it is caused by wrong GH token by @ewgenius in <https://github.com/spiceai/spiceai/pull/4458>

**Full Changelog**: <https://github.com/spiceai/spiceai/compare/v1.0.0-rc.5...v1.0.0>

Resources

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.

Spice v1.0-rc.5 (Jan 13, 2025)

· 7 min read
Evgenii Khramkov
Senior Software Engineer at Spice AI

Announcing the release of Spice v1.0-rc.5 🛠️

Spice v1.0.0-rc.5 is the fifth release candidate for the first major version of Spice.ai OSS. This release focuses on production readiness and critical bug fixes. In addition, a new DynamoDB Data Connector has been added, along with automatic detection of GPU acceleration when running Spice using the CLI.

Highlights in v1.0-rc.5

  • Automatic GPU Acceleration Detection: Automatically detect and use GPU acceleration when running via the CLI. Install AI components locally using the CLI command spice install ai. Currently supports NVIDIA CUDA and Apple Metal (M-series).

  • DynamoDB Data Connector: Query AWS DynamoDB tables using SQL with the new DynamoDB Data Connector.

datasets:
  - from: dynamodb:users
    name: users
    params:
      dynamodb_aws_region: us-west-2
      dynamodb_aws_access_key_id: ${secrets:aws_access_key_id}
      dynamodb_aws_secret_access_key: ${secrets:aws_secret_access_key}
    acceleration:
      enabled: true

sql> describe users;
+----------------+-----------+-------------+
| column_name    | data_type | is_nullable |
+----------------+-----------+-------------+
| created_at     | Utf8      | YES         |
| date_of_birth  | Utf8      | YES         |
| email          | Utf8      | YES         |
| account_status | Utf8      | YES         |
| updated_at     | Utf8      | YES         |
| full_name      | Utf8      | YES         |
| ...            |           |             |
+----------------+-----------+-------------+
  • File Data Connector: Graduated to Stable.

  • Dremio Data Connector: Graduated to Release Candidate (RC).

  • Spice.ai, Spark, and Snowflake Data Connectors: Graduated to Beta.

Dependencies

No major dependency changes.

Contributors

  • @Jeadie
  • @phillipleblanc
  • @ewgenius
  • @peasee
  • @Sevenannn
  • @lukekim

What's Changed

* Update acknowledgements by @github-actions in https://github.com/spiceai/spiceai/pull/4190
* Ensure non-nullity of primary keys in `MemTable`; check validity of initial data. by @Jeadie in https://github.com/spiceai/spiceai/pull/4158
* Bump version to v1.0.0 stable by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4191
* Fix metal + models download by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4193
* Update spice.ai connector beta roadmap by @ewgenius in https://github.com/spiceai/spiceai/pull/4194
* feat: verify on zero results snapshots by @peasee in https://github.com/spiceai/spiceai/pull/4195
* Add throughput test module to `test-framework` by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4196
* Update Spice.ai TPCH snapshots by @ewgenius in https://github.com/spiceai/spiceai/pull/4202
* Replace all usage of `lazy_static!` with `LazyLock` by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4199
* Fix model + metal download by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4200
* Run Clickbench for Dremio by @Sevenannn in https://github.com/spiceai/spiceai/pull/4138
* Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4205
* Fix the typo in connector stable criteria by @Sevenannn in https://github.com/spiceai/spiceai/pull/4213
* feat: Add throughput test example by @peasee in https://github.com/spiceai/spiceai/pull/4214
* feat: calculate throughput test query percentiles by @peasee in https://github.com/spiceai/spiceai/pull/4215
* feat: Add throughput test to actions by @peasee in https://github.com/spiceai/spiceai/pull/4217
* Implement DynamoDB Data Connector by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4218
* 1.0 doc updates by @lukekim in https://github.com/spiceai/spiceai/pull/4181
* Improve clarity and concison of use-cases by @lukekim in https://github.com/spiceai/spiceai/pull/4220
* Remove macOS Intel build by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4221
* fix: Test operator throughput test workflow by @peasee in https://github.com/spiceai/spiceai/pull/4222
* DynamoDB: Automatically load AWS credentials from IAM roles if access key not provided by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4226
* File connector clickbench snapshots results by @ewgenius in https://github.com/spiceai/spiceai/pull/4225
* Spice.ai Catalog Connector by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4204
* feat: Add test framework metrics collection by @peasee in https://github.com/spiceai/spiceai/pull/4227
* Add badges for build/test status on README.md by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4228
* Release Dremio to RC by @Sevenannn in https://github.com/spiceai/spiceai/pull/4224
* feat: Add more test spicepods by @peasee in https://github.com/spiceai/spiceai/pull/4229
* feat: Add load test to testoperator by @peasee in https://github.com/spiceai/spiceai/pull/4231
* Add TSV format to all `object_store`-based connectors by @Jeadie in https://github.com/spiceai/spiceai/pull/4192
* Move test-framework to dev-dependencies for Runtime by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4230
* Document limitation for correlated subqueries in TPCH for Spice.ai connector by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4235
* Changes for CUDA by @Jeadie in https://github.com/spiceai/spiceai/pull/4130
* fix: Collect batches from test framework, load test updates by @peasee in https://github.com/spiceai/spiceai/pull/4234
* Suppress opentelemetry_sdk warnings - they aren't useful by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4243
* fix: Set dataset status first, update test framework by @peasee in https://github.com/spiceai/spiceai/pull/4244
* feat: Re-enable defaults on test spicepods by @peasee in https://github.com/spiceai/spiceai/pull/4248
* Add usage for streaming local models; Fix spice chat usage bar TPS expansion by @Jeadie in https://github.com/spiceai/spiceai/pull/4232
* refactor: Use composite testoperator setup, add query overrides by @peasee in https://github.com/spiceai/spiceai/pull/4246
* Enable expand_views_at_output for DF optimizer and transform schema to expanded view types by @ewgenius in https://github.com/spiceai/spiceai/pull/4237
* Add throughput test spicepod for databricks delta mode connector by @Sevenannn in https://github.com/spiceai/spiceai/pull/4241
* Spark data connector - update and enable TPCH and TPCDS benchmarks by @ewgenius in https://github.com/spiceai/spiceai/pull/4240
* Increase the timeout minutes of load test to 10 hours by @Sevenannn in https://github.com/spiceai/spiceai/pull/4254
* Improve partition column counts error for delta table by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4247
* Add e2e test for databricks catalog connector (mode: delta_lake) by @Sevenannn in https://github.com/spiceai/spiceai/pull/4255
* Spark connector integration tests by @ewgenius in https://github.com/spiceai/spiceai/pull/4256
* Run benchmark test with the new test framework by @Sevenannn in https://github.com/spiceai/spiceai/pull/4245
* Configure databricks delta secrets to run load test by @Sevenannn in https://github.com/spiceai/spiceai/pull/4257
* Support `properties` for emitted telemetry by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4249
* feat: Add `ready_wait` test operator workflow input by @peasee in https://github.com/spiceai/spiceai/pull/4259
* Handle 'LargeStringArray' for embedding tables by @Jeadie in https://github.com/spiceai/spiceai/pull/4263
* `llms` tests for alpha/beta model criteria by @Jeadie in https://github.com/spiceai/spiceai/pull/4261
* Configurable runner type for load and throughput tests by @ewgenius in https://github.com/spiceai/spiceai/pull/4262
* Handle NULL partition columns for Delta Lake tables by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4264
* Add integration test for Snowflake by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4266
* Add Snowflake TPCH queries by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4268
* Handle `LargeStringArray` in `v1/search`. by @Jeadie in https://github.com/spiceai/spiceai/pull/4265
* Fix `build_cuda` in Update spiced_docker.yml by @Jeadie in https://github.com/spiceai/spiceai/pull/4269
* Run Snowflake benchmark in GitHub Actions by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4270
* Allow Snowflake query override for CI tests by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4271
* Don't run GPU builds for trunk by @Jeadie in https://github.com/spiceai/spiceai/pull/4272
* Fix InvalidTypeAction not working by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4273
* Add xAI key to llm integration tests by @Jeadie in https://github.com/spiceai/spiceai/pull/4274
* Update openai snapshots by @Jeadie in https://github.com/spiceai/spiceai/pull/4275
* Fix federation bug for correlated subqueries by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4276
* Update end_game.md by @ewgenius in https://github.com/spiceai/spiceai/pull/4278
* Promote Snowflake to Beta by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4277
* Set version to 1.0.0-rc.5 by @ewgenius in https://github.com/spiceai/spiceai/pull/4283
* Update cargo.lock by @ewgenius in https://github.com/spiceai/spiceai/pull/4285
* Update spice.ai data connector snapshots by @ewgenius in https://github.com/spiceai/spiceai/pull/4281
* Promote the Spice.ai Data Connector to Beta by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4282
* Revert change to `integration_models__models__search__openai_chunking_response.snap` by @Jeadie in https://github.com/spiceai/spiceai/pull/4279
* Allow for a subset of build artifacts to be published to minio by @Jeadie in https://github.com/spiceai/spiceai/pull/4280
* Promote File Data Connector to Stable by @ewgenius in https://github.com/spiceai/spiceai/pull/4286
* Add Iceberg to Supported Catalogs by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4287
* Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4289
* Fix Spark benchmark credentials, add back overrides by @ewgenius in https://github.com/spiceai/spiceai/pull/4295
* Promote Spark Data Connector to Beta by @ewgenius in https://github.com/spiceai/spiceai/pull/4296
* Add Dremio throughput test spicepod by @Sevenannn in https://github.com/spiceai/spiceai/pull/4233
* Add error message for invalid databricks mode parameter by @Sevenannn in https://github.com/spiceai/spiceai/pull/4299
* Fix pre-release check to look for `build` string by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4300
* Promote databricks catalog connector (mode: delta_lake) to beta by @Sevenannn in https://github.com/spiceai/spiceai/pull/4301
* Properly delegate `load_table` to Rest Catalog by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4303
* Update acknowledgements by @github-actions in https://github.com/spiceai/spiceai/pull/4302
* docs: Update ROADMAP.md by @peasee in https://github.com/spiceai/spiceai/pull/4306
* v1.0.0-rc.5 Release Notes by @ewgenius in https://github.com/spiceai/spiceai/pull/4298

**Full Changelog**: https://github.com/spiceai/spiceai/compare/v1.0.0-rc.4...v1.0.0-rc.5

Resources

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.

Spice v1.0-rc.4 (Jan 6, 2025)

· 4 min read
Phillip LeBlanc
Co-Founder and CTO of Spice AI

Happy New Year 🎆!

Announcing the release of Spice v1.0-rc.4 🌟

Spice v1.0.0-rc.4 is the fourth release candidate for the first major version of Spice.ai OSS. This release continues the focus on production readiness. In addition, xAI has been added as a model provider.

Highlights in v1.0-rc.4

  • xAI Model Provider: Adds support for xAI hosted models.
models:
  - from: xai:grok2-latest
    name: xai
    params:
      xai_api_key: ${secrets:SPICE_XAI_API_KEY}
  • Spicepod Spec Version: Spicepod spec version v1 is now the default. v1beta1 will continue to work.

version: v1
kind: Spicepod
name: my_pod

Cookbook

Dependencies

No major dependency changes.

Contributors

  • @lukekim
  • @phillipleblanc
  • @peasee
  • @karifabri
  • @sgrebnov
  • @Jeadie
  • @ewgenius

What's Changed

- Update openapi.json by @github-actions in <https://github.com/spiceai/spiceai/pull/4087>
- Update Helm chart for v1.0.0-rc.3 (v0.2.2) by @lukekim in <https://github.com/spiceai/spiceai/pull/4088>
- Rev version to v1.0.0-rc.4 by @lukekim in <https://github.com/spiceai/spiceai/pull/4090>
- Update spicepod.schema.json by @github-actions in <https://github.com/spiceai/spiceai/pull/4089>
- Fix OpenAI Models Integration tests by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4084>
- fix: Update Postgres TPCDS and ClickBench queries by @peasee in <https://github.com/spiceai/spiceai/pull/4092>
- fix: Check Postgres acceleration schema on insert by @peasee in <https://github.com/spiceai/spiceai/pull/4094>
- Update v1.0.0-rc.3.md by @karifabri in <https://github.com/spiceai/spiceai/pull/4096>
- Update openapi.json by @github-actions in <https://github.com/spiceai/spiceai/pull/4093>
- First-class TSV for file data connector by @lukekim in <https://github.com/spiceai/spiceai/pull/4098>
- Allow Flight DoPut only for write api-keys by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4010>
- Only create tables `eval.runs` and `eval.results` when an eval is defined by @Jeadie in <https://github.com/spiceai/spiceai/pull/4099>
- Update Copyright year to include 2025 by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4100>
- feat: add postgres clickbench accelerator, release postgres accelerator by @peasee in <https://github.com/spiceai/spiceai/pull/4111>
- Add spice binaries with metal to releases; detect metal device in `spice install/upgrade`. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4097>
- docs: Clarify connector release criteria by @peasee in <https://github.com/spiceai/spiceai/pull/4112>
- Update datafusion-federation to fix LIMIT with OFFSET handling in logical plan rewrite by @ewgenius in <https://github.com/spiceai/spiceai/pull/4115>
- Support Grok AI. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4113>
- Fix `spice chat` usage bar. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4119>
- Set unified max encoding and decoding message size for all flight client configurations across runtime by @ewgenius in <https://github.com/spiceai/spiceai/pull/4116>
- feat: Add the file connector as an appendable benchmark connector by @peasee in <https://github.com/spiceai/spiceai/pull/4120>
- Add `spice eval` command by @lukekim in <https://github.com/spiceai/spiceai/pull/4118>
- Support multi-level table nesting for Dremio by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4129>
- feat: run append TPCH benchmarks in workflow (Arrow, DuckDB) by @peasee in <https://github.com/spiceai/spiceai/pull/4131>
- Fix bug in Iceberg tables selecting a subset of columns by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4132>
- feat: Run append TPCDS benchmarks in workflow (Arrow, DuckDB) by @peasee in <https://github.com/spiceai/spiceai/pull/4141>
- Setup spice.ai clickbench by @ewgenius in <https://github.com/spiceai/spiceai/pull/4134>
- Data is streamed when reading from the GitHub connector (GraphQL tables) by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4142>
- Mark the GitHub Data Connector as Stable by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4143>
- Fix table quoting for Databricks Spark connector by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4145>
- Extend flight compute context for spice.ai connector with org and app names, to fix federated queries from different spice.ai data sources by @ewgenius in <https://github.com/spiceai/spiceai/pull/4144>
- Enforce Flight DoPut policies: Rate Limiting, Read Timeout, and Max Records per Batch by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4117>
- Fix bug Changes in catalog.yaml would require saving in spicepod.yaml to apply by @sgrebnov in <https://github.com/spiceai/spiceai/pull/4147>
- Update benchmark snapshots by @github-actions in <https://github.com/spiceai/spiceai/pull/4137>
- Add `test-framework` crate to contain all common benchmark, E2E, integration testing logic. by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4157>
- Fix `platform_option` variable in `build_and_release.yml`. by @Jeadie in <https://github.com/spiceai/spiceai/pull/4154>
- feat: Add Clickbench append benchmark for DuckDB and Arrow by @peasee in <https://github.com/spiceai/spiceai/pull/4160>
- Upload artifacts to Minio on build_and_release by @phillipleblanc in <https://github.com/spiceai/spiceai/pull/4159>
- feat: add on zero results benchmark by @peasee in <https://github.com/spiceai/spiceai/pull/4164>
- Update spice.ai connector tests by @ewgenius in <https://github.com/spiceai/spiceai/pull/4161>

**Full Changelog**: <https://github.com/spiceai/spiceai/compare/v1.0.0-rc.3...v1.0.0-rc.4>

Resources

  • Getting started with Spice.ai: https://docs.spiceai.org/getting-started/
  • Documentation: https://docs.spiceai.org/

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.

  • Twitter: @spice_ai (https://twitter.com/spice_ai)
  • Discord: https://discord.gg/kZnTfneP5u
  • Telegram: Spice AI Discussion (https://t.me/spiceaichat)
  • Reddit: https://www.reddit.com/r/spiceai
  • Email: [email protected]

Spice v1.0-rc.3 (Dec 30, 2024)

· 7 min read
Luke Kim
Founder and CEO of Spice AI

Announcing the release of Spice v1.0-rc.3 🧊

Spice v1.0.0-rc.3 is the third release candidate for the first major version of Spice.ai OSS. This release continues the focus on production readiness and includes new Iceberg Catalog APIs, DuckDB improvements, and a new Iceberg Catalog Connector.

Highlights in v1.0-rc.3

  • Iceberg Catalog APIs: Spice now functions as an Iceberg Catalog provider, implementing a core subset of the Iceberg Catalog APIs. This enables native discovery of datasets and schemas by Iceberg Catalog clients through Spice APIs (see the example after this list).

  • GET /v1/namespaces - List all catalogs registered in Spice.

  • GET /v1/namespaces?parent=catalog - List schemas registered under a given catalog.

  • GET /v1/namespaces/:catalog_schema/tables - List tables registered under a given schema.

  • GET /v1/namespaces/:catalog_schema/tables/:table - Get the schema of a given table.

  • Iceberg Catalog Connector: The Iceberg Catalog Connector is a new integration to discover and query datasets from a remote Iceberg Catalog.

Example connecting to a remote Iceberg Catalog with tables stored in S3:

catalogs:
  - from: iceberg:https://my-iceberg-catalog.com/v1/namespaces
    name: ice
    params:
      iceberg_s3_access_key_id: ${secrets:ICEBERG_S3_ACCESS_KEY_ID}
      iceberg_s3_secret_access_key: ${secrets:ICEBERG_S3_SECRET_ACCESS_KEY}
      iceberg_s3_region: us-east-1

View the Iceberg Catalog Connector documentation for more details.

  • DuckDB Improvements: Added cosine_distance support for DuckDB-backed vector search, improved unnest nested type handling for array_element and lists, and optimized query performance.

  • SQLite Data Accelerator: Graduated to Release Candidate (RC).

  • File Data Accelerator: Graduated to Release Candidate (RC).
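As referenced above, the Iceberg Catalog APIs can be exercised with a plain HTTP client. A minimal sketch using the endpoint paths listed in the highlights, assuming a locally running runtime on its default HTTP port (8090) and a placeholder catalog name:

# List all catalogs registered in Spice
curl http://localhost:8090/v1/namespaces

# List schemas registered under a given catalog (placeholder name)
curl "http://localhost:8090/v1/namespaces?parent=my_catalog"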

Breaking changes

  • API: v1/datasets/sample has been removed, as it is not particularly useful and can be replicated via SQL or via the tools endpoint POST v1/tools/:name.

Cookbook

  • New Language Model Evals Recipe showing how to measure the performance of a language model using LLM-as-Judge, configured entirely in the spice runtime.

  • New Iceberg Catalog Recipe showing how to use Spice to query Iceberg tables from an Iceberg catalog.

Dependencies

  • OpenTelemetry: Upgraded from 0.26.0 to 0.27.1
  • Go: Upgraded from 1.22 to 1.23 (CLI)

Contributors

  • @sgrebnov
  • @phillipleblanc
  • @peasee
  • @Jeadie
  • @Sevenannn
  • @lukekim
  • @ewgenius

What's Changed

- Add CI configuration for search benchmark dataset access by @sgrebnov in https://github.com/spiceai/spiceai/pull/3888
- Update acknowledgements by @github-actions in https://github.com/spiceai/spiceai/pull/3895
- Upgrade dependencies by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3896
- chore: Update helm chart for RC.2 by @peasee in https://github.com/spiceai/spiceai/pull/3899
- Update spicepod.schema.json by @github-actions in https://github.com/spiceai/spiceai/pull/3903
- chore: Update MacOS test release install to macos-13 by @peasee in https://github.com/spiceai/spiceai/pull/3901
- Add usage to `spice chat` and fix `v1/models?status=true`. by @Jeadie in https://github.com/spiceai/spiceai/pull/3898
- chore: Bump versions for rc3 by @peasee in https://github.com/spiceai/spiceai/pull/3902
- docs: Update endgame with a step to verify dependencies in release notes by @peasee in https://github.com/spiceai/spiceai/pull/3897
- Ensure eval dataset input and ouput of correct length by @Jeadie in https://github.com/spiceai/spiceai/pull/3900
- `spice add/connect/dataset configure` should update spicepod, not overwrite it & upgrade to Go 1.23 by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3905
- Bump opentelemetry from 0.26.0 to 0.27.1 by @dependabot in https://github.com/spiceai/spiceai/pull/3879
- Ensure trace_id is overridden for prior written spans by @Jeadie in https://github.com/spiceai/spiceai/pull/3906
- add 'role': 'assistant' for local models by @Jeadie in https://github.com/spiceai/spiceai/pull/3910
- Run tpcds benchmark for file connector by @Sevenannn in https://github.com/spiceai/spiceai/pull/3924
- Update to reference cookbook instead of quickstarts/samples by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3928
- Fix/remove flaky integration tests by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3930
- Implement `/v1/iceberg/namespaces` & `/v1/iceberg/config` APIs by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3923
- Add script for creating tpcds parquet files and spicepod for file connector by @Sevenannn in https://github.com/spiceai/spiceai/pull/3931
- Use `utoipa` to generate openapi.json and swagger for dev by @Jeadie in https://github.com/spiceai/spiceai/pull/3927
- `fuzzy_match`, `json_match`, `includes` scorer by @Jeadie in https://github.com/spiceai/spiceai/pull/3926
- Implement `/v1/iceberg/namespaces/:namespace` by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3933
- Implement `GET /v1/iceberg/namespaces/:namespace/tables` API by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3934
- Add custom Spice DuckDB dialect with cosine_distance support by @sgrebnov in https://github.com/spiceai/spiceai/pull/3938
- Fix NSQL error: `all columns in a record batch must have the same length` by @sgrebnov in https://github.com/spiceai/spiceai/pull/3947
- Don't include tools use in hf test model by @Jeadie in https://github.com/spiceai/spiceai/pull/3955
- Implement `GET /v1/namespaces/{namespace}/tables/{table}` API by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3940
- Update dependencies by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3967
- DuckDB: add support for nested types in Lists by @sgrebnov in https://github.com/spiceai/spiceai/pull/3961
- Add script to set up clickbench for file connector by @Sevenannn in https://github.com/spiceai/spiceai/pull/3945
- docs: Add connector stable criteria by @peasee in https://github.com/spiceai/spiceai/pull/3908
- Update Roadmp Dec 23, 2024 by @lukekim in https://github.com/spiceai/spiceai/pull/3978
- Improve CI testing for OpenAPI, new tool `spiceschema`, fix broken OpenAPI stuff. by @Jeadie in https://github.com/spiceai/spiceai/pull/3948
- remove `v1/datasets/sample` by @Jeadie in https://github.com/spiceai/spiceai/pull/3981
- feat: add SQLite ClickBench benchmark by @peasee in https://github.com/spiceai/spiceai/pull/3975
- Remove feature 'llms/mistralrs' by @Jeadie in https://github.com/spiceai/spiceai/pull/3984
- Add support for 'params.spice_tools: nsql' by @Jeadie in https://github.com/spiceai/spiceai/pull/3985
- Fix integration tests - add missing `format` query parameter in /v1/status requests by @ewgenius in https://github.com/spiceai/spiceai/pull/3989
- Enhance AI tools sampling logic for robust handling of large fields by @sgrebnov in https://github.com/spiceai/spiceai/pull/3959
- Fix subquery federation by @Sevenannn in https://github.com/spiceai/spiceai/pull/3991
- Fix unnest and add DuckDB support for `array_element` by @sgrebnov in https://github.com/spiceai/spiceai/pull/3995
- Add score value snapshotting to vector similarity search tests by @sgrebnov in https://github.com/spiceai/spiceai/pull/3996
- Use Llama-3.2-3B-Instruct for Hugging Face integration testing by @sgrebnov in https://github.com/spiceai/spiceai/pull/3992
- Simplify `construct_chunk_query_sql` for DuckDB compatibility by @sgrebnov in https://github.com/spiceai/spiceai/pull/3988
- Update TPCH and TPCDS benchmarks for spice.ai connector by @ewgenius in https://github.com/spiceai/spiceai/pull/3982
- Correctly pass Hugging Face token in models integration tests by @sgrebnov in https://github.com/spiceai/spiceai/pull/3997
- Fix: `on_zero_results` causes `TransactionContext Error: Catalog write-write conflict on create with "attachment_0"` by @phillipleblanc in https://github.com/spiceai/spiceai/pull/3998
- Add DuckDB acceleration to search benchmarks by @sgrebnov in https://github.com/spiceai/spiceai/pull/4000
- Enable Postgres write via non-default `postgres-write` feature flag by @sgrebnov in https://github.com/spiceai/spiceai/pull/4004
- Allow search benchmark to write test results by @sgrebnov in https://github.com/spiceai/spiceai/pull/4008
- Make Flight DoPut atomic and commit write only on successful stream completion by @sgrebnov in https://github.com/spiceai/spiceai/pull/4002
- Create a `CatalogConnector` abstraction by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4003
- Fix `generate-openapi.yml` and add `.schema/openapi.json`. by @Jeadie in https://github.com/spiceai/spiceai/pull/3983
- Enable spice.ai tpcds bench workflow. Comment failing tpch queries. by @ewgenius in https://github.com/spiceai/spiceai/pull/4001
- feat: Add SQLite ClickBench overrides by @peasee in https://github.com/spiceai/spiceai/pull/4016
- Implement Iceberg Catalog Connector by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4053
- feat: Datafusion updates for SQLite fixes and release by @peasee in https://github.com/spiceai/spiceai/pull/4054
- docs: Add accelerator stable release criteria by @peasee in https://github.com/spiceai/spiceai/pull/4017
- Add dremio tpch / tpcds benchmark test by @Sevenannn in https://github.com/spiceai/spiceai/pull/4063
- Update docs, and make PR to `spiceai/docs` for new `openapi.json`. by @Jeadie in https://github.com/spiceai/spiceai/pull/4019
- Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4065
- Fix dremio subquery rewrite by @Sevenannn in https://github.com/spiceai/spiceai/pull/4064
- Update generate-openapi.yml by @Jeadie in https://github.com/spiceai/spiceai/pull/4073
- docs: Add catalog criteria by @peasee in https://github.com/spiceai/spiceai/pull/4052
- fix `distinct_columns` in auto/nsql tool groups by @Jeadie in https://github.com/spiceai/spiceai/pull/4074
- Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4075
- Update openapi.json by @github-actions in https://github.com/spiceai/spiceai/pull/4076
- Implement window_func_support_window_frame from DremioDialect by @Sevenannn in https://github.com/spiceai/spiceai/pull/4012
- Update acknowledgements by @github-actions in https://github.com/spiceai/spiceai/pull/4079
- Promote file connector to rc by @Sevenannn in https://github.com/spiceai/spiceai/pull/4080
- Add Iceberg to README by @phillipleblanc in https://github.com/spiceai/spiceai/pull/4085
- Fix '/v1/status' default format by @Jeadie in https://github.com/spiceai/spiceai/pull/4081

**Full Changelog**: https://github.com/spiceai/spiceai/compare/v1.0.0-rc.2...v1.0.0-rc.3

Resources

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved.