Embedding Models
Describes how embedding models are used in Spice to convert text into numerical vectors for machine learning and search applications.
Machine learning models and AI inference engines.
Learn how Spice evaluates, tracks, compares, and improves language model performance for specific tasks.
Learn how to provide LLMs with memory.
Learn how to override default LLM hyperparameters in Spice.
Learn how LLMs interact with the Spice runtime.
Learn how to configure large language models (LLMs).
Learn how to load and serve large language models.
Spice supports loading and serving ONNX models for inference, from sources including local filesystems, Hugging Face, and the Spice.ai Cloud platform.
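As an illustrative sketch of what declaring an ONNX model might look like, here is a hypothetical spicepod fragment; the key names, source prefixes, and model paths below are assumptions for illustration, so consult the Spice reference documentation for the exact schema:

```yaml
# spicepod.yaml — hypothetical sketch; keys, prefixes, and paths are assumptions
models:
  - name: local_onnx_model              # hypothetical name
    from: file:models/model.onnx        # loaded from the local filesystem
  - name: hosted_onnx_model             # hypothetical name
    from: huggingface:huggingface.co/example-org/example-model  # illustrative Hugging Face source
```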
Learn how to use the Model Context Protocol (MCP) with Spice.
Learn how Spice can search across datasets using database-native and vector-search methods.
Learn how to update system prompts for each request with Jinja-styled templating.
Learn how Spice can perform searches using vector-based methods.
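To make the idea behind vector-based search concrete, the sketch below ranks documents by cosine similarity between embedding vectors. The vectors here are toy values standing in for the output of an embedding model, and the function names are illustrative, not part of any Spice API:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in practice an embedding model produces these vectors from text.
documents = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.1],
    "doc_c": [0.85, 0.15, 0.05],
}
query = [1.0, 0.0, 0.0]  # embedding of the search query

# Rank documents by similarity to the query vector; the top result is the best match.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked[0])
```

A production vector search replaces the linear scan with an index (for example an approximate-nearest-neighbor structure), but the similarity computation is the same.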
Learn how Spice can perform web searches.