HuggingFace
To use a model hosted on HuggingFace, specify the huggingface.co path in the `from` field and, when needed, the files to include.
Example: Load an ML model to predict taxi trip outcomes

```yaml
models:
  - from: huggingface:huggingface.co/spiceai/darts:latest
    name: hf_model
    files:
      - path: model.onnx
    datasets:
      - taxi_trips
```
Example: Load an LLM to generate text

```yaml
models:
  - from: huggingface:huggingface.co/microsoft/Phi-3.5-mini-instruct
    name: phi
```
Example: Load a private model

```yaml
models:
  - name: llama_3.2_1B
    from: huggingface:huggingface.co/meta-llama/Llama-3.2-1B
    params:
      hf_token: ${ secrets:HF_TOKEN }
```
For more details on authentication, see below.
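As a minimal sketch, assuming the runtime resolves `${ secrets:HF_TOKEN }` from its environment secret store (which, by Spice's convention, reads variables prefixed with `SPICE_` — an assumption here), the token can be exported before starting the runtime:

```shell
# Assumption: the env secret store maps ${ secrets:HF_TOKEN }
# to the SPICE_HF_TOKEN environment variable.
export SPICE_HF_TOKEN=hf_xxxxxxxxxxxxxxxx  # placeholder, not a real token
```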
Example: Load a GGUF model

```yaml
models:
  - from: huggingface:huggingface.co/lmstudio-community/Qwen2.5-Coder-3B-Instruct-GGUF
    name: sloth-gguf
    files:
      - path: Qwen2.5-Coder-3B-Instruct-Q3_K_L.gguf
```
note
Only GGUF models require an explicit file path; other formats (e.g. `.safetensors`) are inferred automatically.
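The `from` values above all share the shape `huggingface:huggingface.co/{org}/{model}[:{revision}]`. A small illustrative parser — `parse_hf_from` is a hypothetical helper, not part of Spice, and the `latest` default when the revision is omitted is an assumption based on the first example:

```python
def parse_hf_from(spec: str) -> tuple[str, str, str]:
    """Split a spec like 'huggingface:huggingface.co/spiceai/darts:latest'
    into (org, model, revision). Hypothetical helper for illustration."""
    prefix = "huggingface:huggingface.co/"
    if not spec.startswith(prefix):
        raise ValueError(f"not a huggingface.co spec: {spec!r}")
    rest = spec[len(prefix):]
    # Optional ':revision' suffix after the org/model path.
    path, _, revision = rest.partition(":")
    org, _, model = path.partition("/")
    return org, model, revision or "latest"  # assumed default revision

print(parse_hf_from("huggingface:huggingface.co/spiceai/darts:latest"))
# → ('spiceai', 'darts', 'latest')
```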