# System Prompt Parameterization
Spice supports defining system prompts for large language models (LLMs) in the spicepod.
Example:

```yaml
models:
  - name: advice
    from: openai:gpt-4o
    params:
      system_prompt: |
        Write everything in Haiku like a pirate from Australia
```
Beyond static prompts, system prompts can use Jinja syntax, allowing them to be altered on each `v1/chat/completions` request. This involves three steps:
1. Add `parameterized_prompt: enabled` to the model's `params`.

2. Use Jinja syntax in the `system_prompt` parameter for the model in the spicepod:

   ```yaml
   models:
     - name: advice
       from: openai:gpt-4o
       params:
         parameterized_prompt: enabled
         system_prompt: |
           Write everything in {{ form }} like a {{ user.character }} from {{ user.country }}
   ```
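To illustrate how the placeholders resolve, here is a short sketch. Note this is a simplified stand-in for illustration only, not the Jinja engine Spice actually uses: it resolves `{{ dotted.path }}` placeholders against nested request metadata, while real Jinja also supports filters, conditionals, and loops.

```python
import re

def render_prompt(template: str, metadata: dict) -> str:
    """Resolve {{ dotted.path }} placeholders against nested metadata.

    Simplified stand-in for Jinja rendering, for illustration only.
    """
    def resolve(match: re.Match) -> str:
        value = metadata
        # Walk nested dicts for dotted paths like "user.character".
        for key in match.group(1).strip().split("."):
            value = value[key]
        return str(value)

    return re.sub(r"\{\{(.*?)\}\}", resolve, template)

template = (
    "Write everything in {{ form }} like a "
    "{{ user.character }} from {{ user.country }}"
)
metadata = {
    "form": "haiku",
    "user": {"character": "pirate", "country": "australia"},
}
print(render_prompt(template, metadata))
# → Write everything in haiku like a pirate from australia
```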
3. Provide the required variables in the `v1/chat/completions` request via the `metadata` field:

   ```shell
   curl -X POST http://localhost:8090/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
       "model": "advice",
       "messages": [
         {"role": "user", "content": "Where should I visit in San Francisco?"}
       ],
       "metadata": {
         "form": "haiku",
         "user": {
           "character": "pirate",
           "country": "australia"
         }
       }
     }'
   ```
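The same request can also be sketched in Python using only the standard library. The endpoint and port here simply mirror the curl example above and may differ in your deployment; the network call itself is left commented out since it requires a running Spice instance.

```python
import json
import urllib.request

# Same parameterized chat completion request as the curl example.
payload = {
    "model": "advice",
    "messages": [
        {"role": "user", "content": "Where should I visit in San Francisco?"}
    ],
    # Values referenced by the Jinja placeholders in system_prompt.
    "metadata": {
        "form": "haiku",
        "user": {"character": "pirate", "country": "australia"},
    },
}

req = urllib.request.Request(
    "http://localhost:8090/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires a running Spice instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```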