System Prompt parameterization

Spice supports defining system prompts for Large Language Models (LLMs) in the spicepod.

Example:

```yaml
models:
  - name: advice
    from: openai:gpt-4o
    params:
      system_prompt: |
        Write everything in Haiku like a pirate from Australia
```
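Once the spicepod is loaded, the model can be called through Spice's OpenAI-compatible chat endpoint. A minimal Python sketch of the request (the host and port `localhost:8090` follow the curl example later in this doc; the system prompt is applied server-side, so only the user message is sent):

```python
import json
import urllib.request

# Request body for the `advice` model defined above; the configured
# system_prompt is injected by Spice, not by the client.
payload = {
    "model": "advice",
    "messages": [
        {"role": "user", "content": "Where should I visit in San Francisco?"}
    ],
}

req = urllib.request.Request(
    "http://localhost:8090/v1/chat/completions",  # port from the examples in this doc
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with a running Spice instance
print(payload["model"])
```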

Beyond static prompts, system prompts can use Jinja syntax so that they are rendered on each `v1/chat/completions` request. This involves three steps:

  1. Add `parameterized_prompt: enabled` to the model's `params`.

  2. Use Jinja syntax in the `system_prompt` parameter for the model in the spicepod.

     ```yaml
     models:
       - name: advice
         from: openai:gpt-4o
         params:
           parameterized_prompt: enabled
           system_prompt: |
             Write everything in {{ form }} like a {{ user.character }} from {{ user.country }}
     ```
  3. Provide the required variables in `v1/chat/completions` via the `metadata` field.

     ```shell
     curl -X POST http://localhost:8090/v1/chat/completions \
       -H "Content-Type: application/json" \
       -d '{
         "model": "advice",
         "messages": [
           {"role": "user", "content": "Where should I visit in San Francisco?"}
         ],
         "metadata": {
           "form": "haiku",
           "user": {
             "character": "pirate",
             "country": "australia"
           }
         }
       }'
     ```
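To see how the `metadata` field above feeds the template, the variable substitution can be sketched in plain Python. This is a minimal stdlib approximation of what Spice's Jinja rendering does server-side, not the actual implementation (real Jinja supports filters, conditionals, and loops beyond simple lookups):

```python
import re

def render(template: str, metadata: dict) -> str:
    """Substitute {{ dotted.path }} placeholders with values from metadata."""
    def lookup(match: re.Match) -> str:
        value = metadata
        for part in match.group(1).split("."):
            value = value[part]  # walk nested dicts, e.g. user -> character
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

system_prompt = (
    "Write everything in {{ form }} like a {{ user.character }} from {{ user.country }}"
)
metadata = {"form": "haiku", "user": {"character": "pirate", "country": "australia"}}
print(render(system_prompt, metadata))
# → Write everything in haiku like a pirate from australia
```

The nested `user` object in the request's `metadata` maps directly onto the dotted `{{ user.character }}` syntax in the template.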