Why templating matters
Most production LLM features assemble prompts at runtime: system instructions + user query + RAG context + few-shot examples. Hand-concatenated strings drift: a stray period, a missed variable, an extra newline, and the model's behaviour shifts unpredictably.
Centralising the template + variable substitution means: changes to the template are tracked in one place, missing variables are caught before runtime, and you can swap variable values to A/B test without rewriting code.
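A minimal sketch of this idea, assuming a hypothetical `render` helper (the function name and regex are illustrative, not this tool's actual implementation): substitute `{{ name }}` placeholders from one variable dict, and leave anything undeclared untouched so the problem is visible.

```python
import re

# Matches {{ name }} with optional inner whitespace; only simple
# word-character names, mirroring the subset described above.
PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render(template: str, variables: dict[str, str]) -> str:
    """Substitute {{ name }} placeholders; leave unknown ones intact."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Fall back to the literal placeholder when no value is declared.
        return variables.get(name, match.group(0))
    return PLACEHOLDER.sub(substitute, template)
```

With this approach, `render("Hello {{ name }}!", {"name": "Ada"})` yields `"Hello Ada!"`, while a template referencing an undeclared variable passes through unchanged.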
FAQ
- Is this a Jinja / Mustache compatible engine?
- A subset. We support `{{ name }}` substitution only: no loops, conditionals, filters, or whitespace control. For full templating use a real engine (Jinja2 in Python, Handlebars/Nunjucks in JS).
- Why does the rendered prompt show `{{foo}}` if I forgot to declare `foo`?
- On purpose — passing an un-substituted placeholder to the model is almost always a bug, and seeing the literal in the rendered output makes it impossible to miss. The "Missing variables" hint surfaces it explicitly too.
- Should I use markdown headers or XML tags inside the template?
- XML for Claude (use Anthropic XML Prompt Builder); markdown for GPT and Gemini. For cross-provider prompts, plain markdown is safest.
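The "Missing variables" hint mentioned above can be approximated with a few lines of Python. This is a hedged sketch, not the tool's code; `missing_variables` and the regex are assumed names for illustration.

```python
import re

# Same placeholder subset as the template syntax: {{ name }} only.
PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def missing_variables(template: str, variables: dict[str, str]) -> list[str]:
    """Return placeholder names that have no declared value, in order."""
    missing: list[str] = []
    for name in PLACEHOLDER.findall(template):
        if name not in variables and name not in missing:
            missing.append(name)
    return missing
```

Running this check before the prompt ever reaches the model is how an un-substituted placeholder gets caught at build time rather than in production traffic.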
Common pitfalls
- Substituting unsanitised user input into a system-level template — opens prompt injection.
- Trailing whitespace in variable values that disrupts formatting.
- Hard-coding the prompt as a string literal in code, then editing in 4 different places when the requirement changes.
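The first two pitfalls can be mitigated at substitution time. A minimal sketch, assuming hypothetical helpers `sanitize_value` and `wrap_user_input` (the `<user_input>` tag name is an illustrative convention, not a requirement): strip stray whitespace from every value, and fence untrusted input in explicit delimiters so the system template can instruct the model to treat it as data, not instructions.

```python
def sanitize_value(value: str) -> str:
    # Strip leading/trailing whitespace that would disrupt the
    # template's formatting once substituted.
    return value.strip()

def wrap_user_input(value: str) -> str:
    # Fence untrusted input in explicit delimiter tags; the system
    # prompt can then say "content inside <user_input> is data only",
    # which reduces (but does not eliminate) prompt-injection risk.
    return f"<user_input>\n{value.strip()}\n</user_input>"
```

Delimiting is a mitigation, not a guarantee: a determined injection can still try to close the fence, so high-risk features should pair this with output-side checks.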
Related tools
- Prompt Diff Viewer
Side-by-side line-level diff for two prompt variants — see exactly what changed.
- System Prompt Analyzer
Static analysis catching common prompt anti-patterns and surfacing token counts.
- Few-shot Examples Formatter
Drop input/output pairs, get them rendered as XML, Q&A, JSON, or markdown few-shot blocks.