What good prompts have in common
- Concrete persona. "Senior data engineer" not "helpful assistant".
- Clear task framing. What's the input, what's the desired output, what are the success criteria?
- Examples for non-trivial formats. Show, don't just tell.
- Positive instructions. "Always quote line numbers" beats "don't respond without line numbers".
- Schema for structured output. If you want JSON, give a schema or example.
- Tag-delimited sections in long prompts. XML for Claude, markdown for GPT.
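The checklist above can be sketched as a small prompt builder. This is an illustrative example, not output from the tool itself: the schema fields, the persona wording, and the `<code>` tag name are all assumptions chosen to demonstrate the pattern.

```python
import json

# Illustrative JSON schema for structured output (field names are assumptions).
REVIEW_SCHEMA = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "issues": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary", "issues"],
}

def build_review_prompt(code: str) -> str:
    """Combine the checklist: concrete persona, clear task framing,
    a schema for structured output, and tag-delimited sections."""
    return "\n".join([
        "You are a senior backend engineer reviewing pull requests.",
        "Review the code below and report concrete, actionable issues.",
        "Output JSON matching this schema:",
        json.dumps(REVIEW_SCHEMA, indent=2),
        "<code>",
        code,
        "</code>",
    ])

prompt = build_review_prompt("def add(a, b): return a - b")
```

Each bullet maps to one line of the template, which makes it easy to review a prompt against the checklist item by item.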
FAQ
- Are these checks "right"?
- They're heuristics derived from common patterns in production failures and from Anthropic / OpenAI prompting guides. Heuristics aren't rules — many flagged patterns are fine in context.
- Why flag "helpful AI assistant" framing?
- It's near-universal boilerplate from older prompt-engineering tutorials. A specific persona ("You are a senior backend engineer reviewing pull requests") consistently beats the generic version on technical tasks.
- Why warn about negative instructions?
- Models pattern-match on what you say to do, not what you say not to do. "Don't use markdown" is less reliable than "Output plain text only". When you must use a negative, pair with a positive: "Don't X; instead, Y".
- What about prompt-injection checks?
- Not yet. Prompt injection is a runtime concern (variable substitution + user input). This tool checks the static prompt only. For injection mitigation, validate user inputs at the boundary and keep instructions and untrusted text in separate system and user roles.
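A minimal sketch of that mitigation, assuming a chat-style message format: `MAX_LEN`, the sanitizer rules, and the summarizer persona are illustrative choices, not a standard.

```python
MAX_LEN = 4000  # illustrative limit, tune per application

def sanitize(user_input: str) -> str:
    """Boundary validation: reject oversized input, strip control characters."""
    if len(user_input) > MAX_LEN:
        raise ValueError("input too long")
    return "".join(ch for ch in user_input if ch.isprintable() or ch in "\n\t")

def build_messages(user_input: str) -> list[dict]:
    # Instructions live in the system role; untrusted text stays in the
    # user role, so the two are never concatenated into one string.
    return [
        {"role": "system", "content": "You are a release-notes summarizer."},
        {"role": "user", "content": sanitize(user_input)},
    ]

msgs = build_messages("Summarize: fixed login bug\x00")
```

Role separation is not a complete defense on its own, but it removes the most common failure mode: user text spliced directly into the system prompt.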
Related tools
- Prompt Template Builder
Compose a prompt with named variables, see the rendered output side-by-side.
- Prompt Diff Viewer
Side-by-side line-level diff for two prompt variants — see exactly what changed.
- Few-shot Examples Formatter
Drop input/output pairs, get them rendered as XML, Q&A, JSON, or markdown few-shot blocks.