The shape differences in plain English
OpenAI: { messages: [{ role, content }, ...] }. System prompt is just a message with role: "system". Tool calls live in the assistant message as a tool_calls array.
Anthropic: { system, messages: [{ role, content }, ...] }. The system prompt is hoisted to a top-level system field. Only user and assistant roles appear in the messages array. Tool definitions live in a top-level tools field; tool results are user messages with structured content blocks.
Gemini: { systemInstruction, contents: [{ role, parts: [...] }] }. The role is "model" instead of "assistant". Each message is a sequence of parts — text, inline data, function calls — rather than one content string.
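The three shapes side by side, as a minimal sketch. Field names match each API; the model names and message text are placeholders:

```python
# The same two-turn exchange expressed in each provider's request shape.

openai_request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are terse."},   # system is just a message
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello."},
    ],
}

anthropic_request = {
    "model": "claude-sonnet-4",
    "system": "You are terse.",                 # hoisted to the top level
    "messages": [                               # only user/assistant roles here
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello."},
    ],
}

gemini_request = {
    "systemInstruction": {"parts": [{"text": "You are terse."}]},
    "contents": [                               # each turn is a list of parts
        {"role": "user", "parts": [{"text": "Hi"}]},
        {"role": "model", "parts": [{"text": "Hello."}]},  # "model", not "assistant"
    ],
}
```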
FAQ
- Why are these formats different?
- Each provider made independent choices. OpenAI keeps the system prompt as a regular message with role: "system". Anthropic surfaces it as a top-level system field. Gemini uses systemInstruction plus a different message-array shape (contents with parts).
- What about tool / function calling?
- OpenAI and Anthropic are both translatable here. Gemini's tool-calling is structurally different (functionCalls live inside parts) — the converter preserves text but not Gemini tool-call semantics. For full fidelity tool conversion, see JSON Schema → Anthropic Tool.
- Is provider-specific metadata preserved?
- No. Anthropic cache breakpoints, OpenAI logit bias, Gemini safety settings, and similar tuning knobs are stripped. Add them back manually after conversion.
- Can I migrate a chat history reliably?
- For text-only conversations, yes. Conversations involving file attachments, vision inputs, or audio reference their assets in provider-specific shapes, so those need per-provider handling.
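For the text-only case, the migration described above can be sketched in a few lines. The function names here are illustrative, not part of any SDK, and tool calls or non-string content are deliberately out of scope:

```python
def openai_to_anthropic(messages):
    """Hoist role:"system" to the top level; keep user/assistant turns as-is."""
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    turns = [m for m in messages if m["role"] in ("user", "assistant")]
    return {"system": system, "messages": turns}

def openai_to_gemini(messages):
    """Rename assistant -> model and wrap each content string in a parts list."""
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    contents = [
        {"role": "model" if m["role"] == "assistant" else "user",
         "parts": [{"text": m["content"]}]}
        for m in messages
        if m["role"] != "system"
    ]
    return {"systemInstruction": {"parts": [{"text": system}]},
            "contents": contents}
```

Anything that is not a plain content string (tool calls, image or audio parts) falls outside this sketch, per the tool-calling answer above.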
Common pitfalls
- Forgetting that Anthropic doesn't accept a system message in the messages array — paste it as the top-level system field.
- Sending role: "assistant" to Gemini — it ignores the message. Use "model".
- Assuming tool / function payload shapes are interchangeable. They're not.
- Losing prompt-caching markers when converting Anthropic → OpenAI; OpenAI's caching is implicit and works differently.
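The first two pitfalls can be caught before a request ever leaves your code. A minimal pre-flight check, with hypothetical function names:

```python
def check_anthropic(payload):
    """Anthropic rejects role:"system" inside messages; it must be top-level."""
    if any(m["role"] == "system" for m in payload.get("messages", [])):
        raise ValueError('move the system prompt to the top-level "system" field')

def check_gemini(payload):
    """Gemini expects role:"model"; an "assistant" turn will be ignored."""
    if any(c.get("role") == "assistant" for c in payload.get("contents", [])):
        raise ValueError('use role "model" instead of "assistant" for Gemini')
```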