
Prompt Engineering: A Complete Professional Guide for Claude and GPT

Prompt engineering is not a trick. It is a disciplined practice that determines whether your AI application is reliable or embarrassing. This guide covers the fundamentals, the model-specific techniques, and the production patterns that actually work.

5 April 2026 · 12 min read · FindCoder Team

Prompt engineering is the discipline of writing instructions that reliably produce the output you want from a large language model. It is not magical incantations, and it is not going away. Even as models get smarter, the teams that get the most from them are the ones who write prompts deliberately — because a well-written prompt is cheaper, faster, and more reliable than any model upgrade.

This guide walks through what prompt engineering actually is, the principles that apply across all models, and the techniques that are specific to Claude and OpenAI.

What Prompt Engineering Actually Is

A prompt is the full input the model sees when it generates a response. That includes the system prompt, the conversation history, any retrieved documents, tool definitions, and the user's latest message. Prompt engineering is about shaping every part of that input so the model has exactly the information, structure, and constraints it needs — and no more.

Think of it less like writing clever questions and more like writing a specification for a very smart but context-free intern. Anything you assume the model "should know" from context is a risk.

The Universal Principles

These apply to every modern LLM, whether Claude, GPT, or anything else.

  • **Be explicit about the task.** State the goal in the first sentence. Models weight early tokens heavily.
  • **Be explicit about the output format.** If you want JSON, describe the schema. If you want a list, say so. If you want prose, specify length.
  • **Give examples (few-shot).** Two or three concrete input/output pairs usually outperform paragraphs of description.
  • **Separate instructions from data.** Use delimiters — XML tags, triple quotes, Markdown headings — so the model cannot confuse your instructions with user content.
  • **Tell the model what NOT to do only after telling it what TO do.** Negative instructions alone often fail; they work well as guardrails on top of a positive instruction.
  • **Ask the model to think before answering for hard problems.** Chain-of-thought reasoning improves accuracy on non-trivial tasks, and is now built into reasoning-capable models by default.
  • **Constrain ambiguity.** If multiple interpretations are valid, tell the model which one to pick or ask it to ask a clarifying question.
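Several of these principles can be seen together in a small sketch. Everything here is illustrative: `build_prompt`, the sentiment task, and the few-shot pairs are made up for the example, not taken from any SDK.

```python
# Sketch: a prompt that applies the principles above — task first,
# output format stated, few-shot examples, data behind delimiters.

FEW_SHOT = [
    ("The package arrived crushed.", "negative"),
    ("Delivery was a day early, great service!", "positive"),
]

def build_prompt(review: str) -> str:
    lines = [
        "Classify the sentiment of a product review.",           # explicit task, first sentence
        "Respond with exactly one word: positive or negative.",  # explicit output format
        "",
    ]
    for text, label in FEW_SHOT:                                 # few-shot examples
        lines.append(f'Review: """{text}"""\nSentiment: {label}\n')
    lines.append(f'Review: """{review}"""\nSentiment:')          # delimiters separate data
    return "\n".join(lines)

print(build_prompt("It broke after two days."))
```

The triple quotes around each review are doing real work: the model cannot mistake user content for a new instruction.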

Writing Prompts for Claude

Anthropic's Claude models have particular strengths and idioms that pay off when you know them.

  • **Use XML tags to structure content.** Claude was specifically trained to respect XML tags. Wrap documents in `<document>`, examples in `<example>`, instructions in `<instructions>`. This is the single highest-leverage technique for Claude prompts.
  • **Put long reference material before the question.** Claude pays more attention to material early in the context when it is tagged and referenced later.
  • **Use a system prompt for role and policy.** Claude respects system prompts strongly. Put your persona, the rules, and the high-level goal there; put the task and data in the user turn.
  • **Let Claude think out loud when it helps.** Ask Claude to "think step by step inside `<thinking>` tags before giving the final answer." For Claude Opus with extended thinking, this is usually unnecessary — the model already does it internally.
  • **Trust the instructions.** Claude tends to follow detailed instructions more literally than GPT. Short, under-specified prompts leave more room for the model to make judgment calls, which can be either a feature or a bug.

Example Claude prompt skeleton:

```
<instructions>
You are reviewing a customer support ticket. Classify it and draft a reply.
Return JSON with keys: category, urgency (low|medium|high), draft_reply.
</instructions>

<example>
Input: My order hasn't arrived.
Output: {"category":"shipping","urgency":"medium","draft_reply":"..."}
</example>

<ticket>
{{ticket_body}}
</ticket>
```
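In an application you would assemble that skeleton programmatically. A minimal sketch, assuming nothing beyond the standard library: `xml_wrap` and `build_claude_prompt` are hypothetical helpers, not part of the Anthropic SDK.

```python
# Sketch: building the Claude skeleton above in code.
# xml_wrap and build_claude_prompt are illustrative names.

def xml_wrap(tag: str, body: str) -> str:
    """Wrap content in the XML tags Claude was trained to respect."""
    return f"<{tag}>\n{body}\n</{tag}>"

def build_claude_prompt(ticket_body: str) -> str:
    instructions = xml_wrap(
        "instructions",
        "You are reviewing a customer support ticket. Classify it and draft a reply.\n"
        "Return JSON with keys: category, urgency (low|medium|high), draft_reply.",
    )
    example = xml_wrap(
        "example",
        "Input: My order hasn't arrived.\n"
        'Output: {"category":"shipping","urgency":"medium","draft_reply":"..."}',
    )
    ticket = xml_wrap("ticket", ticket_body)
    return "\n\n".join([instructions, example, ticket])

print(build_claude_prompt("Where is my refund?"))
```

The resulting string goes in the user turn; the persona and policy stay in the system prompt, as described above.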

Writing Prompts for OpenAI Models

OpenAI's GPT and o-series respond best to a slightly different style.

  • **Use Markdown structure.** GPT models respond well to Markdown headings, bullet lists, and fenced code blocks.
  • **Lean on function calling for structured output.** Rather than asking the model to "respond in JSON," define a function schema and call it. The model is forced into valid JSON.
  • **Put the most important instruction last.** GPT models, especially in long conversations, weight the final user message strongly. Reinforce critical constraints there.
  • **For the o-series, prompt less.** o3 and o4-mini do their own reasoning — heavy chain-of-thought prompting can actually hurt. Give them the problem, the constraints, and the required output format. Resist the urge to "guide" them.
  • **Use the `developer` role (the newer name for `system`) for policy.** Same pattern as Claude's system prompt.

Example OpenAI prompt skeleton (with function calling):

```
System: You are a classifier for customer support tickets.

User: Classify the following ticket and draft a reply.
Ticket: "My order hasn't arrived."

[function schema: classify_ticket(category, urgency, draft_reply)]
```
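The function schema in that skeleton would look something like the following. The shape follows the tool-definition format of OpenAI's Chat Completions API; the name, fields, and enum values mirror the example above and should be adapted to your own categories.

```python
# Sketch: the classify_ticket schema from the skeleton above, in the
# tool-definition shape OpenAI's Chat Completions API accepts.

classify_ticket_tool = {
    "type": "function",
    "function": {
        "name": "classify_ticket",
        "description": "Classify a customer support ticket and draft a reply.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {"type": "string"},
                "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
                "draft_reply": {"type": "string"},
            },
            "required": ["category", "urgency", "draft_reply"],
        },
    },
}
```

Passed via the `tools` parameter, this forces the model's answer into valid, schema-conforming JSON rather than hoping free-form output parses.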

Advanced Techniques That Pay Off in Production

  • **Prompt caching.** Both Claude and GPT support caching of long static prefixes (system prompts, RAG context). On Claude, the savings are dramatic — up to 90% on cached tokens. Structure your prompts so the static part comes first.
  • **Self-consistency.** For hard problems, sample the model N times and take the majority answer. Expensive but reliably improves accuracy.
  • **Self-critique.** Ask the model to review its own answer against the original constraints before finalising. Catches a surprising percentage of hallucinations.
  • **Retrieval grounding.** Never let the model free-form when facts matter. Retrieve source material and instruct the model to cite or fail.
  • **Structured schemas over prose.** If the downstream system needs JSON, use function calling or `response_format` with a JSON schema. Free-form JSON requests still fail.
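Of these, self-consistency is the easiest to sketch: sample N times, take the majority vote. Here `sample_answer` is a stand-in for a real model call at non-zero temperature; the stubbed answers are purely illustrative.

```python
from collections import Counter

# Sketch: self-consistency. sample_answer stands in for one model
# call at temperature > 0; here it is stubbed for illustration.

def majority_answer(sample_answer, n: int = 5) -> str:
    """Sample the model n times and return the most common answer."""
    votes = Counter(sample_answer() for _ in range(n))
    return votes.most_common(1)[0][0]

# Stubbed model: answers "42" three times out of five.
answers = iter(["42", "41", "42", "41", "42"])
print(majority_answer(lambda: next(answers)))  # prints "42"
```

The cost is N times the tokens of a single call, which is why this is reserved for the questions where being wrong is expensive.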

Anti-Patterns to Avoid

  • Vague asks ("make this better"). Define what "better" means.
  • Walls of negative instructions ("don't do X, don't do Y..."). Lead with what you want.
  • Burying the task under context. State the task up front and put the context below it.
  • Role-play as a substitute for instructions ("You are a world-class expert..." without telling the model what to do).
  • Over-prompting reasoning models. If you are using o3 or Claude with extended thinking, step back.

The Production Mindset

In real applications, prompt engineering is not a one-shot artefact — it is code. Version it, test it, write evals for it, and monitor it in production. When a prompt regresses (because you changed a model, changed the retrieval pipeline, or changed nothing at all), evals are what tell you.
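Treating prompts as code can start with something very small. A minimal eval harness sketch, where `run_prompt` stands in for your real model call and the cases and pass threshold are illustrative:

```python
# Sketch: a tiny eval harness. run_prompt is whatever function wraps
# your prompt and model call; cases and threshold are placeholders.

CASES = [
    ("My order hasn't arrived.", "shipping"),  # (input, expected category)
    ("I was charged twice.", "billing"),
]

def evaluate(run_prompt, cases, threshold: float = 0.9):
    """Run every case through the prompt and report the pass rate."""
    passed = sum(run_prompt(inp) == expected for inp, expected in cases)
    score = passed / len(cases)
    return score, score >= threshold
```

Run this in CI on every prompt change and every model upgrade, and regressions surface before your users find them.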

At FindCoder, every AI application we ship includes an eval harness alongside the prompts. A prompt without an eval is a bug waiting to happen.

Ready to put this into practice?

Our engineers can implement this for your business. Let's talk.

Start a Conversation