Prompt Style & Patterns

Reusable prompt patterns that produce stable outputs and accelerate iteration.

My prompting style is structured and testable: explicit definitions, strict output contracts, and an iteration loop that treats prompts like versioned artifacts.

  • Definitions first
  • Output contract
  • Constraints
  • Evaluation criteria
  • Iterate + version

Pattern 1: Declare the Role + Task

Start by pinning down what the model is (and isn’t) doing. This reduces “creative reinterpretation.”

You are a precise [domain] analyst tasked with [specific outcome].
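As a minimal sketch, the role line can be filled from a small template so it stays identical across runs; the domain and outcome values below are hypothetical placeholders, not the ones I actually use.

```python
# Sketch: generate the role/task line from variables so every run
# pins the model to the same role. Values here are hypothetical.
def role_prompt(domain: str, outcome: str) -> str:
    return f"You are a precise {domain} analyst tasked with {outcome}."

print(role_prompt("narrative", "scoring each dimension on evidence"))
```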

Pattern 2: Provide Definitions, Not Hints

If you want consistent scoring, give tight definitions. Otherwise you get vibes and drift.

Use the following definitions:
1) ...
2) ...
3) ...
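One way to keep definitions tight is to render them from a single source of truth, so the wording never drifts between runs. A sketch, with hypothetical dimension names and definitions:

```python
# Sketch: render numbered definitions from one dict so every run
# sends identical wording. Dimensions here are hypothetical.
definitions = {
    "tension": "degree of unresolved conflict in the passage",
    "pacing": "rate at which events advance the plot",
}

def definitions_block(defs: dict[str, str]) -> str:
    lines = ["Use the following definitions:"]
    for i, (name, text) in enumerate(defs.items(), start=1):
        lines.append(f"{i}) {name}: {text}")
    return "\n".join(lines)

print(definitions_block(definitions))
```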

Pattern 3: Enforce an Output Contract

“Return JSON only” prevents narrative junk, makes parsing easy, and supports automated pipelines.

Return JSON only, no explanation:
{ ... fixed keys ... }
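The contract is only useful if the consumer enforces it. A sketch of the parsing side, assuming hypothetical fixed keys: reject anything that is not valid JSON or is missing a required key.

```python
import json

# Sketch: enforce the output contract on the consumer side.
# REQUIRED_KEYS stands in for the prompt's fixed keys (hypothetical).
REQUIRED_KEYS = {"tension", "pacing"}

def parse_contract(raw: str) -> dict:
    data = json.loads(raw)  # raises on narrative junk or malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

print(parse_contract('{"tension": 0.7, "pacing": 0.4}'))
```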

Pattern 4: Constrain the Scale

Where possible, define the score range and what high and low scores mean; this curbs score inflation.

Score each dimension 0.0–1.0 (or specified range) based on evidence in the text.
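Like the output contract, the scale is worth checking programmatically. A sketch that rejects any returned score outside the declared 0.0–1.0 range:

```python
# Sketch: validate that every returned score sits inside the
# declared range instead of silently accepting inflated values.
def check_scores(scores: dict[str, float], lo: float = 0.0, hi: float = 1.0) -> dict:
    for name, value in scores.items():
        if not (lo <= value <= hi):
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return scores

print(check_scores({"tension": 0.7, "pacing": 0.4}))
```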

Pattern 5: Build an Iteration Loop

Prompts evolve. I adjust one variable at a time, re-run samples, and keep what improves stability.

  • Start with a baseline prompt
  • Run a small test set
  • Identify the failure mode (drift, verbosity, missing keys, scale bias)
  • Change one thing
  • Re-run and compare
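The loop above can be sketched as a small harness that scores the same samples under two prompt versions and keeps whichever is more stable. The scorer here is a stub standing in for a real model call, and the stability metric (score variance) is one choice among several:

```python
import statistics

# Sketch: compare two prompt versions on the same test set and
# keep the one with lower score variance. run_prompt is a
# hypothetical stub; a real loop would call the model here.
def run_prompt(prompt_version: str, sample: str) -> float:
    return 0.6 if prompt_version == "v2" else 0.5 + 0.1 * (len(sample) % 3)

def stability(prompt_version: str, samples: list[str]) -> float:
    scores = [run_prompt(prompt_version, s) for s in samples]
    return statistics.pstdev(scores)  # lower = more stable

samples = ["sample one", "sample two", "sample three"]
baseline, candidate = stability("v1", samples), stability("v2", samples)
print("keep v2" if candidate <= baseline else "keep v1")
```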

Concrete Example

A full prompt example used to score nine narrative dimensions across a text sample.

Groq Prompt (Example)
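The original Groq prompt is not reproduced here. As a sketch only, this is roughly how the patterns above compose into a single scoring prompt; the dimension names are hypothetical (the real prompt scores nine):

```python
# Sketch only: NOT the actual Groq prompt. Assembles the role,
# scale, and contract patterns with hypothetical dimension names.
DIMENSIONS = ["tension", "pacing", "voice"]  # the real prompt uses nine

def scoring_prompt(text: str) -> str:
    keys = ", ".join(f'"{d}": <0.0-1.0>' for d in DIMENSIONS)
    return (
        "You are a precise narrative analyst tasked with scoring the text.\n"
        "Score each dimension 0.0-1.0 based on evidence in the text.\n"
        "Return JSON only, no explanation:\n"
        "{" + keys + "}\n\n"
        f"Text:\n{text}"
    )

print(scoring_prompt("Once upon a time..."))
```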

Note on Naming

“Prompt engineering” is treated as a real engineering skill here: it’s a repeatable method with constraints, contracts, evaluation criteria, and versioned iteration.