Prompt Style & Patterns
Reusable prompt patterns that produce stable outputs and accelerate iteration.
My prompting style is structured and testable: explicit definitions, strict output contracts, and an iteration loop that treats prompts like versioned artifacts.
Pattern 1: Declare the Role + Task
Start by pinning down what the model is (and isn’t) doing. This reduces “creative reinterpretation.”
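A minimal sketch of the pattern, assuming a helper that assembles the preamble; the role text and out-of-scope clause are illustrative, not a fixed recipe:

```python
def build_prompt(role: str, task: str, out_of_scope: str) -> str:
    """Pin down what the model is, what it does, and what it must not do."""
    return (
        f"You are {role}.\n"
        f"Your task: {task}\n"
        f"You do NOT {out_of_scope}. If asked, decline and restate your task.\n"
    )

prompt = build_prompt(
    role="a scoring engine for narrative quality",
    task="rate the pacing of the passage below on a 1-5 scale",
    out_of_scope="rewrite, summarize, or critique style",
)
```

Spelling out the "do not" clause is what closes off the reinterpretation paths, not the role line alone.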
Pattern 2: Provide Definitions, Not Hints
If you want consistent scoring, give tight definitions. Otherwise you get vibes and drift.
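One way to keep definitions tight is to store them as data and render them into the prompt, so every run scores against identical wording. The two dimensions here are hypothetical examples:

```python
# Hypothetical scoring dimensions; tight definitions replace "hints"
# and keep scores comparable across runs.
DEFINITIONS = {
    "pacing": "How quickly events advance. 1 = static, 5 = every sentence moves the plot.",
    "tension": "Unresolved stakes the reader feels. 1 = none, 5 = sustained throughout.",
}

def definitions_block(defs: dict) -> str:
    """Render definitions into a prompt section the model must score against."""
    lines = ["Score each dimension using ONLY these definitions:"]
    for name, meaning in defs.items():
        lines.append(f"- {name}: {meaning}")
    return "\n".join(lines)
```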
Pattern 3: Enforce an Output Contract
“Return JSON only” prevents narrative junk, makes parsing easy, and supports automated pipelines.
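The contract only pays off if the pipeline enforces it. A sketch of the parsing side, assuming hypothetical dimension names; any violation of the contract raises instead of passing silently:

```python
import json

REQUIRED_KEYS = {"pacing", "tension"}  # placeholder dimension names

def parse_scores(raw: str) -> dict:
    """Enforce the output contract: JSON only, required keys, numeric values."""
    data = json.loads(raw)  # raises if the model wrapped the JSON in narrative
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not all(isinstance(v, (int, float)) for v in data.values()):
        raise ValueError("scores must be numeric")
    return data
```

Failing loudly here is the point: contract violations become visible failure modes you can iterate on, rather than junk that leaks downstream.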
Pattern 4: Constrain the Scale
Where possible, define the score range and what the high and low ends mean, to avoid score inflation.
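The scale lives in two places: an anchoring clause in the prompt and a check in the pipeline. A sketch, with illustrative anchor wording; rejecting out-of-range scores (rather than silently clamping) makes scale violations show up during iteration:

```python
# Illustrative anchor text: defines the range and pins down the endpoints.
SCALE_CLAUSE = (
    "Scores are integers 1-5. 5 is rare: reserve it for samples that would "
    "stand out even in professionally edited work. 3 is the expected median."
)

def validate_score(value, lo: int = 1, hi: int = 5):
    """Reject scores outside the declared range instead of clamping them."""
    if not lo <= value <= hi:
        raise ValueError(f"score {value} outside [{lo}, {hi}]")
    return value
```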
Pattern 5: Build an Iteration Loop
Prompts evolve. I adjust one variable at a time, re-run samples, and keep what improves stability.
- Start with a baseline prompt
- Run a small test set
- Identify the failure mode (drift, verbosity, missing keys, scale bias)
- Change one thing
- Re-run and compare
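The loop above can be sketched as a small harness. `score_fn` stands in for the actual model call, which is assumed, not shown; stability is measured as the spread of repeat runs over the same test set:

```python
from statistics import pstdev

def stability(score_fn, prompt: str, samples: list, runs: int = 3) -> float:
    """Lower is better: mean per-sample standard deviation across repeat runs."""
    spreads = []
    for text in samples:
        scores = [score_fn(prompt, text) for _ in range(runs)]
        spreads.append(pstdev(scores))
    return sum(spreads) / len(spreads)

def keep_better(score_fn, baseline: str, variant: str, samples: list) -> str:
    """One change at a time: keep the variant only if it improves stability."""
    if stability(score_fn, variant, samples) < stability(score_fn, baseline, samples):
        return variant
    return baseline
```

Stability is one possible metric; the same harness works for any scalar you care about (verbosity, contract violations, scale bias), as long as you change one variable per comparison.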
Concrete Example
A complete prompt, combining the patterns above, used to score nine narrative dimensions of a text sample.
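A hedged sketch of what such a prompt could look like when the patterns are composed; the dimension names below are placeholders, not the actual nine dimensions:

```python
DIMENSIONS = ["pacing", "tension", "voice"]  # placeholder subset, not the real nine

def scoring_prompt(text: str, dimensions: list) -> str:
    """Compose role, definitions hook, scale clause, and output contract."""
    keys = ", ".join(f'"{d}"' for d in dimensions)
    return (
        "You are a scoring engine for narrative quality.\n"                 # role + task
        f"Score the passage on each dimension: {', '.join(dimensions)}.\n"  # definitions go here
        "Scores are integers 1-5; 3 is the expected median.\n"              # constrained scale
        f"Return JSON only, with exactly these keys: {keys}.\n"             # output contract
        f"---\n{text}\n---\n"
    )
```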
Note on Naming
“Prompt engineering” is treated as a real engineering skill here: it’s a repeatable method with constraints, contracts, evaluation criteria, and versioned iteration.