The Pragmatic Prompter · Part 1

Stop wrestling your AI

Three prompting tricks that produce code you can actually ship — and actually understand.

ru5ty · 5 min read

Most AI-generated code has the same problem: it looks plausible, compiles fine, and then slowly poisons your codebase with hallucinated assumptions and write-only spaghetti. These three tricks fix that at the prompt level — before a single line gets written.

The default mode of working with an AI coding agent is to dump a task on it and hope for the best. That produces default-quality output: generic, over-engineered, and riddled with assumptions the model made up because you didn't give it a reason to do otherwise. The fix isn't longer prompts — it's structurally different prompts.

Here are three tricks I've found that consistently shift the output from “technically functional” to “genuinely useful.” Each one targets a different failure mode.

Trick 01

Let the AI interview you

When you're exploring a problem — scoping a system, choosing an architecture, working through requirements — the temptation is to write a big upfront prompt explaining everything. The problem is that you don't know what you don't know, and the model will happily fill the gaps with plausible-sounding nonsense rather than asking.

The fix: flip the dynamic. Tell the AI your objective, then instruct it to ask you one question at a time, letting each answer inform the next question.

Prompt
I need to design an event-driven pipeline for
processing incoming vendor invoices.

Don't start building yet. Ask me one question
at a time about my requirements, constraints, and
existing systems. Let each of my answers shape your
next question. When you have enough context, tell me
and we'll start on the design.

This does two things. First, it forces the model into a discovery mode instead of a solution mode, which means it gathers real constraints instead of inventing convenient ones. Second, it surfaces questions you hadn't thought to answer — edge cases, integration points, failure modes — because the model is drawing on a broad pattern base to probe your specific situation.
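If you drive the agent through an API rather than a chat window, the same instruction drops into the first message of a conversation loop. Here's a minimal sketch in TypeScript, assuming the openai npm package and Node's built-in readline; the client setup, model name, and loop structure are illustrative, not prescriptive:

import OpenAI from "openai";
import * as readline from "node:readline/promises";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function interview(objective: string) {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  // Seed the conversation with the objective plus the interview instruction.
  const messages: ChatCompletionMessageParam[] = [
    {
      role: "user",
      content:
        `${objective}\n\nDon't start building yet. Ask me one question ` +
        "at a time about my requirements, constraints, and existing " +
        "systems. Let each of my answers shape your next question. When " +
        "you have enough context, tell me and we'll start on the design.",
    },
  ];

  // One iteration = one question from the model, one answer from you.
  // Stop (Ctrl+C) once the model says it has enough context.
  while (true) {
    const completion = await client.chat.completions.create({
      model: "gpt-4o", // illustrative; use whatever model you prefer
      messages,
    });
    const question = completion.choices[0].message.content ?? "";
    console.log(`\nAI: ${question}`);
    messages.push({ role: "assistant", content: question });

    const answer = await rl.question("\nYou: ");
    messages.push({ role: "user", content: answer });
  }
}

interview(
  "I need to design an event-driven pipeline for processing incoming vendor invoices."
);

The point of the loop is the one-question-at-a-time constraint: the model never gets to dump a design on you before the real constraints are on the table.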

Trick 02

Make it read before it writes

This one sounds almost too simple, but the difference it makes is enormous. When you ask an AI agent to analyse or modify an existing codebase, it will — left to its own devices — skim file names, infer structure from conventions, and start generating based on what it assumes is in the files. The output looks right. It compiles. And it's subtly wrong in ways you won't catch until production.

The fix: explicitly instruct the agent to read each file before reasoning about it.

Prompt
Before analysing or modifying anything, read each
file you need to reference. Do not assume what is
in a file based on its name or path. Open it, read
the contents, and base your analysis on what is
actually there.

Without this instruction, the model pattern-matches against its training data. It sees UserService.ts and assumes a standard service class with CRUD methods. It sees config.yaml and assumes a typical structure. Your actual code might follow completely different patterns — and the model will confidently generate analysis and modifications that don't fit.
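To make that concrete, here's a hypothetical UserService.ts. Everything in it is invented for illustration, but it shows the shape of the problem: the file the model imagines and the file on disk can diverge completely.

// What the model assumes a file named UserService.ts contains:
// a conventional service class with CRUD methods.
class AssumedUserService {
  async getUser(id: string) {
    return { id, name: "Ada" };
  }
  async updateUser(id: string, name: string) {
    return { id, name };
  }
}

// What it might actually contain: an event-sourced factory with no
// CRUD methods at all. Generated code that calls updateUser() won't
// even compile against this.
type UserEvent =
  | { kind: "registered"; email: string }
  | { kind: "deactivated"; reason: string };

export function makeUserService(append: (event: UserEvent) => Promise<void>) {
  return {
    register: (email: string) => append({ kind: "registered", email }),
    deactivate: (reason: string) => append({ kind: "deactivated", reason }),
  };
}

A model that reads the file before reasoning about it produces code against the second version. A model that pattern-matches on the name produces code against the first.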

Trick 03

Tell it someone's learning from this

Default AI-generated code optimises for completion — getting to a working solution by the shortest path. That path runs straight through cryptic variable names, collapsed logic, missing context, and either zero documentation or the worst kind: verbose comments that restate what the code already says.

The fix: tell the agent to write as if this is a learning project, and that the person reading the code will be learning from it.

Prompt
Write clean code as if this is a learning project.
Assume the reader will be learning the codebase from
your output. Avoid verbose documentation — focus on
clear naming, logical structure, and concise comments
that explain why, not what.

This single instruction shifts the entire output profile. Variable names become descriptive. Functions get smaller and single-purpose. Complex logic gets broken into readable steps. And the documentation hits the sweet spot: not the useless // increment counter style, but genuine insight into intent and design decisions.

The key qualifier is “avoid verbose documentation.” Without it, the model interprets “learning project” as “bury every line under a paragraph of explanation.” With it, you get documentation that respects the reader's intelligence — short comments where the why isn't obvious, clear naming everywhere else, and nothing that restates what the code already communicates.
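Here's a hypothetical before-and-after in TypeScript (the names and the surcharge rule are invented) showing the profile shift the instruction tends to produce:

// Default output: optimised for completion, not comprehension.
function proc(d: number[], t: number): number[] {
  return d.filter((x) => x > t).map((x) => x * 1.2); // multiply by 1.2
}

// "Learning project" output: descriptive names, single-purpose steps,
// and comments that explain why, not what.
const SURCHARGE_MULTIPLIER = 1.2;

function applySurchargeToLargeInvoices(
  invoiceTotals: number[],
  reviewThreshold: number,
): number[] {
  const largeInvoices = invoiceTotals.filter(
    (total) => total > reviewThreshold,
  );

  // Invoices above the threshold need manual review, which the
  // surcharge is priced to cover.
  return largeInvoices.map((total) => total * SURCHARGE_MULTIPLIER);
}

Both versions behave identically. Only one of them can be read by the next person without an archaeologist.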

None of these tricks are complicated. They don't require prompt engineering certifications or elaborate system messages. They work because they target the three places where AI code generation most reliably goes wrong: assuming context it doesn't have, assuming file contents it hasn't read, and optimising for completion instead of comprehension.

Fix those three failure modes at the prompt level, and the code you get back is a different class of output entirely.

More tricks incoming

This is Part 1 of an ongoing series. Stay tuned.