Stop Telling Agents What to Do. Tell Them What You Want.
2026-04-13
If you were an early adopter of LLMs and AI agents, there's a good chance you picked up a habit that's now quietly holding you back: telling the agent how to do something instead of what you need done. It felt like precision. It's become a ceiling.
The old way made sense at the time
When the first generation of coding assistants appeared, you had to be precise. Do this. Then do that. Check this condition. Handle that case. The model wasn't reasoning — it was pattern-matching. So you laid out a plan, because a plan was all it could follow.
That shaped how a lot of us learnt to prompt. Specific. Prescriptive. A series of instructions the model could execute like a script.
The irony is that the people most likely to be doing this are the ones who got in early — who were already using agents when everyone else was still sceptical. And a year later, that habit is holding them back. It's also a little strange to call this the "old way" at all. We're talking about instincts formed twelve months ago. That's how fast things have moved.
It made sense then. It's a liability now.
Prescriptive prompting is a ceiling
When you write a prompt that says "first do X, then do Y, then check Z", you've already decided the solution. You've constrained the agent to your own mental model of the problem. And your mental model might be wrong, or just not the best one.
The agent can't find a better path if you've already paved the road.
I've caught myself doing this — writing elaborate step-by-step prompts, then being mildly annoyed when the output was only as good as my instructions. Of course it was. I'd removed the agent's ability to improve on them.
Goal-based prompting gets out of the way
The alternative is to describe the outcome. Not the steps — the end state. What does "done" look like? What are the constraints? What does failure look like? Then let the agent figure out the route.
This is a meaningful shift. It moves you from directing the agent to briefing it. And agents — the modern ones with genuine planning and reasoning capabilities — respond to that differently. They explore. They backtrack. They find approaches you wouldn't have thought to specify.
Your job becomes defining the goal clearly, not solving the problem in advance.
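To make the contrast concrete, here is a sketch of the two styles side by side. The prompts, file names, and task are invented for illustration; the point is the shape, not the specifics.

```python
# Hypothetical prompts illustrating the two styles. The task (a CSV
# importer that crashes on malformed rows) and the file names are
# made up for this example.

# Prescriptive: the solution is already decided. The agent can only
# execute the plan, and the output is capped by the plan's quality.
prescriptive = """\
1. Open src/importer.py and find parse_row().
2. Wrap the int() conversion in a try/except.
3. On failure, log the bad row and continue to the next one.
"""

# Goal-based: the end state, the constraints, and what failure looks
# like. The route is left to the agent.
goal_based = """\
Goal: the CSV importer must not crash on malformed rows.
Constraints: skip bad rows, but record each one so we can audit later.
Done when: the full sample file imports and every skipped row is logged.
Failure looks like: silently dropping rows, or aborting the whole import.
"""

print(prescriptive)
print(goal_based)
```

Note that the goal-based brief is not vaguer, it is differently precise: exact about outcomes and constraints, silent about implementation.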
It holds up better as models change
There's a practical side to this beyond quality of output. Prescriptive prompts are brittle. They're written for a specific model's behaviour at a specific point in time. When the model changes — and they change constantly — the prompt often breaks in subtle ways, because it was tuned to a particular set of quirks.
Goal-based prompts are more durable. The goal doesn't change when the model does. You describe what success looks like, and each new model version can find its own way there.
Multiple agents, one problem
There's another thing goal-based prompting enables that prescriptive prompting rules out: running multiple agents on the same problem and getting something useful from each.
If you've hardcoded the steps, every agent produces the same output. But if you've described a goal, different agents — or the same agent with different context — can take different paths, apply different priorities, surface different trade-offs. You can then compare, combine, or use the outputs to stress-test each other.
That's actually useful. Prescriptive prompting collapses that possibility before it begins.
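The fan-out pattern above can be sketched in a few lines. Everything here is hypothetical: `run_agent` is a stub standing in for a real agent call (an LLM API request, a CLI agent, whatever you use), and the goal and context strings are invented. The structure is the point: one goal, several briefing contexts, parallel runs, comparable outputs.

```python
from concurrent.futures import ThreadPoolExecutor

# One goal, stated as an outcome, not a plan.
GOAL = "Reduce p95 latency of /search below 200ms without raising cost."

# Different briefing contexts steer each run toward different trade-offs.
contexts = {
    "caching": "Prioritise adding or tuning caching layers.",
    "query": "Prioritise rewriting the underlying search query.",
    "infra": "Prioritise infrastructure changes (sharding, replicas).",
}

def run_agent(goal: str, context: str) -> str:
    # Placeholder for a real agent call. It just echoes the brief so
    # the sketch is self-contained and runnable.
    return f"[proposal shaped by: {context}] for goal: {goal}"

# Same goal, different contexts, run in parallel.
with ThreadPoolExecutor() as pool:
    proposals = dict(
        zip(contexts, pool.map(lambda c: run_agent(GOAL, c), contexts.values()))
    )

for name, proposal in proposals.items():
    print(f"{name}: {proposal}")
```

With a hardcoded step list this loop would return three near-identical answers; with a shared goal and varied context, each run surfaces a different trade-off you can compare or combine.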
The shift is small but the effect compounds
You don't need to throw away everything you've written. But the next time you sit down to brief an agent, try holding back the how. Describe the outcome. Set the constraints. Trust the agent to find the path.
More often than not, it'll find one you wouldn't have written yourself.