This essay is best read as a stream of consciousness and a running set of updates, not a flag in the ground that leaves no room for change. My views on AI tooling are evolving quickly as the models improve, and I expect this piece to evolve with them.
As engineers and builders, we have to decide intentionally how AI fits into our workflow. The models are getting much better at understanding messy prompts and inferring intent, but better models do not remove the need for better process.
If you open every AI session cold with no structure, you often leave good output on the table. The quality jump for me came from treating AI less like a magic answer box and more like a collaborator that performs best when I provide momentum and constraints.
§ § §
The case for intentional workflows
I treat AI like a force multiplier for momentum, not a replacement for judgment. For me, "force multiplier" means I still need a decent mental model of the final output, plus a rough map of the steps the model should take to get there. Without that, it is easy to get polished output that is directionally wrong.
It also means being situationally aware. There are moments when the right move is to step out of the way and let the model run with minimal interruption, and there are moments when I should take control immediately to course-correct architecture, constraints, or intent. The skill is not just prompting; it is knowing when to drive and when to delegate.
Context, plan, build
A practical loop I keep coming back to is:
- Build context.
- Make a plan.
- Build and iterate.
This pattern works across almost everything I do: drafting docs for my team, writing first-pass communications, shaping product thinking, and implementing code. It is tool-agnostic and simple enough to use daily.
The context phase is where most of the wins come from. I front-load what matters: goals, constraints, surrounding code, and what success looks like. The planning phase turns that context into concrete steps. Then I execute in short loops, checking whether the output matches intent.
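The loop above can be sketched as prompt scaffolding. Everything here is hypothetical: `ask` is a stand-in for whatever model call you use (it just echoes the prompt so the example stays runnable), and the function and parameter names are illustrative, not any tool's actual API.

```python
# A minimal sketch of the context -> plan -> build loop as prompt scaffolding.
# `ask` is a hypothetical stand-in for a real chat-model call; here it simply
# echoes the first line of the prompt so the example is self-contained.

def ask(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API client's chat method)."""
    return f"[model response to: {prompt.splitlines()[0]}]"

def context_plan_build(goal: str, constraints: list[str], success: str) -> str:
    # 1. Build context: front-load goals, constraints, and success criteria.
    context = (
        f"Goal: {goal}\n"
        + "\n".join(f"Constraint: {c}" for c in constraints)
        + f"\nSuccess looks like: {success}"
    )

    # 2. Make a plan: ask for concrete steps before asking for any output.
    plan = ask(f"Given this context, list the concrete steps:\n{context}")

    # 3. Build and iterate: execute against the plan, then check the result
    #    against the original intent before accepting it.
    draft = ask(f"Execute step by step, following this plan:\n{plan}")
    return draft

result = context_plan_build(
    goal="Draft a migration doc for the billing service",
    constraints=["keep it under one page", "no breaking API changes"],
    success="a reviewer can follow it without asking questions",
)
```

The point is not the code itself but the ordering: context is assembled before planning, and planning happens before any output is requested, so each model call inherits the structure of the one before it.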
What I still own
I still treat judgment as my job.
- I do not ship code I cannot explain line by line.
- I still own architecture and tradeoffs.
- I still do final passes for readability and correctness.
- I use AI to reduce startup friction, not outsource taste.
Addendum (Mar 3, 2026)
Newer models (including Claude Opus 4.6 and GPT-5.3 Codex) have me rethinking how strictly this loop needs to be applied every time. This is the first model generation where a rough, out-of-order prompt often still lands close to what I wanted.
In many cases, I can now give a messy instruction dump and the model will:
- structure the request,
- infer the plan,
- execute the work,
- and self-correct in one pass.
It is not perfect, but it is reliable enough that I intervene less than before. I think many tools are moving quickly in this direction, and this "implicit planning" behavior will only improve in the next few months.