Most AI development fails because people spec the wrong thing. I start with direction — North Stars that guide decisions — then let specifications emerge as tools, not prerequisites.
Direction-Driven Development starts with why, not what. Before any specification, I establish the direction that guides every decision.
The guiding direction — what you're ultimately trying to achieve. Not a task, not a feature. The reason the product exists.
Concrete objectives under each North Star, built with obstacle anticipation.
Jobs-to-be-Done
User Stories → Features
Read → Plan → Code → Validate
SPECs apply to implementation. Discovery and design come first — or you're specifying the wrong thing.
Specs aren't religion. They're tools. The goal isn't maximum specification — it's specifications that earn their existence by preventing real problems.
Once direction is set and design is complete, these four elements ensure consistent, reproducible implementation:
Explicit boundaries — what's included, what's not. Keeps the AI from hallucinating its way into adjacent features.
Constraints and requirements. What must always be true, what must never happen.
Anticipated exceptions and how to handle them. The AI knows what to do when things go wrong.
Explicit statements the AI must treat as absolute truth — no questioning, no reinterpretation.
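To make that concrete, here's a minimal sketch of how the four elements might sit together in one SPEC file. The filename, field names, and the password-reset example are illustrative, not a prescribed format:

```yaml
# spec/password-reset.yml (illustrative sketch, not a prescribed format)
scope:
  included:
    - Request a reset link by email
    - Set a new password via a signed, single-use token
  excluded:
    - Changing the account email address
    - Admin-initiated resets
constraints:
  always: Tokens expire after 30 minutes and can be used only once
  never: Reveal whether an email address exists in the system
exceptions:
  - when: The token is expired or already used
    then: Show a neutral "link no longer valid" message and offer a new request
absolute_truths:
  - Authentication is handled by the existing session middleware; do not add a new auth layer
```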
Gather context, understand existing code, identify dependencies
Structure the approach, create SPEC, define success criteria
Execute with AI assistance, following the SPEC exactly
Test against criteria, review with human oversight
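Put together, a single task cycle could be captured roughly like this. The structure, file names, and the rate-limiting task are illustrative assumptions, not a fixed template:

```yaml
# Task cycle for "add rate limiting to the reset endpoint" (illustrative; file names are hypothetical)
read:
  context: [routes/reset.ts, middleware/auth.ts, docs/rate-limits.md]
  dependencies: [redis client already configured in lib/cache.ts]
plan:
  spec: spec/password-reset.yml
  success_criteria:
    - Sixth request within an hour for the same email returns 429
    - Existing reset flow is unchanged for normal use
code:
  notes: AI generates the middleware; follow the SPEC exactly, no adjacent refactors
validate:
  - Unit tests cover the boundary between the fifth and sixth request
  - Human review of the full diff before merge
```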
Most AI coding failures happen because developers jump straight to "Code" without reading existing context or planning the approach. The AI makes assumptions, those assumptions conflict with reality, and debugging spirals begin.
My workflow ensures the AI always has complete context before generating a single line. Planning happens explicitly, not implicitly. And validation isn't an afterthought — it's built into every cycle.
Before any code, we establish the guiding direction. What problem are you solving? Who has it most acutely? What does success look like? Human judgment defines the destination.
Break the North Star into concrete goals with obstacle anticipation. Each goal has clear outcomes, identified blockers, and If-Then contingency plans. No wishful thinking.
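For instance, one goal under a North Star might be written down along these lines (the goal, numbers, and field names are invented for illustration):

```yaml
# goals/onboarding.yml (illustrative sketch)
north_star: New users reach their first successful report without human help
goal: Median time to first report under 10 minutes
outcomes:
  - Guided setup completes in three steps or fewer
obstacles:
  - if: Users stall at the data-import step
    then: Ship a sample dataset they can explore immediately
  - if: Three steps proves too aggressive for real data sources
    then: Fall back to five steps and revisit after launch metrics
```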
Jobs-to-be-Done analysis. User story mapping. Feature files. This is where we figure out what to build — before committing to how. SPECs emerge here, not before.
Now SPECs matter. Read → Plan → Code → Validate for each task. AI generates, humans review. Every output tested against acceptance criteria. This is where direction becomes code.
As we build, we capture portable artifacts: schemas, contracts, flows, logic. The working code generates its own specification — a Transition Kit that survives stack changes.
AI debugging effectiveness drops dramatically after 2-4 attempts in a single session.
I've observed — and the research confirms — that AI debugging follows a decay curve. The first attempt at fixing an issue has the highest success rate. Each subsequent attempt in the same context has diminishing returns as the AI accumulates conflicting assumptions.
The key insight: decay happens when you keep trying the same approach. The fix isn't just "reset after N attempts" — it's trying genuinely different approaches first:
1. Try three different approaches — not three variations of the same idea
2. Document what was attempted — prevents repeating failures
3. Only then reset — with fresh context and explicit learnings
This isn't giving up — it's engineering discipline. The documentation of failed approaches becomes institutional memory that survives the reset.
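A minimal sketch of what that documentation could look like at the moment of reset. The issue, the attempts, and the file layout are invented for illustration:

```yaml
# debug-log/flaky-upload-test.yml (issue, attempts, and layout invented for illustration)
issue: Upload integration test fails intermittently in CI
attempts:
  - approach: Increase the test timeout
    result: Still fails roughly one run in five; timing is not the root cause
  - approach: Mock the storage client instead of the emulator
    result: Passes, but hides the race condition the test exists to catch
  - approach: Serialize the two writers behind an explicit lock
    result: Fixes local runs; CI stays flaky, which points at shared state
carry_forward: Suspect a shared temp directory between CI jobs; start the next session from this file only
```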
I don't chase the latest AI model announcements. I go deep on one stack — Anthropic's Claude — because depth beats breadth when you're building production systems.
If Claude can't do something, I build the tooling myself. If I can't build it, then — and only then — I look elsewhere. This discipline ensures I understand my tools completely, not superficially.
The result: predictable behavior, accumulated expertise, and no surprises when it matters most.
What happens when you need to rebuild in a different stack? My specs aren't throwaway docs — they're a portable Transition Kit that survives technology changes.
schemas/*.yml
Your data model — entities, relationships, constraints. Generates ORM models in any language.
contracts/*.yml
Your API surface — endpoints, request/response formats. Maps to routes in any framework.
flows/*.yml
Your user journeys — step-by-step behavior, not wireframes. Rebuilds in any UI framework.
logic/*.yml
Your business rules — validation, calculations, edge cases. The "why" that survives rewrites.
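For a flavor of what the kit holds, here is a tiny illustrative slice. The entities, endpoints, and rules are invented; the point is that each file captures intent, not implementation:

```yaml
# Illustrative slice of a Transition Kit (entities, endpoints, and rules are invented)

# schemas/invoice.yml
entity: Invoice
fields:
  id: uuid
  customer_id: { type: uuid, references: Customer }
  total_cents: { type: integer, min: 0 }
  status: { type: enum, values: [draft, sent, paid, void] }

# contracts/invoices.yml
endpoint: POST /invoices
request: { customer_id: uuid, line_items: [LineItem] }
response: { 201: Invoice, 422: ValidationError }

# flows/send-invoice.yml
flow: Send invoice
steps: [select draft invoice, confirm recipient, send email, mark status as sent]

# logic/invoicing.yml
rule: An invoice can be voided only while its status is draft or sent
because: Paid invoices feed the ledger and must stay immutable
```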
Rebuild in Next.js, React Native, Go, Phoenix — the kit travels with you. This is the difference between specs that die with the codebase and specs that outlive it.