Experience report
From Code to Intent
One year of AI workflow on an open source project
Field notes on a year of AI-assisted development — the fumbling, the realizations, and the adjustments along the way.
01 — The starting point
May 2025. I launch Ezkey, an open source MFA project, as a solo dev. Total greenfield. Everything goes on main — no branches, no tickets. The goal is simple: maximum velocity.
AI coding assistants are still young. They don’t spontaneously create work plans, let alone preserve them across sessions. But they code well. I gain confidence quickly: the AI generates working Spring Boot code, tests pass, JPA entities fall into place. The velocity is real.
What I haven’t realized yet is that this speed will force me to completely change the way I work.
02 — The documentation wake-up call
Very quickly, I’m stunned by the amount of information scrolling past me. The AI produces code, analyses, suggestions — and everything vanishes at the end of the session. I realize I’m losing value with every conversation.
On my own initiative, I start systematically asking for everything to be saved to a file: work plan, analysis, decisions made. It's a handmade process, but it works. A few months later, coding agents natively integrate a "plan mode." I keep the habit, and it matures along with the tooling.
Fast forward to early 2026: every feature starts with a versioned work plan. When the work is complete, the plan is annotated and archived in a dated folder (e.g. .cursor/plans-archived-2026-03/). History is preserved. The feature and its documentary trace are linked.
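As a sketch, the end-of-feature archiving step can be a couple of shell commands. The plan file name and the idea that active plans live in `.cursor/plans/` are illustrative assumptions, not the project's actual layout:

```shell
# Hypothetical layout: active plans live in .cursor/plans/,
# finished ones move into a dated archive folder.
plan=".cursor/plans/feature-x.md"
archive=".cursor/plans-archived-$(date +%Y-%m)"

mkdir -p "$archive"
mv "$plan" "$archive/"   # or `git mv` so the move itself is versioned
```

Using `git mv` keeps the rename in history, so the feature commit and its plan's archival appear side by side in the log.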
The docs/ folder I had almost neglected at the start has become a living corpus. I regularly consolidate it: I take analysis files of varying age, ask the AI to evaluate them, redistribute the relevant information, and cut the noise. The coding assistant excels in this librarian role.
03 — From guilt-free trunk-based to worktrees
For months, I feel slightly guilty. Everyone does branches, pull requests, reviews. I push to main. Yet it works. The code compiles, tests pass, the project moves fast.
Then I come across teams openly documenting their adoption of trunk-based development. The guilt fades. My workflow — maintaining velocity with minimal Git ceremony — is a legitimate choice, not a shortcut.
A bit later, I add Git worktrees. The principle: three working directories, each on its own branch, but all synchronized to main. The first for analysis and design. The second for the mobile app. The third for backend APIs.
The benefit is immediate. I can run a conceptual analysis on one worktree while another generates and tests code. Merges are simple — often a fast-forward — because the strategy stays trunk-based. And it’s a clean context switching tool: changing subjects no longer means “losing the thread” — the worktree preserves the full state.
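A minimal sketch of this setup, with illustrative directory and branch names (the actual layout may differ):

```shell
# One repository, three working directories, each on its own branch.
git worktree add ../ezkey-analysis -b analysis   # design and analysis
git worktree add ../ezkey-mobile   -b mobile     # mobile app
git worktree add ../ezkey-backend  -b backend    # backend APIs

# Because the strategy stays trunk-based, each branch keeps close
# to main, so bringing work back is usually a fast-forward:
git checkout main
git merge --ff-only backend

git worktree list   # shows the main checkout plus the three worktrees
```

The `--ff-only` flag makes the trunk-based discipline explicit: if the merge cannot fast-forward, git refuses, signaling that a worktree has drifted too far from main.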
04 — The progressive elevation
Here’s what happened, gradually, over one year:
At first, I write individual prompts. “Generate a REST controller for me.” I’m in the bottom layer: I produce code.
A few weeks later, I no longer code. I design. I describe entities, relationships, flows — and the AI generates the implementation. But I realize my designs carry my biases. I move up a notch toward analysis: I ask for alternatives, compare with similar projects, have trade-offs evaluated.
Then comes the deepest shift. Today, before even thinking about analysis, I frame the intent. Who will use this feature? What does their daily work look like? What real problem are we solving? And I do it with the accumulated documentary corpus — PRD, previous analyses, archived plans.
The AI, anchored in a well-organized corpus, becomes a product owner partner. It knows the entities, the constraints, the past decisions. The quality of the dialogue rises — and decisions converge faster.
05 — Liberating thought: voice dictation
November 2025. I adopt a voice dictation tool to interact with the coding assistant. It’s a change I hadn’t anticipated — and it turned out to be radical.
When you type, you self-censor. You economize words because writing takes time, your hands tire, and you end up truncating your thought. You give a minimal prompt and hope the AI will guess the rest. Often, it guesses wrong — and you iterate.
When you dictate, you think out loud. Thought flows freely from one concept to another. You don’t search for the exact word — you express the complete intent, with its context, its nuances, its values. This isn’t context overload: it’s relevant framing that your fingers would never have bothered to type.
This is directly tied to project values. When you can freely state all the considerations relevant to a topic — constraints, operational context, principles to respect — the AI receives framing of incomparably greater richness than a typed prompt would have provided. The quality of its responses improves immediately.
What I discovered is that voice dictation is the tool that makes the upper levels of the elevation pyramid truly accessible. Stating intent, recalling values, framing operational context — all of that requires words, many words, and the voice frees them effortlessly. It creates a fusion of ideas with the AI that the keyboard alone could not.
06 — Conceptual integrity at scale
At first, I used the AI for technical integrity: standardizing Java records, enums, naming. Then at the design level: do all DTOs follow the same mapping pattern? Are error responses consistent across modules? The AI evaluates across the codebase and reports gaps. Standardizing a cross-cutting pattern is exactly what it does better than a human.
Then I moved up again. Activation, deactivation, revocation, deletion — these concepts cut across every Ezkey entity. I had addressed them in separate work plans, and that produced inconsistent behaviors. If you deactivate a tenant, what effect on enrollments? On API keys? On integrations?
The revelation: the problem isn’t any individual plan — each one works. The problem is the absence of a cross-cutting view. I produced a global document, an analysis that takes all entities together to prioritize and organize lifecycle concepts. And the AI, fed this complete corpus, was able to play a first-class analysis partner role.
| Integrity level | Concrete example |
|---|---|
| Code | Standardize Java records, enums, naming |
| Design | Verify that DTOs, mappers, and error responses follow the same pattern |
| Analysis | Ensure lifecycle consistency across all entities |
| Intent | Every design decision is anchored in a real operational need |
07 — Project values as a compass
There’s a concept I repeat constantly in Ezkey that deserves attention: project values. These aren’t wishful thinking posted in a README. They are explicit principles, stated, repeated, and used as decision criteria at every level — code, design, analysis, product strategy.
Why is this so important with AI? Because the AI needs framing. Without explicit values, it will optimize according to its own statistical biases. With clearly stated values, it has a heading to follow. And so do I.
Here are Ezkey’s project values:
- The 80/20 rule. Aim for 80% of the effect with 20% of the effort. Constantly. At every level. It’s a ruthless filter against gold plating and over-engineering.
- Pragmatism. The simplest solution that meets the need. Not the most elegant, not the most generic — the most pragmatic.
- Essential complexity yes, accidental complexity no. Complexity that directly serves the objective is accepted. Unnecessary indirection, premature abstraction, extra layers “just in case” — rejected.
- Eat your own dog food. The system must be self-contained, integrated, coherent. No external dependency you don’t control. Internal integrity isn’t a luxury — it’s a design goal.
- Follow community consensus. For every decision, ask: what do similar projects do? What would a developer coming from another product consider normal and expected? If the answer is “what on earth is this?”, that’s a red flag. If it’s “yep, makes sense, nothing to see here”, you’re on the right track.
These values are repeated throughout the documentary corpus, and the AI integrates them into its framing. When I request an analysis, the AI already knows the answer must be pragmatic, follow the 80/20 rule, and avoid accidental complexity. This eliminates enormous noise from exchanges.
It’s also a debiasing tool. When I’m tempted by an overly sophisticated solution, the values bring me back to essentials. When the AI proposes something exotic, I can ask: “Is this what we most commonly observe in comparable projects?” The response recalibrates instantly.
08 — Critical thinking as a key skill
In the summer of 2025, like many, I felt somewhat dispossessed. Forty years writing code for the joy of it, mastering every syntactic subtlety. Watching code produce itself before my eyes and thinking: “I’m not doing anything anymore.”
Then the velocity became real. And I understood: the challenge isn’t writing the code — it’s raising its quality. Javadoc, unit tests, analysis documents: what used to be “nice-to-have best practices” became a fundamental condition for AI assistance to remain reliable.
But the real change was learning to contain my own biases. Current AI is an advanced statistical tool: it steers its responses according to probabilities. If I phrase a question with a bias, the AI digs in. If I say security is the number one criterion, the decisions will be radically different from those where I prioritize performance. The AI reflects and amplifies the framing you give it.
The strategies I adopted:
- Ask for alternatives. Never commit to an idea without seeing the options. “What are the three possible approaches? Rank them by trade-offs.”
- Ask for comparisons. “How do similar projects handle this problem?” This opens a horizon of possibilities you wouldn’t have envisioned alone.
- Adjust your vocabulary. I learned to phrase questions more neutrally to avoid tilting the AI in a predetermined direction.
- Don’t dig in. As soon as you feel escalation of commitment — you’ve invested in a path and refuse to backtrack — it’s time to request an objective status check.
This happens all too easily with AI. But it’s also something we should do every day, with everyone. AI simply forces us to improve a skill we were neglecting: rigor in our own thinking.
I don’t claim to get this right every time. Biases are stubborn, and I still regularly find myself committed to a path before realizing I should have stepped back three stages earlier. Knowing it isn’t enough — you have to practice it, and it’s an ongoing exercise.
09 — The current workflow
Today, every initiative follows a predictable cycle:
- Intent — Who is the user? What is their real need in their daily work?
- Conceptual analysis — How do domain entities interact to fulfill this intent?
- Design — Known patterns, compared alternatives, documented decisions
- Code generation — The AI implements, tests validate
- QA and iteration — Conceptual bugs → design reframing → short loop
Most of the time, the code works essentially on the first pass. The remaining issues are either technicalities (logging, configuration) or design limitations, which get challenged by moving back up to the appropriate level.
Worktrees allow this cycle to run in parallel across different topics. The challenge then becomes managing the switching cost and evaluating each stream's results.
Mastering context
The further you move up toward design and analysis, the more relevant the context fed to the AI must be, and the greater the risk of it becoming too voluminous. I learned the hard way that context overload degrades response quality just as much as missing context does.
The strategy that made the difference: slice into verticals and chain plans. Concretely: when I modify a backend API, it gets a dedicated work plan, centered on that vertical. Once the backend is validated and tested, I generate the updated OpenAPI specification and dispatch it to sub-projects — the admin UI, the mobile app. Each sub-project starts its own plan, in its own session, with targeted context.
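The dispatch step can be sketched as a small script. The paths, the sub-project names, and the idea that the spec lands in an `api/` folder are all assumptions for illustration; the spec-generation step itself depends on the stack's build tooling:

```shell
# Once the backend vertical is validated and the OpenAPI spec is
# regenerated (by whatever build task the stack provides), copy it
# to each sub-project so its next work plan starts with targeted context.
spec="backend/target/openapi.json"

for sub in admin-ui mobile-app; do
  mkdir -p "$sub/api"
  cp "$spec" "$sub/api/openapi.json"
done
```

Each sub-project then opens its own session against its own copy of the contract, which is exactly the vertical slicing described above.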
The other lever: versioned work plans become reusable condensed context. At the start of a new plan, it’s enough to tell the AI: “go read the previous work plan.” The archived plan contains the decisions, constraints, design choices — already condensed. It’s ultra-relevant context with minimal tokens. Context window efficiency is maximized.
It’s a compounding effect: every well-written document improves the quality of all future exchanges, and every archived plan becomes a condensed entry point for the next iteration.
10 — What I take away
What I describe here is an ideal I’m working toward — not a reality I live perfectly every day. There are days when I fall back to the quick prompt, skip the plan, or dig into a path without challenging it. The “perfect” workflow doesn’t exist. What exists is the direction.
AI doesn’t replace thinking. It amplifies it. Cognitive load decreases on the code side and increases on the design side — and that’s exactly where it delivers the most value.
The prerequisite is investing in the documentary corpus. The more rigorously you document, the more substance the AI dialogue gains. It’s a virtuous cycle that compounds with every iteration.
The antidote to the echo chamber risk is sharpened critical thinking, anchored in explicit project values. Stay objective. Ask for alternatives. Accept being wrong and stepping back.
I share these notes without claiming to have found the definitive formula. This is a work in progress — the project evolves, the tools evolve, and my understanding does too. If some of these ideas resonate with your own experience, great. If they seem incomplete, they probably are.
French: French version of this article
See also: AI coding manifesto