Generative AI and No-Code / Low-Code: A Parallel (2018–2022)
1. Historical context (2018–2022)
During this period, the tech industry saw explosive growth in No-Code and Low-Code platforms (Mendix, OutSystems, Bubble, PowerApps, etc.).
The promise
These platforms promised a revolution:
- Democratization: let “non-experts” (domain experts) build applications.
- Velocity: development “10× faster” than traditional software delivery.
- Autonomy: an end to dependence on often overloaded IT departments.
The fear (the “great replacement” of developers)
A palpable worry spread among some IT professionals:
- fear of being replaced by drag-and-drop tools;
- the idea that raw technical skill (writing C++, Java, or Python) would lose all value to visual interfaces.
2. The parallel with AI-assisted coding (today)
Today, LLMs (Large Language Models) and coding assistants (GitHub Copilot, ChatGPT, etc.) trigger similar dynamics.
- New interface: natural language replaces the No-Code GUI. Instead of dragging a button, you ask the AI to “create a blue button.”
- New speed promise: again, the story is massive productivity gains.
- Fear returns: “Will AI replace developers?” is everywhere.
3. Analysis: what actually happens
In hindsight, developers' fear wasn't justified, for two major reasons that apply equally to AI today.
A. The unavoidable need for specification
What really happened is that “expert users” (citizen developers) who jumped in soon hit a wall: logic.
- Code isn’t the only obstacle: even in No-Code mode, you must know what you want to build.
- Rigor and design: you can’t just “wire a bunch of cables together” and hope for magic. You need clear specifications, strategy, and at least some systems analysis and design.
- AI parallel: today, “prompt engineering” is just another form of tight specification. If the user can’t structure the ask, the AI produces incoherent code.
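The idea that a prompt is just another specification can be sketched in a few lines. The field names below (`goal`, `inputs`, `constraints`, `acceptance`) are illustrative conventions, not a standard:

```python
# Sketch: the same request expressed as a vague ask vs. a structured
# specification assembled into a prompt. Field names are hypothetical.

def build_prompt(spec: dict) -> str:
    """Assemble a structured prompt from explicit specification fields."""
    sections = [
        f"Goal: {spec['goal']}",
        f"Inputs: {spec['inputs']}",
        f"Outputs: {spec['outputs']}",
        "Constraints:",
        *[f"- {c}" for c in spec["constraints"]],
        "Acceptance criteria:",
        *[f"- {a}" for a in spec["acceptance"]],
    ]
    return "\n".join(sections)

vague = "Make a login form."  # the unstructured ask

structured = build_prompt({
    "goal": "Render a login form component",
    "inputs": "email (string), password (string, min 12 chars)",
    "outputs": "POST /api/login JSON payload",
    "constraints": ["no external UI libraries", "WCAG AA labels"],
    "acceptance": ["rejects empty fields", "masks the password field"],
})

print(structured)
```

The structured version forces the author to decide inputs, outputs, and acceptance criteria up front, which is exactly the analysis work No-Code never eliminated either.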
B. The platform trap (vendor lock-in) and proprietary architecture
A major barrier to mass adoption was the No-Code market structure itself.
- The wrapper tax: each platform had to invest heavily to wrap standard libraries and expose them as visual components (the available “wiring”). Users were limited to whatever the vendor chose to implement and maintain.
- Strategic rejection: decision-makers often rejected these platforms despite the efficiency claims, because adoption meant immediate vendor lock-in. A common root cause of non-adoption: the risk of technological dependency outweighed the theoretical productivity gains.
- Trapped in the tool: there’s no interoperability. Migrating from one No-Code solution to another is a nightmare because business logic is locked in a proprietary format.
- One-way import: newcomers offer import tools to capture the market; export barely exists.
C. The break: interchangeable AI models
The fundamental difference with modern coding assistance is the absence of lock-in:
- Interchangeability: LLMs are interchangeable. The building blocks aren't proprietary widgets but standard libraries (e.g. Spring Security). A trained model can use those standards to generate code from specifications without tying the user to one platform.
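A minimal Python sketch of that interchangeability: the calling code depends on a small interface, not on any one vendor. Both clients here are stand-in stubs; real SDK calls would replace the bodies of `complete`.

```python
from typing import Protocol

class LLMClient(Protocol):
    """The only contract the application depends on."""
    def complete(self, prompt: str) -> str: ...

class StubVendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class StubVendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def generate_code(client: LLMClient, spec: str) -> str:
    """Business logic is written once, against the interface."""
    return client.complete(f"Generate code for: {spec}")

# Swapping vendors changes one argument, not the calling code.
print(generate_code(StubVendorA(), "a Spring Security filter"))
print(generate_code(StubVendorB(), "a Spring Security filter"))
```

The design choice mirrors the article's point: the valuable artifact is the specification and the standard-library code around it, not the model behind the interface.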
Practical nuance: interchangeability isn’t free
While interchangeability is technically real (unlike with No-Code), it still comes with friction:
- Capability variance: models differ in context size, reasoning quality, supported languages. A prompt tuned for GPT-4 needs tweaks for Claude or Gemini.
- Integration ecosystems: tools create “soft lock-in” (GitHub Copilot ↔ VS Code, Cursor with proprietary features). Switching means relearning workflows.
- Prompt engineering investment: prompt techniques aren’t universal. Moving models takes relearning effort.
Yet this friction is fundamentally different from No-Code vendor lock-in: business logic stays in standard code (Java, Python, etc.), not a proprietary format. Migration is an adaptation cost, not a full rewrite. That’s major progress — even if it shouldn’t be sold as zero-cost transfer.
Amplifying capability
- For non-experts: the ability to build applications through “vibe coding” or a spec-first approach.
- For experts: AI acts as a multiplier. Experts gain efficiency because they can steer generation toward optimal code.
Important nuance: real levels of autonomy
Two realities must be distinguished:
- Prototyping and exploration: a non-expert can indeed ship working prototypes quickly with AI. That genuinely democratizes experimentation and idea validation.
- Production and maintenance: code that runs ≠ professional-grade code. Judging the quality of generated code (security, performance, maintainability, standards) remains expert work. Code that “works” isn't necessarily code that “should be deployed.”
So AI doesn’t replace expertise; it shifts the skill bar: where five years of experience used to be needed to be productive, maybe two or three suffice today. But leaping from “non-developer” to “autonomous professional developer” remains a myth. AI is an accelerator, not a substitute for foundational learning.
4. The constant: context engineering (Spec-First)
Despite the technology shift, one truth holds from the Low-Code era to the AI era: you need clear specifications.
You can’t skip thinking. For a system (Low-Code or AI) to produce a valid result, you need strategy and design.
That’s what we now call context engineering or a Spec-First approach. Generation quality depends directly on the quality of the frame and specs fed to the model.
The underestimated difficulty: specifying well is a craft
It's easy to say “just specify well,” but that understates the difficulty:
- Expertise required: writing complete, unambiguous, structured specs takes experience. It’s a skill built over time.
- Delicate balance: too much detail over-constrains the AI and stifles useful variation; too little lets the generated code drift off target. Finding the right level of abstraction is an iterative process.
- Real time cost: quality specs take time. For some simple tasks, coding directly is still faster than specifying then generating.
A pragmatic approach
- Start light: you don’t need a heavy framework (BMAD) for every task. Match ceremony level to risk and complexity.
- Iterate and learn: first specs will be imperfect. That’s normal. AI can help refine specs too (feedback loop).
- Compound over time: document patterns that work; build reusable templates. Spec-First investment pays off over the long run, not on day one.
Spec-First isn’t a silver bullet — it’s a progressive discipline that, mastered well, greatly amplifies AI effectiveness. It’s an investment, not a perfect prerequisite on day one.
5. Strategy: toward hybrid, code-first documentation (takeaways)
To succeed with AI in development, you must solve the documentation strategy challenge. AI needs context to perform, which forces a rethink of where and how we document systems.
The tension: Confluence/Jira vs code-first
There’s natural tension between traditional tools and AI needs:
- Confluence & Jira: still strategic for product vision, product-owner needs, and high-level architecture.
- Code-first documentation: for effective AI (“context engineering”), it needs technical specs, mappings, and detailed functional flows in the code context (repository).
Evaluating external bridges (MCP-style approaches): a context question
One approach is keeping everything in Confluence/Jira and using technical bridges (e.g. MCP servers, integration plugins) to connect AI to those sources. That can work — but deserves contextual evaluation:
When bridges can make sense
- Living product documentation: if business specs change daily in Confluence and your PO team lives there, a single source of truth may justify the investment.
- Distributed teams: when QA, PO, and Dev share artifacts, centralizing in one tool has real organizational value.
- Compliance and audit: some regulated contexts require centralized traceability that enterprise tools provide natively.
Hidden costs to weigh
- Technical overhead: initial setup, auth maintenance, API version management, integration debugging. Every developer must configure and understand these tools.
- Cognitive latency: the AI navigates multiple contexts (code + external wiki), which can dilute precision and raise token costs.
- Technical debt: bridges add dependencies. What if the bridge vendor disappears or changes its API?
Recommendation: separation of concerns
For most cases, a pragmatic hybrid approach still wins:
- Vision and needs (Confluence/Jira): the “why,” user stories, high-level architecture.
- Technical specifications (in-repo): the “how,” detailed analyses, mapping rules, API contracts — everything the AI needs to generate quality code.
That split reduces technical complexity while respecting existing workflows. It isn't dogma: if your org has heavily invested in MCP integrations and they work, there's no need to rip them out. What matters is measuring real ROI and not adopting bridges by default without evaluating simpler alternatives.
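As a sketch of the in-repo half of that split: specs that live next to the code can be gathered into AI context with a few lines. The `docs/specs` layout and file names below are hypothetical conventions, demonstrated here against a temporary directory.

```python
import tempfile
from pathlib import Path

def load_spec_context(repo_root: Path) -> str:
    """Concatenate every in-repo spec file into one context string."""
    spec_dir = repo_root / "docs" / "specs"
    parts = []
    for spec in sorted(spec_dir.glob("*.md")):
        parts.append(f"## {spec.name}\n{spec.read_text()}")
    return "\n\n".join(parts)

# Demo: build a throwaway repo layout, then assemble its context.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    specs = root / "docs" / "specs"
    specs.mkdir(parents=True)
    (specs / "api-contract.md").write_text("POST /orders returns 201.")
    (specs / "mapping-rules.md").write_text("ISO dates only.")
    context = load_spec_context(root)
    print(context)
```

Because the specs are plain files under version control, they evolve in the same pull requests as the code they describe, which is the point of the code-first half of the hybrid.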
Ceremony levels (specification frameworks)
Several approaches structure in-repo documentation with different levels of formality (“ceremony”). The landscape has shifted since early AI coding assistants: major IDEs now ship Plan-style workflows (low ceremony inside the editor), while light in-repo formats such as Open Specs sit alongside spec kits under medium ceremony.
- No ceremony (free exploration): no predefined structure. Essential for fluid conceptual exploration where checklists or templates would stifle creativity.
- Low ceremony (e.g. IDE Plan mode): structured planning offered by leading AI coding IDEs (VS Code ecosystem, Cursor, and similar); intent and steps are shaped in the editor before code generation, without adopting a separate in-repo specification framework.
- Medium ceremony (e.g. Open Specs, GitHub Copilot Spec Kit): light, informal capture of intent and design in the repository, plus more standardized spec-kit layouts; enough structure for reliable AI consumption of written specs, without BMAD-level weight.
- High ceremony (e.g. BMAD): very formal, resource-heavy (human and AI), for critical systems demanding absolute precision.
AI as a first-class partner (fluid interaction)
Beyond documentation, the interaction mode changes. AI isn’t just completion — it’s a thinking partner.
- Voice interaction (e.g. Wispr Flow): voice enables unmatched fluidity for clarifying ideas, unblocking analysis, or “pair programming” with AI.
- Active collaborator: treating the AI assistant as a first-class partner for debugging and behavioral analysis is a winning strategy.
Note on data and experimentation
This document offers qualitative analysis based on market trends and the historical analogy with No-Code. It’s important to recognize:
Metrics vary by context
Reported AI productivity gains range from 20% to 200% depending on the study, language, and task. That variance means there is no universal “magic number.”
Current studies (GitHub, McKinsey, Stack Overflow Developer Survey 2024–2025) agree on real but context-dependent gains: unit tests (+40–60%), refactoring (+30–50%), exploring unfamiliar codebases (+70–100%).
Invitation to measured experimentation
Rather than relying only on external metrics, every team should:
- Run its own pilots: pick 2–3 volunteer developers, define clear KPIs (task completion time, code quality via review, developer satisfaction).
- Measure before/after: baseline typical tasks before AI adoption, then compare after 4–6 weeks of use.
- Document learnings: failures teach as much as wins. Which task types benefit most? Which are poorly served?
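The before/after comparison above can be sketched in a few lines. The numbers are purely illustrative; real data would come from your own task tracker.

```python
from statistics import median

def relative_change(before: list[float], after: list[float]) -> float:
    """Percent change in median task completion time (negative = faster)."""
    b, a = median(before), median(after)
    return (a - b) / b * 100

baseline_hours = [6.0, 8.0, 5.5, 7.0, 9.0]   # typical tasks, pre-adoption
pilot_hours    = [4.0, 5.5, 4.5, 5.0, 6.0]   # same task types, weeks 4-6

change = relative_change(baseline_hours, pilot_hours)
print(f"Median completion time change: {change:+.1f}%")
```

Using the median rather than the mean keeps one outlier task from dominating a small pilot sample; pair this with the qualitative KPIs (review quality, developer satisfaction) before drawing conclusions.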
Further reading
- GitHub Next Research (github.com/github-next): empirical studies on Copilot.
- State of AI Report (annual): macro market trends.
- Your own experience: the best “number” is the one you measure in your context.
This document's goal isn't to prove conclusively that AI is “the answer,” but to structure strategic thinking for informed, pragmatic adoption.
Conclusion: the hybrid approach
Success isn’t “everything in code” or “everything in the wiki.” It takes a hybrid approach:
- In Confluence/Jira: vision, the “why,” business requirements.
- In code (Spec-First): the “how,” detailed technical analyses, mapping rules.
By adopting this specification discipline (context engineering), you turn AI from a mere completion gadget into a true systems development partner.
French: Same article in French
See also: AI coding manifesto