I've been building software with AI coding agents for a while now, and something has shifted that I didn't fully expect. It's not just that building software is faster. It's that the whole dynamic of building feels different—and the closest analogy I can find, even though I'm neither a musician nor a writer, is solo creative work, where the gap between idea and expression collapses to almost nothing.
The common thread through everything I've noticed is this: AI compresses the feedback loop between idea and working code. That sounds simple, but the implications run surprisingly deep. These are my early observations—not conclusions, but patterns worth naming and exploring further.
Overview
Four themes have emerged from this experience. Each is explored in depth below.
1. AI restores the tight loop between idea and implementation, making development feel more like solo creative work.
2. AI reliability tracks how much of a given language or framework it has seen. Calibrating trust accordingly changes how you work.
3. Prompt specificity should match your confidence in the solution, not the complexity of the task.
4. When building is cheap, the logic of traditional work management inverts. You build to discover, then curate.
1. The Solo Creator Dynamic
The Observation
Traditional software development broke the loop between idea and execution. Even in lean environments, there is coordination overhead between conceiving something and seeing it work. You depend on others, you wait, and the moment of creative momentum passes.
AI coding changes this. The gap between idea and implementation shrinks to minutes, and the experience begins to resemble other solo creative disciplines—writing, music, design—where it is just you and the work. You try something, see it, react, change it. The feedback is immediate and iteration is cheap.
The bottleneck has moved from "can I build this?" to "will I keep questioning it?"
The Nuance
The speed is only an advantage if you use it to iterate. The temptation with AI is to accept the first result that works, because it arrived so fast that it feels like you already did the creative work. Accepting "good enough" quickly is not the same as building something well quickly.
The discipline required has changed. In traditional development, the challenge was execution—getting the thing built at all. With AI, execution is largely solved. The new challenge is curiosity: the willingness to question what you have, try a different direction, and ask whether the result is actually good rather than merely functional.
AI coding shifts the creative constraint from capability to curiosity. The question is no longer whether you can build it—it is whether you are willing to keep improving it.
2. The Training Density Effect
The Observation
AI coding agents are not uniformly capable across all languages and frameworks. Performance varies significantly depending on how much of a given domain exists in the model's training data. High-density domains—HTML, JavaScript, CSS, Python—produce reliable, accurate outputs with minimal iteration. Low-density domains—AWS CloudFormation, Swift, niche infrastructure tools—require far more cycles.
The failure mode in low-density domains is particularly frustrating: the model confidently states it has fixed a problem, implements a plausible-looking solution, and the same error recurs. This is pattern-matching to what a fix should look like, without genuine understanding of the specific error. Resolving these issues sometimes requires 5–10 iterations.
A Tiered Trust Model
Recognizing this pattern suggests a practical approach: calibrate your trust and workflow to the density of the domain you are working in.
High-Density Domains
HTML, JS/CSS, Python, React, common APIs
- Use AI more autonomously
- Review outputs at a higher level
- Trust the first result more readily
Low-Density Domains
CloudFormation, Swift, niche SDKs, infra-as-code
- Treat AI as a thought partner, not sole implementer
- Verify outputs more aggressively
- Expect the iteration loop—budget for it
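One way to make this calibration concrete is to encode it as a lightweight lookup. This is a sketch only: the domain lists, iteration budgets, and posture labels below are my own illustrative assumptions, not measured properties of any model.

```python
# Hypothetical sketch of a tiered trust model. Domain lists and budgets
# are illustrative assumptions, not benchmarks.

HIGH_DENSITY = {"html", "css", "javascript", "python", "react"}
LOW_DENSITY = {"cloudformation", "swift", "terraform"}

def trust_posture(domain: str) -> dict:
    """Return a suggested workflow posture for a given domain."""
    d = domain.lower()
    if d in HIGH_DENSITY:
        return {"role": "autonomous implementer",
                "review": "high-level",
                "iterations_budget": 1}
    if d in LOW_DENSITY:
        return {"role": "thought partner",
                "review": "line-by-line",
                "iterations_budget": 8}
    # Unknown domains: start cautious until you see how the model performs.
    return {"role": "thought partner",
            "review": "line-by-line",
            "iterations_budget": 4}
```

The point is not the specific numbers but the habit: decide your review posture from the domain before the first prompt, not after the third failed fix.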
The Meta-Skill
There is a learnable skill in recognizing when you have entered a low-density context before burning several iterations. Signals include: domain-specific error messages the model seems to misread, unusual stack traces, niche configuration syntax, and answers that are plausible but subtly off in ways that are hard to articulate.
When those signals appear, the right move is to shift your posture—slow down, read the model's output more critically, and consider whether you need to provide more context or break the problem into smaller pieces.
AI proficiency tracks training exposure. Calibrate your trust and workflow to the density of the domain. In high-density areas, accelerate. In low-density areas, verify.
3. Knowing When to Constrain the Problem
The Observation
There is a persistent question in AI-assisted work about how specific to be in prompts. The instinct is often to be more specific—more detail, more constraints, more direction. But experience suggests this is not always the right move, and the pattern of when to constrain versus when to open up is learnable.
A concrete example: when test output was difficult to interpret, the initial impulse was to specify exactly how the output should change. Stepping back and asking the broader question—"how could this output be improved for readability and interpretability?"—produced better results. The model surfaced formatting approaches that would not have been specified upfront.
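To make that example concrete, here is a hypothetical sketch of the kind of formatting the broader prompt surfaced: a failure-first terminal summary that can be scanned in a few seconds. The function name and format are my own assumptions, not the actual output the model produced.

```python
# Hypothetical illustration of "more readable test output": a compact,
# failure-first summary designed for quick terminal scanning.

def summarize(results: dict[str, bool]) -> str:
    """Render pass/fail results as a short, failure-first summary."""
    failed = [name for name, ok in results.items() if not ok]
    passed = len(results) - len(failed)
    lines = [f"{passed}/{len(results)} passed"]
    for name in failed:
        lines.append(f"  FAIL {name}")
    return "\n".join(lines)

print(summarize({"test_login": True, "test_logout": False, "test_signup": True}))
# prints:
# 2/3 passed
#   FAIL test_logout
```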
The Underlying Pattern
The decision about specificity comes down to who holds the better model of the solution space at that moment:
- Be specific when you know what good looks like and need execution. You have the answer—you need the implementation.
- Be general when you are optimizing for something but do not know the solution space. Let the model surface options you have not considered.
Be general about the approach but specific about constraints. For example: "Improve the test output readability—it needs to work in a terminal and take under five seconds to scan." This gives the model latitude to innovate while bounding the solution to what actually matters.
Prompt specificity should match your confidence in the solution, not the complexity of the problem. When you know the answer, constrain. When the model might know better, open up. When you know the constraints but not the solution, use structured openness.
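The "structured openness" pattern can be sketched as a trivial prompt template: state the goal openly, then pin down only the constraints that matter. The helper below is hypothetical, not any established prompting API.

```python
# Sketch of "structured openness": an open goal plus a short list of
# hard constraints. Function and field names are hypothetical.

def structured_prompt(goal: str, constraints: list[str]) -> str:
    """Build a prompt that is general about approach, specific about limits."""
    parts = [goal, "", "Hard constraints:"]
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

print(structured_prompt(
    "Improve the readability of our test output.",
    ["must render in a plain terminal", "scannable in under five seconds"],
))
```

The template does no real work; its value is as a checklist. If you find yourself writing the approach into the goal line, you have shifted from constraining the solution to dictating it.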
4. Discovery Before Prioritization
The Observation
Traditional product and software development follows a familiar sequence: prioritize, specify, build, then discover whether what you built was right. The underlying assumption is that building is expensive—in time, coordination, and money—so you front-load decision-making to avoid waste.
But if building is cheap, that logic inverts. You can build before you fully understand what you want, because the cost of being wrong is low. The model shifts from prioritize-then-build to build-then-discover. And if discovery comes after building, prioritization belongs after discovery—not before it.
What Changes
This is not an argument against prioritization. The prioritization problem does not go away. But its character changes fundamentally:
- Instead of prioritizing what to build, you are prioritizing what you have already built—deciding what to keep, refine, or discard.
- The gate moves from intake (before you build) to retrospective (after you can see what you have).
- Planning overhead designed to prevent building the wrong thing becomes friction, because the cost of building the wrong thing is now low enough to absorb.
- The valuable planning that remains is ruthless curation: deciding what to invest in further given what you can now actually see.
Open Questions
This theme is the least resolved of the four, and intentionally so. Several implications remain unclear and worth continued exploration:
- How do teams measure progress and value when story points and sprint velocity no longer map to meaningful output?
- How do you maintain strategic coherence when anyone can build anything quickly? Does prioritization need to shift from sequencing to vision-setting?
- What does "done" mean in a world where you can always improve something cheaply?
- How does this change the role of a product manager, whose traditional value was in deciding what to build before anyone picked up a keyboard?
When building is cheap, discovery should precede prioritization—not follow it. Planning shifts from specification to curation. The question is no longer "what should we build?" but "of what we have built, what is worth investing in further?"
Where This Goes Next
These four themes are early observations, not conclusions. Each one points toward a deeper question that warrants its own exploration:
- The solo creator dynamic raises questions about craft, taste, and what good software development looks like when execution is no longer the constraint.
- The training density effect will evolve as models improve—but the principle of calibrating trust to domain familiarity will likely remain relevant.
- The prompt specificity question connects to a broader set of practices around how to work effectively with AI—an area where systematic learning is still nascent.
- The inversion of the build-discover-prioritize sequence may be the most significant implication for how teams and organizations are structured.
I intend to explore each of them more deeply as I go.