The ripple effect of AI coding on the rest of the organization
With the advent of modern AI workflows, your coding got 10x faster. The rest of your organization didn't.
Where the current reality breaks
The typical software project runs through a chain. The account manager talks to the client and passes what they learned to a project manager. The PM writes specs. Developers build. A tester verifies. Eventually the result propagates back through the account manager to the client. Rinse, repeat.
For decades this chain worked fine, because code production was the slowest stage of the whole thing. It set the pace for everything else. The tester had downtime between releases. The PM could take days to spec out the next feature. Everyone had breathing room because they were all waiting on developers anyway.
Then AI made developers fast. Like real fast.
Suddenly implementation was done before the PM finished writing the next spec. The tester was permanently behind — changes landed faster than they could verify. Developers finding bugs during builds couldn't file reports fast enough without slowing down their own work. The PM became the bottleneck just by doing their job at a normal human pace. And when the finished feature was finally ready to be communicated to the customer, the customer was on sick leave. And the next feature was already waiting.
The pipeline as a whole didn't get faster. The choke point just moved outward to every other stage — each one originally designed to move at human coding speeds. I've written before about how AI tools haven't closed the experience gap for the person writing the code. This piece is about the rest of the chain around them.
What I ended up building instead
On a recent client project, I designed the whole process around this reality from the start.
There are no manually written specs. There is no manual tester either. The AI agents handle requirements elaboration, specs, implementation, testing, and bug filing. An agent goes through the original ask, evaluates whether it has the information it needs, and asks about whatever it can't figure out on its own. When agents find potential problems during development, they file bug reports themselves into the work queue. When new features surface edge cases, those go in too. The queue is alive — it grows and reprioritizes as the system works.
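To make the shape of that queue concrete, here's a minimal sketch in Python. Everything in it is a hypothetical illustration, not the actual system: the `WorkItem` fields, the priority scheme, and the `file`/`reprioritize` names are all invented. The essential idea is just a priority queue that both humans and agents can file into, and that can be re-scored while work is in flight.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkItem:
    priority: int                        # lower number = more urgent
    seq: int                             # tie-breaker so equal priorities keep filing order
    kind: str = field(compare=False)     # e.g. "feature", "bug", "edge-case"
    description: str = field(compare=False)

class WorkQueue:
    """A work queue that agents append to and re-score while work is in flight."""

    def __init__(self) -> None:
        self._heap: list[WorkItem] = []
        self._counter = itertools.count()

    def file(self, kind: str, description: str, priority: int) -> None:
        # Called by human intake and by agents that hit problems mid-build.
        heapq.heappush(self._heap, WorkItem(priority, next(self._counter), kind, description))

    def reprioritize(self, score) -> None:
        # Re-score every open item, e.g. after a new customer request lands.
        for item in self._heap:
            item.priority = score(item)
        heapq.heapify(self._heap)

    def next_item(self) -> WorkItem | None:
        return heapq.heappop(self._heap) if self._heap else None

queue = WorkQueue()
queue.file("feature", "export report as CSV", priority=2)
# An agent that finds a problem during a build files it straight into the same queue:
queue.file("bug", "date parsing breaks on non-UTC timezones", priority=1)
```

The point of the sketch is the last line: filing a bug is just another enqueue, so an agent that finds a problem doesn't have to stop its own work to report it.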
My role is at the two ends of the chain. On one end, I'm at the customer interface: talking to the client, understanding what they need, making sure we're solving the right problem, and passing that along as raw input to the agent. On the other end, I'm verifying output: does what was shipped actually look right, feel right, smell right, and do we have the right automated tests in place. Everything between those two points is AI-driven.
The cycle is fast. A customer request can go from conversation to deployed feature in hours, not weeks. But that speed only works because there's no handoff chain in between. No one is writing up specs for someone else to interpret. No one is waiting for a manual test pass. The system produces, verifies, and surfaces issues on its own. I point it in the right direction and check that the output makes sense. The pace is so fast that it started racking up costs I hadn't seen before: database, CI, build-time, and deployment expenses. When the pipeline churns out that many changes, each one tested and deployed automatically, those per-run costs add up in ways that a slower process never exposed.
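A back-of-envelope sketch of why those per-run costs suddenly show up. Every number below is made up for illustration (not billing data from the project); the only real claim is the multiplication.

```python
# Illustration only: all figures are hypothetical.
cost_per_run = 0.50        # dollars per change: CI minutes + build + test DB + deploy
workdays_per_month = 21

runs_per_day_old = 2       # a couple of changes a day at human coding speed
runs_per_day_new = 40      # an agent pipeline shipping continuously

monthly_old = cost_per_run * runs_per_day_old * workdays_per_month   # $21/month
monthly_new = cost_per_run * runs_per_day_new * workdays_per_month   # $420/month

print(f"old: ${monthly_old:.0f}/month  new: ${monthly_new:.0f}/month")
```

At two runs a day, nobody notices a fifty-cent deploy. At forty runs a day, it's a line item.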
The work changes shape
Naturally this setup changes the human job. The work effectively moves up an abstraction level.
You stop thinking about implementation and start thinking about selection. Instead of "how do I build this," it's "should we build this at all." Instead of reviewing code line by line, you're evaluating whether the output feels right as a product. Instead of planning two-week sprints, you're doing continuous prioritization against a queue that's always moving.
Code is getting cheap to produce, which means there's a constant pull to just build everything. Customer asked for it? Build it. Agent flagged a potential improvement? Build it. But if you chase every opportunity you risk spinning your wheels forever — shipping things nobody needed, fixing things that don't matter. Deciding what not to build becomes as important as deciding what to build.
This is what happens every time a layer gets automated. The humans move up one level. It happened when we went from assembly to high-level languages. It happened when frameworks replaced boilerplate. Now it's happening again. The skills are different — less execution, more evaluation and direction — but it's the same pattern.
Every stage gets its own revolution
When it comes to AI, we solved coding first because it has a lot of high-quality training data available, the tools already existed, and the use case was clear. It was a pretty cut-and-dried starting point. But look at where the rest of that chain is starting from.
The PM writing specs by hand. The tester running manual checklists. The account manager relaying feedback through email threads. Client communication that moves at calendar speed. Strategy decisions locked behind weekly meetings. Deployment processes designed for a cadence that no longer exists.
Each of these is now a constraint. And each one will eventually go through the same kind of transformation that coding is currently experiencing. Some of those transformations will be AI-driven. Some will be process redesign. Many will require custom tooling built for the specific organization, because the off-the-shelf products were designed for the old throughput. A CI/CD pipeline configured for ten deploys a month doesn't handle twenty deploys an hour. A project management setup built around two-week sprints doesn't map to continuous output.
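To put rough numbers on that mismatch (assuming, for illustration, an eight-hour working day and 21 workdays a month):

```python
old_deploys_per_month = 10
new_deploys_per_month = 20 * 8 * 21   # twenty deploys an hour -> 3,360 per month

print(new_deploys_per_month / old_deploys_per_month)  # 336.0: two orders of magnitude more runs
```

Tooling sized for the first number doesn't stretch to the second; it gets rebuilt.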
When it comes to the full production pipeline, we're at the very beginning of this. Coding was stage one. The rest of the pipeline is next — and right now, most of it is still running at human speed in a system that's already moved past it.
What stays human
It's not fully clear yet what each of these transformations will look like. But from building this way for a while now, I'm fairly confident about what stays human at the center of it. Three things:
Judgment — deciding what to build, when to pivot, and what to kill. The system can surface options and flag tradeoffs. The call is yours.
Taste — evaluating whether the output is actually good. AI can implement a design. It can't tell you if the design is right. It can't tell you if the interaction feels off or the user flow is confusing. That's still a human read.
Direction — pointing the system at the right problems in the right order. When everything can be built fast, choosing the sequence becomes the highest-leverage decision in the organization.
The tools will keep changing. The stages will keep transforming. But judgment, taste, and direction aren't going anywhere. (I believe this trio might have originally been coined by Every.to.)