The Limits of Vibe Coding: Why AI Still Doesn't Replace Senior Engineers
Software Engineering Is a Weighted Decision Tree
Let me tell you what I actually do all day. It’s not writing code. It’s making decisions, and more importantly, it’s assigning the right weight to those decisions before we’re locked in.
Every non-trivial system grows through branching choices: which storage model to use, where to put validation, how services communicate, how failures are contained, which abstractions will matter in six months. On the surface, many of these branches look fine when you pick them. The real question is what they cost you later: migration burden, operational overhead, latency, blast radius, the ability to change your mind.
A senior engineer doesn’t pick the branch that works today. They pick the branch with the lowest long-term regret. That weighting step is where most of the actual craft lives.
AI Traverses the Tree. Weighting It Is Still Your Problem.
The specific pattern I keep encountering: AI picks synchronous integration because it’s simpler to implement and the tests pass. Local result looks correct. What you don’t see until later is tighter deployment coupling, cascading failures under load, retry amplification, and an observability nightmare you didn’t budget for. The branch worked. The weight was wrong.
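Retry amplification is easy to state and easier to underestimate. Here's a minimal sketch (all names hypothetical) of a two-layer synchronous chain where each layer adds a "just in case" retry wrapper. Each layer looks locally reasonable; together they turn one failed user request into nine hits on an already-struggling dependency.

```python
# Sketch of retry amplification in a synchronous call chain.
# Each layer retries independently, so failures multiply load on
# the layer below instead of containing it.

DOWNSTREAM_CALLS = 0  # counts requests that actually reach the bottom service

def storage_call():
    """Bottom of the chain; hard-failing in this scenario."""
    global DOWNSTREAM_CALLS
    DOWNSTREAM_CALLS += 1
    raise TimeoutError("storage unavailable")

def with_retries(fn, attempts=3):
    """Naive retry wrapper of the kind each layer adds on its own."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except TimeoutError as exc:
            last_exc = exc
    raise last_exc

def service_b():
    return with_retries(storage_call)  # 3 storage calls per invocation

def service_a():
    return with_retries(service_b)     # 3 x 3 = 9 storage calls

try:
    service_a()
except TimeoutError:
    pass

print(DOWNSTREAM_CALLS)  # -> 9
```

Nothing in either layer is individually wrong, which is exactly why a model optimizing for the visible scope picks it: the branch works, the weight is wrong.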
This isn’t a complaint about AI being bad at code. It’s an observation that the models don’t have a stable way to assign weight to future consequences. They optimize for what’s visible in the current scope. That’s not a flaw; it’s a fundamental constraint of how they reason.
Big Context Windows Don’t Solve This
I’ve heard the counterargument: just give the model the full codebase, the architecture docs, all the prior decisions. Large context windows are real, and they do help. But visibility isn’t the same as prioritization.
What I see in practice is that important constraints still get deprioritized in the presence of a cleaner technical solution, especially the non-obvious ones, the kind that exist because of a painful incident two years ago. The model sees the constraint. It just doesn’t weight it heavily enough against the elegance of the path it’s building toward.
Recall improved. Judgment didn’t.
Some Decisions Deserve to Slow You Down
There’s a class of architectural choices that aren’t just hard to reverse; they’re expensive in ways that compound. Schema design. Tenancy boundaries. Consistency models. API contracts. Security assumptions baked into how services trust each other.
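To make the compounding concrete, here's an illustrative sketch (hypothetical schema) of one tenancy decision: a shared table with a `tenant_id` column. The decision looks cheap on day one, but every query written from then on must carry the tenant filter, and a single omission isn't a bug, it's a cross-tenant data leak.

```python
# Hypothetical multi-tenant schema: one shared table, discriminated
# by tenant_id. The cost of this choice is paid on every future query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "acme", 100.0), (2, "acme", 250.0), (3, "globex", 999.0)],
)

def invoices_for(tenant: str):
    # The WHERE clause below is the load-bearing line. Drop it and
    # "acme" sees "globex" data: the branch still works, the weight
    # of the original schema decision is what made it dangerous.
    return conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?", (tenant,)
    ).fetchall()

print(invoices_for("acme"))  # -> [(1, 100.0), (2, 250.0)]
```

Separate databases per tenant would have moved that enforcement into infrastructure instead of discipline; which trade-off is right depends on scale, but the point is that the choice is nearly irreversible once data accumulates.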
When I hit one of these nodes, I slow down. I ask more questions. I might push back on the whole framing. That instinct comes from having been wrong about these things before and understanding what that wrongness cost.
AI doesn’t slow down at these points. It keeps producing plausible output. Sometimes that’s exactly what you need: a fast, competent implementation of something well-understood. But at the risky nodes, the model’s confidence is the danger, not the solution.
What the Role Actually Looks Like Now
The senior engineer role hasn’t disappeared. It’s concentrated. I spend less time on boilerplate, refactoring, test generation, and routine debugging; AI handles a meaningful chunk of that now. What I spend more time on is defining constraints sharply enough that the AI can’t accidentally route around them, rejecting weak branches early before they get built, reviewing the trade-offs in generated code that the author (the model) couldn’t fully evaluate, and identifying hidden coupling before it becomes someone’s 2am incident.
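One way to define a constraint sharply enough that nothing routes around it, human or model, is to encode it as an automated check. Here's a hedged sketch of an architecture test: the rule (entirely hypothetical) is that the `billing` package must never import the internal `ledger` package directly, only its public API. The check parses generated source and flags violations.

```python
# Sketch of an architectural constraint expressed as an automated check,
# using Python's ast module to inspect imports. Package names and the
# rule itself are hypothetical examples.

import ast

# (importing package, forbidden top-level dependency)
FORBIDDEN = {("billing", "ledger")}

def violations(package: str, source: str) -> list[str]:
    """Return the names of forbidden modules imported by this source."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            if (package, name.split(".")[0]) in FORBIDDEN:
                bad.append(name)
    return bad

# A file an AI assistant might generate inside the billing package:
generated = "from ledger.internal import post_entry\nimport requests\n"
print(violations("billing", generated))  # -> ['ledger.internal']
```

A constraint that lives only in a design doc is a suggestion; a constraint that fails CI is a boundary. The weighting judgment still happens upstream, when you decide which rules are worth enforcing this way.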
The value of a senior engineer was never syntax. It was the early removal of costly mistakes. AI still struggles here, not because it’s unintelligent, but because that kind of judgment is grounded in consequences you’ve personally felt, in systems you’ve watched fail, in decisions you’ve had to live with.
One Honest Summary
AI is a genuinely useful amplifier. I rely on it. But it walks the decision tree, it doesn’t weight it. That part is still yours, and if you forget that, you’ll find out the hard way.