The Last Post-Mortem
There’s a specific kind of Stack Overflow post that tells you everything about how software knowledge used to work.
It starts at 2am. Someone is deeply stuck, the kind of stuck where you start questioning your career choices. They write it out, partly to ask for help, partly just to think. Sometimes they answer their own question before anyone responds. Sometimes the answer comes from a stranger in a different timezone who hit the exact same wall two years before.
That post stays. Gets indexed. Gets found. The pain was paid once and distributed to thousands.
That’s not how we learn anymore.
AI models are good at handling past failures because humans documented those failures obsessively. Every race condition, every memory leak, every misconfigured auth flow that took a team three days to find, someone wrote it down. Not for posterity. Out of exhaustion and the need to make sense of something that just cost them dearly. The models absorbed all of it. That’s where the competence comes from.
Now the agent handles the struggle. It hits the wall, recovers, produces something that works, and moves on. No post. No thread. No 2am confession to the internet. Stack Overflow question volume has been dropping since late 2022. The questions that used to become permanent knowledge artifacts are now answered in a chat window and forgotten.
The failure still happens, though. That part didn’t change.
What changed is who’s in the room when it does. A smaller team, managing a codebase that was largely generated by systems they directed but didn’t fully author. The incident happens. Someone opens the code that failed and it’s technically theirs, it passed their review, but the reasoning behind it is opaque in a way that handwritten code rarely was. The post-mortem gets written because post-mortems are process. But it describes symptoms, not causes. The lesson that should have propagated doesn’t. The next team hits the same wall and the documentation that should have caught them simply doesn’t exist.
This is how a feedback loop dies. Not dramatically. Just quietly, incident by incident, with perfectly formatted post-mortems that explain everything except what would have actually helped.
And here’s the irony worth sitting with.
The models are competent precisely because the humans who came before them were not. They struggled, publicly, in writing, on platforms that indexed everything. That collective, uncoordinated act of documentation is the substrate the models were built on. And now those models are quietly discouraging the behavior that created them.
Not intentionally. But when the agent answers the question before it gets posted, the post doesn’t exist. When the junior developer never sits with the confusion long enough to write about it, the artifact never forms.
At some point the models will start making mistakes that nobody has written about yet. New failure modes, born from new architectures and new scales. And the community that would have caught those mistakes, that would have suffered through them and written about them at 2am, is smaller now and increasingly likely to hand the problem back to the agent that caused it.
The last post-mortem may have already been written. We just don’t know which one it was.