AI Won’t Eliminate Engineers
Our new book, Crisis Engineering, is about what actually happens when systems break under pressure, and how to fix them.
It comes out April 7, 2026, everywhere books are sold, as a paperback and an audiobook (read by Cassandra Campbell).
For bulk orders below 500 copies, purchase at Porchlight. For bulk orders over 500 copies, please contact marina@layeraleph.com.
AI Won’t Eliminate Engineers. It Will Expose Your Weakest Systems.
As companies rush to adopt AI coding tools, they are pushing critical engineering risks out of sight and into the boardroom.
Every day brings a new breathless headline saying that generative AI models have rendered software engineers obsolete. The logic sounds straightforward: an AI tool can write code, therefore humans no longer need to. For executives, the story is irresistible. Faster, better, cheaper automation has been overtaking professions for generations. Ask the blacksmiths, typesetters, and switchboard operators. Why not software engineers? In some ways, it seems even easier. Replacing someone who works at a computer should be simpler than replacing a plumber.
The catch is that no one's business actually consists of selling code. The era when code was written, copied onto disks, and sold in shrink-wrapped boxes ended decades ago. Today Microsoft, Salesforce and others are really in the business of running vast online services from their own data centers. For most companies and government agencies, software is simply one layer inside a much larger system of people, machines and processes that make, move or track real things.
Those systems have a tendency to grow until they are too complex for any single person to fully understand. Customers demand scale. Managers demand lower costs. Regulators add new requirements. Each pressure makes the system larger, more interconnected and more fragile. Engineers patch the problem of the day while deeper risks accumulate underneath. Nothing forces a reckoning until something breaks. The space shuttle explodes. A state unemployment system collapses. An airline suddenly can’t fly planes.
I saw an example of this recently when I watched a 53-foot truck get stuck in my driveway because “the system” told the driver my house has a loading dock. It doesn’t. The driver had no idea where the instruction came from or how to correct it. Somewhere inside the stack of logistics software that routes trucks and deliveries, a small error had propagated outward into the physical world. Now imagine trying to trace that mistake back to its source.
Incidents like that happen because the underlying system has become too opaque to understand.
Inside a system like this, software becomes the translation layer between a company's intentions and its actions. Ideally it captures complicated tasks that people are bad at performing reliably. Instead of every salesperson calculating taxes and tariffs on an invoice, the system does it once and correctly. But that only works if the software remains understandable and maintainable. When it becomes opaque, the organization loses its ability to reason about its own operations.
Most companies and agencies are already struggling with systems under pressure. The familiar symptoms are IT modernization projects years behind schedule, business processes that no one can change, and costs rising as vendors realize they cannot be replaced. When we dig into these environments, we often find a dense layer of hidden risk buried in software systems that have become effectively unapproachable. Untangling that risk is hard enough when every line of code was written by humans. When much of the code was generated automatically by AI tools, it may be nearly impossible.
The risk is no longer hypothetical. Recent turmoil at Amazon serves as a warning. After months of pressuring engineers to adopt AI coding tools and demanding ever faster output, Amazon Web Services suffered a series of debilitating outages. According to reports, the company insisted the causes were "operator error, not AI," while quietly adding new review requirements for AI-generated code.
But a smaller number of engineers cannot realistically review a larger volume of lower-quality code unless they were mostly idle before. At Amazon, that seems unlikely.
Careless use of automated coding tools pushes difficult engineering problems into places where they are harder to see and harder to fix. The system may produce features faster, but it also accumulates new “unknown unknowns” that only reveal themselves under stress. The executives with a competitive advantage will be those who understand the difference between their business's accidental complexity, where it is safe to take easy gains, and its essential complexity, where it is not. This has always been the core of what the most skilled engineers do. And the most successful companies of the past generation have been the ones that treated this as a core competency, not a cost center. The systems that win are not the ones that bang out the most code per day. They are the ones that keep running when the unexpected happens.
Maybe job titles will change. Maybe net productivity will improve, or maybe it won't. But the future of AI isn't a simpler world that needs fewer engineers. It’s a world where the hardest engineering problems move up the org chart. Sooner or later, every board will discover that the reliability of its software systems isn’t a technical detail. It’s a business risk.
Mikey Dickerson is the founding administrator of the U.S. Digital Service, a partner at Layer Aleph, and an author of the forthcoming book Crisis Engineering.