Field perspective · Part 2

Most “good engineering” was working around a problem AI just solved

For decades, software teams have invested heavily in techniques designed to make rigid code less expensive to change. That investment is now actively wrong for most projects — and the people paying for it usually have no idea.


There is a familiar shape to a software project running late. The team is not slow. They are doing what good engineers do: layering abstractions, writing interfaces in case the implementation ever needs to swap, building configuration systems for flexibility the business has not yet asked for. This is the craft. This is what they were taught.

It is also, in 2026, mostly waste.

Not because those engineers are wrong about their craft. They are not. The techniques they are applying — object-oriented design, building for extensibility, reusability built in advance — were rational responses to a real problem. The problem was that code, once written, was expensive to change. So the industry developed elaborate ways to make change slightly less expensive: abstract the seams, hide the details, parameterise everything you might one day want to vary.

That trade made sense for a long time. It does not make sense in the same way anymore. And the change happened in the last six months.

What code is, and why we built around it

Code is rigid by design. You write a procedure, and it executes the same way every time. There are exactly two outcomes: the right answer, or an error. There is no third state. That rigidity is a feature, not a bug — it is why your bank balance is the same when you check it twice, and why payroll runs on the fifteenth without negotiation.

The cost of that rigidity is brittleness. Change a requirement, change the code. Pivot the business, recalibrate the procedure. So, sensibly, the industry built a discipline around making code less brittle: design patterns, dependency injection, plug-in architectures, in-house frameworks. None of these are about getting the work done. They are all about making the work cheaper to redo.

This was a reasonable investment when redoing the work was the expensive part. AI has made redoing the work cheap.

What’s no longer worth paying for

Three categories of engineering work used to pay for themselves and now generally do not.

Speculative extensibility. The interface with one implementation. The plug-in architecture with one plug-in. The configuration system that nobody configures. Each of these was a small bet that “we’ll be glad we built this when the requirement changes.” With AI, the requirement-change scenario is a few hours of work, not a few weeks. The bet stopped paying off. Edge cases exist — they are edges.
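The "interface with one implementation" is easiest to see in code. A minimal sketch, with invented names (a hypothetical payment module — nothing here comes from a real system):

```python
from abc import ABC, abstractmethod

# Speculative version: an abstract seam built "just in case" a second
# payment provider ever arrives. Today there is exactly one.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> str: ...

class StripeGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> str:
        return f"charged {amount_cents} cents via stripe"

# Direct version: the same behaviour with no seam. If a second provider
# ever shows up, introducing the abstraction then is a small, mechanical
# change -- not the multi-week rework the seam was insuring against.
def charge(amount_cents: int) -> str:
    return f"charged {amount_cents} cents via stripe"
```

Both versions do the same thing today; the first pays an ongoing complexity tax for an option that may never be exercised.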

Boilerplate-driven design patterns. Factories, builders, abstract base classes, and the rest of the cast that exists primarily to manage construction and ceremony. These were workarounds for the tedium of writing object plumbing by hand. AI writes that plumbing in seconds, correctly, in any shape you ask for. Continuing to teach this as virtuous engineering is increasingly hard to defend.
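As a hypothetical illustration of the ceremony involved (the `Report` example is invented for this sketch), here is a hand-written builder next to what the language already provides:

```python
from dataclasses import dataclass, field

# Plain version: the dataclass constructor already manages construction.
@dataclass
class Report:
    title: str
    rows: list = field(default_factory=list)

# Ceremony version: a builder whose only job is object plumbing --
# exactly the kind of code AI now writes (or makes unnecessary) in seconds.
class ReportBuilder:
    def __init__(self) -> None:
        self._title = ""
        self._rows: list = []

    def title(self, title: str) -> "ReportBuilder":
        self._title = title
        return self

    def add_row(self, row: str) -> "ReportBuilder":
        self._rows.append(row)
        return self

    def build(self) -> Report:
        return Report(self._title, self._rows)
```

Both produce the same object; one of them is forty lines of construction management for a two-field record.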

Heavy upfront documentation as a substitute for legible code. The case for this used to be that complex code was unavoidable, so we’d document the design carefully for the future. The new calculus: write code clean enough that AI can explain it on demand to anyone who asks. Code is the source of truth; AI is the reader. Narrow exceptions remain — infrastructure that AI can’t easily inspect, for instance — and those exceptions are shrinking.

The reusability problem is different

One failure mode worth separating out, because it has a different cause: the heroic attempt at reusability across teams. The shared component library that nobody adopts. The internal platform that no other team trusts. The “common services” project that quietly dies after eighteen months.

This one has nothing to do with AI. It has been failing for the same reason all along: Conway’s Law, the well-known observation that the systems an organisation builds end up mirroring the way the organisation communicates. If two teams are organised separately, they will develop separate processes — and a “shared” component that crosses that organisational boundary is fighting reality.

Reusability works when the people benefiting from it are the people building it, and they are organised to collaborate. It does not work when one team builds something and hopes others will adopt it. AI does not change this. The cost of not reusing has dropped, which makes the failure mode louder, but the underlying issue is organisational, not technical.

Future-proofing exists because change used to be expensive. Change is now cheaper. So less future-proofing is needed — and what remains looks different.

What still matters

It is worth being clear about what hasn’t changed, so the argument doesn’t overreach.

  • Domain modelling — knowing what your business actually does — still requires human judgment, and AI cannot do it for you.
  • Determinism still matters wherever errors are unacceptable: money, identity, compliance, audit trails. These need rigid, well-tested code, not AI in the loop.
  • Architectural boundary decisions — what’s a service, where the database sits, what the contracts look like — still concentrate enormous leverage, because AI can rewrite a function in seconds but cannot easily migrate you off the wrong database.
  • Product judgment — knowing what is worth building at all — is untouched.
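The money case deserves a concrete sketch. This hypothetical invoice-splitting function (names and interface invented here) shows the kind of rigid, deterministic code that should stay hand-written and well-tested: exact decimal arithmetic, the same answer every run, and parts that always sum to the total:

```python
from decimal import Decimal

def split_invoice(total: Decimal, ways: int) -> list[Decimal]:
    """Split a money amount so the parts always sum exactly to the total.

    Uses integer cents internally -- no floating point, no rounding drift.
    """
    cents = int(total * 100)
    base, remainder = divmod(cents, ways)
    # Distribute the leftover cents one at a time so nothing is lost.
    parts = [base + (1 if i < remainder else 0) for i in range(ways)]
    return [Decimal(p) / 100 for p in parts]
```

A float-based version would occasionally be a cent off; this one cannot be, and that guarantee is precisely what no probabilistic system should be trusted to provide.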

Future-proofing has not gone away either, but there is less of it to do. It existed in the first place because change was expensive — every team built defences against requirements they could see coming. Now that change is cheaper, those defences cost more than they save. The future-proofing that remains is mostly about keeping things simple and legible, so that AI plus a human can rewrite cleanly when the future arrives. You are still future-proofing. You are just doing less of it, and doing it differently.

The new failure mode is fast

There is an obvious objection to all of this: if engineers move faster with AI, won’t they make more mistakes? Yes. They will.

But the mistakes are smaller, because the cycles are smaller. The old failure mode was a team spending two years building the wrong abstraction. The new failure mode is a team spending two weeks building the wrong abstraction — and then spending two weeks correcting it, rather than another two years. The mistakes become recoverable in a way they weren’t before. Fast wrong is followed by fast right.

This is a different shape of risk than most organisations have learned to manage. It rewards iteration and punishes commitment. It rewards leaders who can recognise a wrong turn quickly and pivot, and punishes those who insist on plans surviving contact with reality.

What this means for the people paying the bills

The practical question for anyone running a software-dependent business is no longer “are we using AI?” Most teams are. The question is whether the engineering investment underneath them still makes sense in a world where rewriting code is cheap.

Useful things to look at:

  • Where is the team building flexibility into the system that the business has never actually used?
  • Where is the codebase complex because it was written to be reusable, but only one team uses it?
  • Which abstractions exist to solve a problem that no longer exists at the same intensity?
  • Where is the team writing more code than they need to, because the patterns they were taught assume change is expensive?

None of these are easy to answer from inside the team that wrote the code. The patterns feel like good practice from where they’re standing. They were good practice. The world changed underneath them, and the change was recent enough that it does not yet feel real.

It is real. And the cost of not adjusting is paid every quarter, in budget that bought options nobody used.

The argument is not that AI replaces software engineering. It clearly does not. The argument is narrower and sharper: a specific category of engineering investment — the elaborate techniques built to compensate for code’s rigidity — has stopped paying for itself. Identifying which parts of a stack still need that investment, and which parts were over-built for a flexibility that never arrived, is now one of the more valuable things a team can do.

Of course, AI is not useful if it doesn’t know your business. That is a longer and more interesting conversation, and one for another time.

If you’re trying to work out where this applies to your own systems, let’s talk.