Testing in the Age of AI: More Critical Than Ever

AI writes the code, so we need fewer developers. And if we need fewer developers, the thinking goes, we probably need fewer testers too. The software just…works.

That story is wrong. Not slightly wrong. Fundamentally and dangerously wrong.

AI is changing how software gets built. It is not changing the fact that software is used by humans, that humans make mistakes, that systems fail under pressure, and that every new line of code — whether written by a person or generated by a model — introduces new ways for things to go wrong. If anything, the rise of AI-generated software makes rigorous, human-led testing more important, not less.

AI Makes Testing Better — It Does Not Make It Optional

None of this is an argument against using AI in testing. AI is already making testing faster and more thorough. It can generate test data at scale, suggest regression suites, identify patterns in defect history, and reduce the mechanical burden on testers so they can focus on higher-order judgment. That is genuinely valuable.

What AI cannot do is replace the human responsibility for quality. It cannot sign off that software is ready for real users. It cannot tell you that the edge case your most experienced tester identified from intuition is the one that will cause an outage at 2am on a settlement date. It cannot be accountable for what ships.

People are. And people need structured, well-resourced, properly executed testing disciplines to fulfill that accountability — at the speed AI demands, and at the scale AI enables.
What I have learned over a 30-plus-year career is that testing rarely gets the attention and priority it deserves. Though essential, it is the easiest step to skip or shortcut — and the most difficult to get right. At the enterprise level, testing thoroughly and consistently across systems on different platforms, with different business sponsors who have varying agendas and timelines, is extremely difficult. AI can and should help. But without the proper discipline and approach, the speed and power of AI will only exacerbate the challenge.

Whether we like it or not, AI is here, and it is here to stay. If we harness it correctly, the power and speed it gives us will be game-changing. The challenge will be keeping up with it — especially for large companies whose legacy systems are integrated across the enterprise. Testing approach and discipline will be core to our success: we will master testing at the enterprise level, or we will fall behind our competitors.

Coming Soon…

This is the first in a series of posts about the discipline and approach required for successful enterprise testing in the context of our new AI development paradigm. The posts that follow dig into the specific areas where human testing judgment is irreplaceable: correctness verification, managing speed and volume, usability, edge case coverage, and load testing. They then build on those foundations with a practical framework for structuring test environments at the enterprise level.


Posted on agilish.net — Enterprise Testing