Why do we think that software is different?

Do you remember the scene from Terminator 3 when Skynet was activated? It was activated because many systems appeared to be under attack and failing, and the only chance to restore order was to enable software so complex that nobody knew how or why it worked.

That scene flashed in my mind when I was reading an article in TechCrunch: 'Anthropic launches code review tool to check flood of AI-generated code.'

Let me quote directly from the article:

“The rise of ‘vibe coding’ — using AI tools that take instructions given in plain language and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.”

and

“‘We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?’ said Cat Wu, Anthropic’s head of product.”

Note: a pull request is a notification that a software developer has completed a code change, which is then ready to be reviewed, tested, and merged into the main branch of the code.

Now that developers have at their disposal a technology that can generate code orders of magnitude faster and in far greater quantity (and, importantly, while introducing more problems), how do we solve that problem?

Of course with more technology! A technology based on the same code as the technology producing the code in the first place!

One could ask: would it be possible to use this new miracle while the new code is being generated?

Where will this lead? To an enormous amount of code generated by unreliable technology and verified by that same technology. It will be impossible for any human to validate, understand, or fix the errors the technology introduces.

In Terminator, Skynet is portrayed as a self-conscious entity that takes over. What we are building here is sheer stupidity: we are intentionally building a system that nobody understands, neither what it does nor why.

The next line from the article is also telling:

“The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system will label the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to preexisting code or historical bugs.”

To translate into English: AI generates code, AI checks the code, identifies bugs, and provides suggestions to humans for how to fix them. #chokingonirony

But another question came to my mind. Why is it that everything else we call 'engineering' (buildings, machinery, bridges, airplanes, and so on) is subject to rigorous processes with checks, controls, and personal responsibility, while in software we suddenly care only about volume and speed? Quality is never a measurement, and over the years there has in fact been no expectation that any software will ever work. It is explicitly stated in every software agreement, which you never read, yet you always pressed the button 'I Agree'.

The recurrent pattern? This story perpetuates the myth that any problem with current technology can only be resolved with another, newer technology. And that is how Skynet will become Skynet. Not as some superintelligence with the aim of taking over humanity, but as a reflection of our stupidity.
