When absurdity is mistaken for market prediction

Last week, I saw the following headline: THE 2028 GLOBAL INTELLIGENCE CRISIS, with the subtitle ‘A Thought Exercise in Financial History, from the Future.’

Something felt off. And then came the news coverage.


The result? IBM closed down 13%, DoorDash down 6%, and other companies mentioned in the report fell 4% or more.

For IBM, the drop also coincided with the release of Anthropic’s blog post ‘How AI helps break the cost barrier to COBOL modernization,’ describing how its AI can help modernize COBOL code.

Nonsense and hyperbole. And still not as controversial as the Camel Beauty Show Scandal.

Let's start with the Citrini report.

It begins with the disclaimer “What follows is a scenario, not a prediction” and the assurance that “This isn’t bear porn or AI doomer fan-fiction.” Rather, it is a brainstorming session, a “Thought Exercise in Financial History, from the Future.”

The message from the future, June 2028, is “The unemployment rate printed 10.2% this morning, a 0.3% upside surprise. The market sold off 2% on the number, bringing the cumulative drawdown in the S&P to 38% from its October 2026 highs.” Among other things, the crisis was caused by “… a negative feedback loop with no natural brake. The human intelligence displacement spiral.” All that because of AI.

Once we learn about the gloomy future from this brainstorming session, we are served the next section, “How It Started.”

It starts with “In late 2025, agentic coding tools took a step (sic) function jump in capability” and “A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question ‘what if we just built this ourselves?’”

Right there, at the very beginning of the whole brainstorming session, you open with ambiguity and demonstrate how little you understand the topic.

We can have a spirited discussion about the step-function jump in capabilities, but first we have to define what we are measuring and how. One thing we don't measure is the quality of the output.

The next claim: “Claude Code or OpenAI Codex could now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly or with every edge case handled …” Have you ever built a product and delivered it to your customers? Did you know that the first thing your customers will do is find every imperfection and every edge case? Have you ever seen the release notes of any 'mid-market SaaS product'?

Just with these statements, the Citrini Research people went from brainstorming to braingasm.

The next assertion gets even better: “the CIO reviewing a $500k annual renewal started asking the question ‘what if we just built this ourselves?’”

Did you know that every CIO's first thought upon waking in the morning is 'How can I build better software than our vendors can deliver?' Do you really think a 'competent developer' can, within a few weeks, deliver an enterprise-wide app that can replace the current one?

The naïveté of this line of thinking is breathtaking. Ok, maybe not a few weeks, maybe a few months. Maybe not one competent developer, maybe three. Before you know it, you are building a software development department with an annual cost of $1 million.

And it continues with other equally lunatic statements: “The procurement manager told him he’d been in conversations with OpenAI about having their ‘forward deployed engineers’ use AI tools to replace the vendor entirely. They renewed at a 30% discount. That was a good outcome, he said. The ‘long-tail of SaaS,’ like Monday.com, Zapier and Asana, had it much worse.”

In the words of Marvin the Paranoid Android, it gives me a headache to think down to that level.

And based on these arguments, the Citrini braintrust built the rest of the “Thought Exercise in Financial History, from the Future.” For the economic analysis, please see the Financial Times piece “The ‘extreme and improbable’ economics of Citrini’s AI report,” which concludes that “we should probably worry more about how the market is so jittery that a Substack can trigger a violent rout, than debunking the report itself.”

Following a similar pattern, Anthropic’s post about modernizing COBOL starts with “Legacy code modernization stalled for years because understanding legacy code cost more than rewriting it. AI flips that equation.”

Then we learn that “COBOL modernization differs fundamentally from typical legacy code refactoring. You aren’t just updating familiar code to use better patterns, you’re reverse engineering business logic from systems built when Nixon was president. You’re untangling dependencies that evolved over decades, and translating institutional knowledge that now exists only in the code itself.”

Is the reason why it is different because Nixon is no longer president?

and

“Modernizing a COBOL system once required armies of consultants spending years mapping workflows. This resulted in large timelines and high costs that few were willing to take on.”

Oh, muffin, if only life were that simple. Just read the lines of code to resurrect the institutional knowledge, and you will know exactly which decisions went into writing the code the way it is.
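The gap between code and institutional knowledge is easy to illustrate. Here is a hypothetical fragment of business logic, sketched in Python rather than COBOL for brevity (the function, the constant, and the region code are all invented for illustration). It is perfectly readable, yet it records none of the reasons it exists:

```python
def adjusted_premium(base: float, region_code: str) -> float:
    """Apply the regional surcharge to a base premium."""
    # The code tells you WHAT happens, not WHY.
    # Why 1.175 and not 1.2? Why only region "NE"?
    # A regulatory settlement? A retired actuary's judgment call?
    # None of that is recoverable by reading the code alone,
    # no matter how thoroughly an AI tool analyzes it.
    if region_code == "NE":
        return base * 1.175
    return base
```

An AI tool can summarize what this function does in seconds; the institutional knowledge, the "why," left the building with the people who wrote it.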

But once we deploy AI: “Tools like Claude Code can automate much of the exploration and analysis work described, giving your team the comprehensive understanding they need to plan and execute migrations confidently.” And “The economics of COBOL modernization have shifted. AI makes the economics work by automating what used to require armies of consultants, freeing your engineers to make the migration decisions that require their domain expertise.”

Yes, no need for an army of now-useless consultants and business analysts. AI gives your engineers power and wings.

But even if all these fantasies were true, why would that negatively affect IBM? Do the good people at Anthropic know that these systems run on mainframes? Are they aware of the level of reliability the customers of these systems are used to? Have they considered the fact that a) IBM knows more about COBOL than anyone at Anthropic, and b) it is in IBM’s best interest to help customers improve their legacy code?

The recurrent pattern? Why think and check the facts when the fiction is so exciting?
