Gartner, you did it. Again. Sadly.

In October 2021, I was exposed to a technology prophecy provided by the good people at Gartner. It was the Gartner Insights for IT Executives. I made the mistake of reading it.

It covered 'The technology trends that will drive growth, scale digitalization and act as force multipliers over the next three to five years.' Among the trends were 'Hyperautomation', 'Total Experience' and 'Generative AI', which will generate '... innovative new creations that are similar to the original but don’t repeat it.'

Suffice it to say, I wasn't kind in my assessment. Still, I thought this was just a one-off where the team at Gartner had tried to use AI to write the report.

I was wrong.

Last week, I ended up with Gartner Business Quarterly, the Smart Planning for 2026 edition. One of the articles? 'Use Adaptive Strategic Planning to Succeed Amid Complex Volatility'.

But before I got to that piece, I started reading the foreword penned by the Chief of Research. The foreword made a very, very strong argument for replacing people with AI. To be clear, the Chief of Research was not actually arguing for that; the content itself made the case. It was so bad that I can't comprehend how something like this could pass the editorial process at Gartner.

Let me introduce you to one of its paragraphs so you get a sense of the brilliance of the Chief of Research:

'When I look at business or technical trends, I often like to think of the cycle of entropy and work. Entropy is the natural state of the world, characterized by randomness and disorder. We apply effort to entropy to convert it into work, into a manageable, less chaotic state. But order always wants to revert to entropy, and we have to employ more sense making. Adaptive strategy works this way: We apply enough effort to create manageability within a certain time frame, but our task is never “done.” We do scenario planning to create optionality and resilience. We run simulations, perhaps aided by AI to introduce stochastic influences.'

I didn't even know where to start, so I fired up the hallucination machine, aka ChatGPT, to see what it 'thought' about it. (I didn't want to burn my own brain cells; I wanted to offload the work to Nvidia GPUs.)

First, I asked ChatGPT if it wrote this content.

ChatGPT wasn't sure. It thought another AI tool might have written it.

Then I asked it to identify what was wrong with the content.

ChatGPT started with a generic statement: 'Here are several things that are conceptually off in that statement, mostly because it uses thermodynamic terms metaphorically but incorrectly.'

  • “Entropy is the natural state of the world” - In physics, entropy is not a state but a measure.

  • “We apply effort to entropy to convert it into work” - This is the most incorrect part. We do not apply effort “to entropy” to make work. Work is done by energy transfer.

  • “Order always wants to revert to entropy” - Misleading anthropomorphism. Order doesn’t “want” anything.
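
For the record, ChatGPT's corrections match the textbook physics. A minimal refresher on the definitions it is leaning on (standard thermodynamics, nothing assumed beyond that):

$$dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad S = k_B \ln \Omega, \qquad \Delta U = Q - W$$

Entropy $S$ is a state function, a measure, per the Clausius and Boltzmann definitions respectively, and work $W$ enters through the first law as energy transfer. There is no operation that converts entropy into work.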


ChatGPT went on and on, but you get the idea. It also offered to rewrite the paragraph, and the rewrite was much better. But when you start with a word salad, don't expect to end up with a steak masterpiece. Even AI has its limitations.

Perhaps you think the foreword had some issues, but the rest of the report was awesome… right?

In the words of Gartner: 'Wrong. Today’s uncertainty is new, more complex and multidimensional.' Gartner's vision for navigating this new uncertainty is 'Adaptive strategic planning', because the old process depended 'on a level of certainty that no longer exists.'

To start on this journey from old to new, you have to use the Three Actions framework:

  • AI Scanning - Use AI to continually scan for disruptive events.

  • Strategy Sprints - Use sprints to update strategic planning faster.

  • Increase Participants - Increase participants to make the enterprise more responsive to change.


And this gets us to the sad part of this week's story. The recurrent pattern with Gartner is to sound smart by producing content stuffed with big words. This nonsense is then - and these are numbers provided by Gartner itself - shared with 78% of the Global 500 corporations (Gartner's clients).

Gartner has over 2,900 analysts who engage in 500,000 interactions with end users every year. And there is the Chief of Research, who underwrites that research with his musings about entropy. Thousands of professionals read this and wonder why their efforts are not converting entropy into work ... surely Gartner must know better. And that's why companies pay astronomical fees: to fund Gartner's ability to keep producing nonsense like this.

What's worse, this content, intentionally or unintentionally, ends up as training material for the next generation of LLMs. That's how we ended up with 'vegetative electron microscopy'. And that's a big reason why we can never get rid of the disclaimer, 'ChatGPT can make mistakes. Check important info.'

The recurrent pattern here? It is so easy to blame the technology for not working, but the real blame starts with companies like Gartner and its Chief of Research. Shame on them.
