Mr. Altman’s singularity delusion

If you thought all the crazy news about AI was over, this week proved you wrong.

It started with the founders at Mechanize, who declared that they are '... building artificial intelligence tools to automate white-collar jobs "as fast as possible"', and who believe that '... A.I. would eventually create "radical abundance" and wealth that could be redistributed to laid-off workers, in the form of a universal basic income that would allow them to maintain a high living standard.'

That turned out to be only the prelude to something singularly crazy that happened on June 11th. Let’s start with the headlines:


For your unfiltered reading, here is the original post itself. Read at your own risk. Similar to ChatGPT, it is full of fantasies and hallucinations that border on complete delusion.

Of course, the whole piece starts with 'We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.'

Another paragraph reads 'And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.'

So we have a system that is smarter than people in many ways.

For one of my projects, while testing different versions of a prompt, I uploaded a JSON file to OpenAI’s GPT-4-turbo, and posed a question.

Human: How many document id's are in the attached file?
ChatGPT: The attached file contains 5 objects, and each includes a unique "id". Therefore there are 5 document IDs in the file.
Human: there are more objects than 5
ChatGPT: The attached JSON file contains 50 items, and there are 50 unique document IDs.

Mr. Altman, is this what you call 'close to building digital superintelligence'? Has your company really 'built systems that are smarter than people in many ways'?

Further down the text we learn that 'In some big sense, ChatGPT is already more powerful than any human who has ever lived.' I understand that the hyperbole is meant to emphasize the monumental future ahead of humanity, but referring back to the example above: how big is the achievement when ChatGPT, already more powerful than any human, can't tell the difference between 5 and 50?

Now that we know what we have, it is time to move to real predictions.

'2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.'

The year 2025 has seen many interesting things and proofs of concept, but if these systems can't tell the difference between 5 and 50, then one statement is certainly correct: 'writing computer code will never be the same'. #irony

And then we are exposed to text that would send ChatGPT itself into a cognitive crash:

'In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.'

and

'But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before.'

What? I am confused. Will I still love my family in 2030 but not swim in the lake?

When talking about AI, one can't help but mention robots. SuperIntelligence in the cloud needs its own legs.

We learn that 'If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.'

Obviously.

All this sounds so good that it is hard to resist this awesome future.

But what will it mean for people like you and me? Not to worry, Mr. Altman has a plan.

'The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.'

The bad news: you will lose your job and get poor. The good news: the world around you will get richer. And if you wait a few decades, it will all have amounted to something big.

To cover all the bases in case things don’t happen as we imagine they will, we get cautioned.

'There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications.'

First, we have to 'Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term.'

Second, we have to '... focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country.'

This masterpiece closes with a few final thoughts:

  • 'We (the whole industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas.'

  • 'Intelligence too cheap to meter is well within grasp.'


There you have it. A prophecy by a CEO who envisions an apocalyptic future with a superintelligence that will somehow obey some of us. The same CEO runs a startup with an incoherent strategy, a startup that is losing money faster than it can raise it, selling a product that can't tell the difference between 5 and 50, with the disclaimer 'ChatGPT can make mistakes. Check important info.'

The recurring pattern? None of these people ever considers that the same future they envision for others will apply to them with equal vengeance. Maybe the new adage will be 'like Saturn, AI devours its children'.
