The CEO of OpenAI says something for the history books

The year was 1874. Max Planck was told by his professor that there was not much left to do in theoretical physics. Fast-forward to 1943, and Thomas Watson reportedly predicted that the world wouldn't need more than five computers. More recently, in 1981, Bill Gates is said to have questioned the need for more than 640 KB of memory in a PC.

Now, in 2023, a few years into building Large Language Models, OpenAI's CEO, Sam Altman, declares that there won't be a GPT-5, since building bigger and bigger models is not the way forward. There will be better ways to improve model performance.

Who in their right mind would argue with Mr. Altman, or with Messrs. Watson and Gates? Their predictions were made in the context of the knowledge available at the time and of their vision of the future.

Training the latest GPT-4 model cost more than $100 million. That would buy about 666 P3-class servers from Amazon for one year, going by their price list (and I am sure Amazon would throw in a few extra servers for an order of this size). The point is that as much as $100 million sounds like a lot, once you translate it into actual hardware, it doesn't appear gigantic at all. And this is only an exercise: the cost above is not just raw computing power. You also have storage to account for, people assembling training data, and so on.
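For the curious, here is a minimal Python sketch of that back-of-the-envelope arithmetic. The hourly rate is an assumption chosen for illustration; actual AWS P3 on-demand prices vary by instance type and region.

# Back-of-the-envelope: how many cloud GPU servers does $100M rent for a year?
# ASSUMPTION: the hourly rate below is illustrative only; real AWS P3
# on-demand prices vary by instance type and region.
BUDGET_USD = 100_000_000
HOURLY_RATE_USD = 17.14      # assumed price per P3-class server per hour
HOURS_PER_YEAR = 24 * 365

cost_per_server_year = HOURLY_RATE_USD * HOURS_PER_YEAR
servers = BUDGET_USD / cost_per_server_year
print(f"~{servers:.0f} servers for one year")  # prints "~666 servers for one year"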

Another perspective can be gleaned from an article in USA Today: the HP 3000, a computer sold in 1972, cost $95,000. Today, your microwave has more computing power.

Companies like Alphabet, Apple, Intel, Nvidia, AMD - the list is long - are racing to build better, faster, bigger processors that can handle more calculations than today's. That's a recurrent pattern you can bet on. Every year the underlying infrastructure will allow us to train bigger models. Will there be better models in the future? Of course! (Hint: another recurrent pattern.) But these will keep growing in size and scope, and they will make GPT-4 look like a kindergarten toy.

Then there are the new (and old) companies competing with OpenAI. It's unlikely they will stop pushing for bigger and better just because Mr. Altman says so. These companies are fast and furious in trying to out-compete OpenAI.

Where the current approach of simply growing GPT's size falls short is in the quality of the data used for training, and in the fact that GPT is a Language Model, not a Knowledge Model. Those are far more difficult problems to solve, and brute force won't help.

The final recurrent pattern? Never make predictions based on your understanding of the current state of technology and its possibilities. The next breakthrough is always around the corner, and you will always be wrong, like the people who said there wouldn't be another Fast & Furious movie.
