AI and its pricing conundrum
Time to focus on business. That's the message we get when reading “OpenAI to Cut Back on Side Projects in Push to ‘Nail’ Core Business” in The Wall Street Journal.
Maybe you recall the post “Panic in OpenAI land,” written after Mr. Altman declared a 'code red' last year when Google released its latest version of Gemini. The feeling that ChatGPT is no longer the coolest kid on the block slowly started to sink in. And then there’s Anthropic, which - setting aside the AI welfare researcher nonsense - is focused on the software development community.
OpenAI had the luxury of being first to market with ChatGPT. It captured the imagination of millions and investor dollars in the billions. Despite all the money, time and resources, it still hasn't figured out what to do with this new, cool technology. Many attempts, many tests: from a job platform, AI search, and AI shopping to erotica and advertising.
None of it can bring the required billions, and none of it can bring the billions fast enough.
The message from Fidji Simo, OpenAI’s CEO of Applications, is clear: “We really have to nail productivity in general and particularly productivity on the business front.”
It's not that OpenAI didn't try to go after the enterprise market before. The company hired salespeople and built the team. But what exactly were they selling? Enterprise access to ChatGPT, or access to the ChatGPT API? Any other product?
And this is the problem for OpenAI (and other LLM companies) - what exactly are you selling and what is your business model?
Everyone is familiar with the pricing for ChatGPT.
You have your Free plan. According to OpenAI, ChatGPT reached 900 million weekly active users and also has 50 million paying subscribers. That's less than 6%.
Here is a good study about conversion rates for SaaS and AI companies, and 6% is good but definitely not great. Next to the Free plan, you have Go, Plus and Pro, where you get charged a monthly subscription fee.
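The conversion figure above is a simple back-of-the-envelope calculation from the two numbers OpenAI has disclosed:

```python
# Conversion rate from the publicly quoted figures:
# 900 million weekly active users, 50 million paying subscribers.
weekly_active_users = 900_000_000
paying_subscribers = 50_000_000

conversion_rate = paying_subscribers / weekly_active_users
print(f"Conversion rate: {conversion_rate:.1%}")  # → Conversion rate: 5.6%
```

Just under 6%, as stated above.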
The product description for each tier leaves something to be desired. For Free you get 'Limited messages and uploads', Go is 'More messages. More uploads', Plus is 'Expanded messages and uploads', while Pro is 'Unlimited GPT-5.4 and file uploads'. Easy choice. Imagine any other company providing this level of information.
What's more important, OpenAI is losing money on all of these tiers, especially on the most expensive one.
How is it possible that a technology which is portrayed as the replacement for humanity, the soon-to-be-super-intelligence, used by 900 million people on a weekly basis, can't attract paying customers? And when it does, the company is still losing money.
It would appear that OpenAI has two problems - existential problems - it doesn't have a product and it doesn't have a profitable business model. Despite that, OpenAI predicts it will be cash flow positive by 2029 or 2030. Are they going to make up the difference in volume?
To add to the confusion, last year the CFO of OpenAI mentioned a few interesting things - “I know if I had to get to break-even, I have a healthy enough margin structure that I could do that by pulling back on investment.” and “OpenAI’s enterprise sales, ... now account for roughly 40% of revenue, up from 30% at the beginning of the year.” (2025)
How could she claim that there is a “healthy enough margin structure” while losing money on every customer?
Perhaps the healthy margin structure comes from enterprise sales, specifically from selling API access. OpenAI claims that it now generates $1 billion a month from it.
That's bad news for OpenAI. Providing API access to its models makes OpenAI one of many companies, with Google, Anthropic, Meta, xAI, AWS and the Chinese labs as direct competition. Not only is it easy to switch from one model to another, but more importantly, you switch from a subscription model to a pay-per-token usage model.
Prices vary wildly, from $0.035 to $150 per 1,000,000 tokens depending on the model. That presents another set of problems. For the seller, the pricing is tied to model capabilities and the cost of running the model. For the buyer, it creates unpredictability around usage. You have no idea how many tokens you will need for any particular task. You might need 2 million, but when you slightly adjust your prompt, you might need more. How many more? It might be 'limited', 'more', 'expanded' or 'unlimited'.
If OpenAI goes after the enterprise market, they will encounter the word 'budget'. How can one establish a budget without understanding the usage?
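To see how wide the budget uncertainty gets, here is a minimal sketch. The two price points are the per-million-token bounds quoted above; the token estimates are hypothetical illustrations of a task before and after a prompt tweak:

```python
# Cost range for one task when both the model price and the token
# count are uncertain. Prices are $/1M tokens from the range above;
# the token-usage scenarios are hypothetical.
PRICE_PER_MILLION = {"cheapest model": 0.035, "priciest model": 150.0}
TOKEN_ESTIMATES = {"initial prompt": 2_000_000, "tweaked prompt": 10_000_000}

for model, price in PRICE_PER_MILLION.items():
    for scenario, tokens in TOKEN_ESTIMATES.items():
        cost = tokens / 1_000_000 * price
        print(f"{model}, {scenario}: ${cost:,.2f}")
```

Under these assumptions the same task costs anywhere from a few cents to over a thousand dollars. That spread is exactly why an enterprise buyer cannot write a budget line for it.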
The good news here is that OpenAI wouldn't need a product. The bad news is that it would compete on price in a commodity market where 'good enough' is good enough. You are suddenly competing with 'free' - Meta's Llama 3.2, Mistral AI models, the Qwen series by Alibaba, and NVIDIA's Nemotron. These models enable data privacy, customization, and cost savings. Please note the last company, NVIDIA, the maker of the most powerful AI chips.
The next business model is to price the output from ChatGPT based on value. The holy grail of pricing. Could OpenAI (and others) price their models based on the value which the output delivers?
That will start a very interesting conversation. For the buyer, the goal is to calculate the value it puts on an individual task. For the seller, it's about the ability to meet the price point, and most importantly, the seller will have to meet an explicit SLA - “A Service Level Agreement (SLA) is a documented contract between a service provider and a customer that defines the expected level of service, including quality, availability, and responsibilities.”
The quality will be the biggest problem of all. You are well aware of this disclaimer “ChatGPT can make mistakes. Check important info” and you might remember my last post about Anthropic using its own AI to try to find problems which its own AI created in the first place when writing software.
In order to achieve some level of predictable quality, the use cases will have to be heavily constrained, as outlined in the post “The cognitive dissonance of the Co-World.” If that is the case, OpenAI is in even bigger trouble. It would have to identify use cases where the product delivers billions in revenue, in a market big enough to pay for it, with minimal competition. If you know one, I am sure Mr. Altman of OpenAI is all ears.
The recurrent pattern? Almost every post I've written about OpenAI has ended by highlighting the dismal state of OpenAI's strategy. Too many distractions, not enough focus. Its latest attempt to identify a product that would bring revenue and profitability is commendable. But I get the feeling there is not much OpenAI can do to reverse its fortunes.