While the Economic Times headline about OpenAI ("Experts predict bankruptcy") was certainly attention-grabbing, a deeper look reveals that the company has simply seen a drop in ChatGPT's user numbers. That drop, combined with the high costs of training OpenAI's models, leads some to conclude that "OpenAI may go bankrupt by 2024."
Should you fear that ChatGPT, the service powered by OpenAI's models, will disappear? No, quite the opposite. ChatGPT has rapidly evolved into a prominent consumer service; no other application has grown at such a pace. It is here to stay, and so is OpenAI. Microsoft sees value in OpenAI as a tool to compete with Google in the AI race and will likely continue funding OpenAI's server costs to keep its reclaimed competitive edge. (Many of us here in Silicon Valley had almost forgotten about Microsoft.)
While bankruptcy might be off the table, the news doesn't surprise me given OpenAI's underlying business model. Selling large language models does not create a business moat. As Investopedia describes it, a moat is a company's "ability to maintain competitive advantages over its competitors to protect its long-term profits and market share." Computer chip manufacturing provides a good example: establishing a 5nm process costs billions (roughly $4-$5 billion in R&D and about $40 billion for a production facility). Those costs create a barrier to entry for others. Once established, a company can dominate that market segment for a considerable time. The company has a moat.
Generative AI models, or LLMs, are expensive to train. Guido Appenzeller estimates that GPT-3 training costs "range from $500,000 to $4.6 million, depending on hardware assumptions." Those costs, however, don't create a business moat:
(1) Training Costs Are Decreasing:
The cost of training an LLM depends on the amount of training data and the number of model parameters. As the growth of training data plateaus and software optimization improves, the cost of training these models is expected to decline. That means competitors will be able to match OpenAI with similar models at lower R&D cost.
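To see why the numbers above are plausible, here is a back-of-envelope sketch using the widely cited heuristic that training compute is roughly 6 FLOPs per parameter per token. The GPU throughput, utilization, and hourly price below are illustrative assumptions, not quoted figures:

```python
# Back-of-envelope LLM training cost estimate.
# Heuristic: training FLOPs ~= 6 * parameters * training tokens.
# Hardware figures are illustrative assumptions for a modern datacenter GPU.

def training_cost_usd(params, tokens,
                      peak_flops_per_sec=312e12,  # assumed ~312 TFLOPS peak per GPU
                      utilization=0.4,            # assumed realistic fraction of peak
                      usd_per_gpu_hour=2.0):      # assumed rental price per GPU-hour
    """Rough training cost estimate from model size and token count."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (peak_flops_per_sec * utilization)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# A GPT-3-scale model: 175B parameters trained on 300B tokens.
cost = training_cost_usd(175e9, 300e9)
print(f"Estimated training cost: ${cost:,.0f}")
```

Under these assumptions the estimate lands in the low millions of dollars, inside the $500,000 to $4.6 million range Appenzeller quotes. Note how sensitive the result is to the hardware inputs: cheaper GPU-hours or better utilization directly shrink the cost, which is exactly why this barrier to entry keeps falling.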
(2) Economic Value Isn’t Inherent To Algorithms:
Having trained an LLM does not mean economic value has been created. At its essence, an LLM is a machine learning algorithm designed for sequence-to-sequence prediction. It is one new tool next to the many other machine learning tools we have today. There are use cases where an LLM is uniquely positioned to be helpful, for example, text generation. I could have this paragraph rewritten as a poem. But what is the value of that?
In the realm of tech, where knowledge does accrue,
Training an LLM doesn’t always add value.
Its essence, you see, is quite plain to deduce,
For sequence-to-sequence, it’s the tool we produce.
Amongst many tools that in AI do lay,
LLM stands out in its own unique way.
For generating text, it’s a star in the night,
Crafting words and tales, bringing ideas to light.
Yet, this paragraph, once prose, now rhyme,
Transformed by LLM in such little time.
But what value lies in this poetic act?
Is it mere novelty, or a profound impact?
Okay… point made.
(3) Economic Value Is Based On The Application Alone:
As I wrote in my book Ask, Measure, Learn, algorithms become valuable when they influence actionable outcomes. For example, through our predictions at Google and Marpai Health, we aimed to reduce avoidable hospitalizations. The algorithm alone is often important but not sufficient for value creation; what matters is the integration of the algorithm into a product. We have known this since Jeff Hammerbacher and DJ Patil coined the term "data science," and the insight holds just as true for the new wave of large language models.
The main value creation we see from OpenAI's models comes from (a) ChatGPT, (b) the various Copilots, and (c) the OpenAI API integration with Microsoft. It is too early to judge the financial success of the Copilots and the APIs. For ChatGPT, however, rumor has it that the cost of running the service exceeds its revenue. OpenAI therefore appears to have scaled back its resources, leading to a decline in ChatGPT's perceived 'smartness.'
No Model Moat, But A 100-Million-User Moat
With decreasing training costs and the pivotal role of the application, investments in large language models are hard to safeguard. New algorithms flood the market regularly, and open-source versions offer viable alternatives to OpenAI's proprietary models. The leaked Google memo put it bluntly: "We Have No Moat, And Neither Does OpenAI." Meta's open-sourcing of LLaMA, a 65-billion-parameter large language model, only cemented this dynamic.
In conclusion: yes, OpenAI's business model should give us pause. But remember, ChatGPT currently has over 100 million users, and there is much one can do to build a moat and a sustainable advantage: user experience, data access, workflow integration, LLM governance, and regulatory protection. Read more about those in "The Potential Moat of OpenAI – How OpenAI Can Secure Its Investments."
Lutz Finger has built data products for LinkedIn, Google, Snap, and Marpai Health. He teaches “Designing Data Products” and AI Strategy at Cornell’s Johnson School of Business. Views are his own.
Follow me on Twitter or LinkedIn. Check out some of my other work here.