Check it out. Here’s the deal: every time you ask ChatGPT a question, its parent company OpenAI has to shell out about 4 cents, which is roughly 0.29 Chinese yuan. That might seem like chump change, but what happens when you multiply it by 13 million?
According to a report by UBS Group, ChatGPT had a whopping 100 million monthly active users in February, with around 13 million unique visitors every day. Let’s simplify for a sec. If each user asks just one question per day (and only one), ChatGPT’s daily operational cost would be at least 520,000 USD. But hold on, an analyst cited by Insider estimated it could actually run as high as 700,000 USD per day.
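The back-of-the-envelope math works out like this (a minimal sketch using the figures quoted above; the per-query cost and visitor count are the article’s reported estimates, not measured OpenAI data):

```python
# Rough estimate of ChatGPT's daily serving cost from the reported figures.
cost_per_query_usd = 0.04    # reported cost per question (~0.29 CNY)
daily_users = 13_000_000     # unique daily visitors, per the UBS report
queries_per_user = 1         # assumption: one question per user per day

daily_cost = cost_per_query_usd * daily_users * queries_per_user
print(f"${daily_cost:,.0f} per day")  # → $520,000 per day

# At the higher $700k/day estimate, the annualized bill is striking:
annual_cost_high = 700_000 * 365
print(f"${annual_cost_high:,.0f} per year at the high estimate")
```

Even at the conservative $520k/day figure, that’s nearly $190 million a year just to answer one question per visitor, which is why hardware costs dominate the conversation below.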
Now, here’s the kicker. Hardware expenses play a major role in the operational costs of AI companies. Chips are vital for the tech industry, especially for AI companies, so these firms are forced to bear the burden of exorbitant hardware costs. And if they want to break free from that situation, the only way out is to develop their own chips.
Creating chips has become a crucial consideration for AI companies as they grow. According to Reuters, OpenAI is exploring the possibility of developing its own custom chip specifically designed for AI workloads. The biggest advantage of this move? It can meaningfully reduce AI operational costs. Take Google, OpenAI’s biggest competitor in the AI field: they reportedly plan to drop Broadcom as an AI chip supplier by 2027, and that move alone could save Google billions of dollars every year.
Besides cost reduction, another reason OpenAI has decided to dive into chip development is to break free from dependence on other companies.
Right now, the AI chip market is dominated by Nvidia, which holds around 80% of the global AI chip market. But with the rapid growth of AI, demand for chips is skyrocketing, and Nvidia’s supply capacity is struggling to keep up. Even if Nvidia increases production of its H100 chips to 1.5 to 2 million units by 2024, it won’t be enough to bridge the supply-demand gap. OpenAI CEO Sam Altman has even publicly complained that Nvidia chip shortages have been slowing down the company’s expansion.
But let me tell you, chip-making is no easy task. It requires a significant investment of both time and money. Alex White, General Manager of the AI company SambaNova Systems, put it this way: “Designing and manufacturing chips is not something that can be done overnight. It requires a lot of expertise and increasingly scarce resources. OpenAI spent over five years developing GPT-4. If hardware takes a similar amount of time, I wouldn’t be surprised.”
Even after pouring in manpower, money, and time, there’s no guarantee that OpenAI will successfully develop a chip that meets its requirements. That’s the challenge of developing your own silicon. So the best way to cut costs and raise the odds of success is to acquire an established chip company, and that’s exactly what OpenAI is considering. They are currently conducting due diligence on potential acquisition targets, but the specific companies involved are still unknown.
OpenAI is not the only one, though. Meta and Microsoft have also put years of effort into chip development.
According to Reuters, Meta has built custom chips but ran into issues that forced it to abandon some of its AI chips. Still, Zuckerberg remains committed to developing a newer chip that can support all types of AI work under Meta’s umbrella.
Microsoft’s progress, on the other hand, seems smoother. The company has been quietly developing an AI chip codenamed Athena since 2019, and reports suggest Microsoft plans to use it more extensively within Microsoft and OpenAI as early as next year. Microsoft insiders say they don’t expect Athena to replace Nvidia chips in the short term; the main goal is to reduce dependence on Nvidia.
That’s why giants like OpenAI and Microsoft don’t want to rely on others; reducing dependence on outside suppliers is one of their core reasons for venturing into chip development. Reuters speculates that OpenAI’s consideration of building its own chip is the latest sign of a split from its partner, Microsoft.
Now, let’s be real: OpenAI’s chip plan is still in the early stages, and it’ll be years before the market sees the fruits of that labor. Until then, OpenAI has only two choices: pay a hefty price for Nvidia chips or lean heavily on Microsoft. It’s definitely an interesting time in the AI hardware game. Keep your eyes peeled and stay tuned!