OpenAI Fires First In The Large Model Competitive Wars
What’s going on? Over the last few weeks, OpenAI has made several moves to develop a competitive moat around its models.
Why is it important? It’s a blueprint for differentiating model platforms when the models themselves cannot be a source of competitive advantage. Every business needs to develop a platform strategy, and this helps showcase one dimension of the competitive landscape.
Pricing Power
OpenAI dropped its prices this week for the second time in a year by launching a turbo version of GPT-3.5. The low-cost model is less capable than the full-size models, but it’s functional for most use cases, and OpenAI recommends it when users don’t need best-in-class chat reliability.
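To make the scale-versus-margin argument concrete, here is a back-of-the-envelope sketch of what a price drop of this magnitude means for a developer's bill. The per-1K-token figures are the launch-era list prices (and may be out of date), and the monthly volume is a hypothetical number chosen for illustration.

```python
# Illustrative comparison of per-token list prices (USD per 1K tokens).
# Launch-era figures; check current pricing before relying on these.
DAVINCI_PER_1K = 0.02   # text-davinci-003
TURBO_PER_1K = 0.002    # gpt-3.5-turbo

def monthly_cost(tokens: int, price_per_1k: float) -> float:
    """Cost in USD for a given monthly token volume."""
    return tokens / 1000 * price_per_1k

volume = 50_000_000  # hypothetical monthly volume for a mid-size app
print(f"davinci: ${monthly_cost(volume, DAVINCI_PER_1K):,.2f}")
print(f"turbo:   ${monthly_cost(volume, TURBO_PER_1K):,.2f}")
```

An order-of-magnitude price cut like this turns a meaningful line item into a rounding error for many workloads, which is exactly what makes adoption easier and entry by competitors harder.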
They also announced APIs for ChatGPT and Whisper, and again the price drops were front and center. Whisper is their speech recognition model, and by offering the two together, they hope developers will find new use cases.
OpenAI is using pricing power to make the market too expensive to enter, leveraging its model development and training costs as a moat and competitive advantage.
On one hand, lower prices compress OpenAI’s margins, but the company is making it up with scale. Cheaper to use means easier to adopt. It’s an intelligent path to rapidly gaining market share. This aligns well with what I teach in my strategy class about entering a market and rapidly scaling to seize as much of the opportunity as possible before fast-followers enter.
This is a new twist on that theme. OpenAI is leveraging pricing power to make its market less enticing to enter and to push current competitors into a race to the bottom before they are ready for one. Lower prices make those competitors’ path to profitability much harder.
Lower margins also leave competitors with less money to fuel their innovation cycles. OpenAI is forcing startup competitors to turn to a tight VC market to grow or improve their models. Less revenue and low margins will force valuations down. Each round will take more equity from the startup. It’s an excellent move for several reasons.
The more OpenAI can optimize its models, the better its margins will be on existing customer spending. Each workload moved to OpenAI’s platform represents new consumption-based revenue, and every model optimization widens the margin on that revenue.
Anthropic is focused on cost savings at a different phase of the model training lifecycle. It has introduced Constitutional AI, which reduces the need for human feedback and labeling. That will drive its costs down in the long run and allow it to compete on more than hardware-based optimizations.
Overhead cost reductions can happen at multiple phases. Hardware and model optimizations are the most obvious and easiest. However, data acquisition and labeling costs could provide even more room for savings. Few companies outside these early movers will master this, so it will be a powerful moat. There are significant challenges still to be solved around human-in-the-loop components and data labeling.
The knowledge and internal expertise being developed inside OpenAI, Anthropic, and Hugging Face are also significant barriers. Learning how to solve the thousands of problems, big and small, that come with building large models is highly valuable. For the next 2-3 years, few companies will do the work necessary to duplicate that internal domain expertise.