This came up at the end of the last office hours. How do you price a Generative AI product? Several key aspects of GenAI make pricing extremely difficult, but you don’t find out until you’ve made a few mistakes.
LLMs aren’t just expensive to train; serving each request is expensive, and the costs add up. Some GitHub Copilot users reportedly cost over $80 a month to serve, and the company lost money on most subscriptions.
Customers want stable pricing. Consumption-based pricing strategies have been a bust, while subscriptions have attracted the most adoption.
Customers want GenAI features but aren’t always willing to pay more for them. Many GenAI features improve existing functionality, so the company incurs the development and serving costs but doesn’t generate any incremental revenue.
In this article, I’ll explain a straightforward framework for LLM and GenAI pricing. It’s a combination of product architecture and innovative pricing models.
A Clear Path Forward On GenAI Product Pricing
Let’s start with a core realization that I, and most companies working with LLMs, have come to: the unit economics of using LLMs for everything just don’t work. Unit economics here means the cost of each unit served, which in this case is the cost of using a frontier LLM to handle each request. This is what drives some companies to consumption-based pricing.
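To make the unit-economics problem concrete, here is a minimal sketch with made-up numbers. The per-token prices, the heavy user’s request volume, and the $20 subscription are all assumptions for illustration, not figures from any specific provider or product.

```python
# A minimal sketch of per-request unit economics. All numbers are assumptions.

INPUT_PRICE_PER_1K = 0.01    # assumed $ per 1K input tokens (frontier model)
OUTPUT_PRICE_PER_1K = 0.03   # assumed $ per 1K output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of serving a single request with a frontier LLM."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A heavy user: 200 requests/day, ~2K input and ~500 output tokens each.
cost_per_request = request_cost(2000, 500)
monthly_cost = cost_per_request * 200 * 30

subscription_price = 20.0  # assumed flat monthly fee
print(f"Cost per request:   ${cost_per_request:.3f}")
print(f"Monthly serve cost: ${monthly_cost:.2f}")
print(f"Monthly margin:     ${subscription_price - monthly_cost:.2f}")
```

With these assumed numbers, a heavy user costs roughly $210 a month to serve against a $20 flat fee. The exact figures don’t matter; the shape of the problem does.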
That pricing model solves the margin problem but creates a new one: customers constantly watch their meter, and usage plummets. The less customers use a product, and the more anxiety each use carries, the faster they churn. We need a hybrid approach.
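As one possible reading of “hybrid” (an illustrative assumption, not the full framework developed later): a flat subscription covers an included allowance of requests, and only usage beyond that allowance is metered. The base fee, allowance, and overage rate below are made up.

```python
# A hedged sketch of one possible hybrid: flat subscription with an included
# request allowance, plus metered overage. All parameters are assumptions.

BASE_FEE = 20.0           # assumed flat monthly subscription
INCLUDED_REQUESTS = 1000  # assumed monthly allowance covered by the base fee
OVERAGE_RATE = 0.04       # assumed $ per request beyond the allowance

def monthly_bill(requests: int) -> float:
    """Customer's bill under the hybrid model."""
    overage = max(0, requests - INCLUDED_REQUESTS)
    return BASE_FEE + overage * OVERAGE_RATE

for requests in (200, 1000, 6000):
    print(f"{requests:>5} requests -> ${monthly_bill(requests):.2f}")
```

Under a structure like this, typical users see a stable, predictable bill and never touch the meter, while outlier usage pays its own way, which is the behavior the rest of this framework is designed to produce.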