Vikas Kansal, who leads Google's AI subscriptions business, published a framework this week explaining why traditional SaaS pricing collapses when applied to AI products. The piece appeared on Lenny's Newsletter and is the clearest articulation I have seen of the economic structure underneath the entire AI monetization problem.
The thesis is short. Every time a free user hits Enter on a frontier AI product, the GPUs fire and the cash burns. There is no way for the platform to subsidize unlimited free usage the way Slack, Notion, or Figma could during their growth phases. The pricing model has to match the cost structure. The SaaS freemium model does not.
Why The Old Playbook Stops Working
SaaS economics rest on a specific assumption. The marginal cost of serving one additional free user is essentially zero. Once the software is built and the infrastructure is provisioned, serving the hundredth or the millionth user costs almost nothing more than serving the tenth. This is why freemium worked. You let millions of free users in, you converted a small percentage, and the unit economics still cleared because the free tier did not actually cost anything to operate.
AI products do not have zero marginal cost. Every query fires inference compute that costs real money. Frontier models on premium hardware can cost between several cents and several dollars per complex query depending on context window, output length, and reasoning depth. Multiply that across millions of free users running tens of millions of queries per day, and the gross margin disappears before any paid conversion happens.
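That multiplication is worth making explicit. A minimal back-of-envelope sketch, using illustrative numbers I am assuming for demonstration (the user count, query rate, and per-query cost are not figures from Kansal's piece):

```python
# Back-of-envelope free-tier inference burn.
# All three inputs are illustrative assumptions, not sourced figures.

free_users = 5_000_000          # assumed free-user base
queries_per_user_per_day = 8    # assumed average daily usage
cost_per_query = 0.02           # assumed $0.02 blended inference cost

daily_burn = free_users * queries_per_user_per_day * cost_per_query
annual_burn = daily_burn * 365

print(f"Daily free-tier inference cost:  ${daily_burn:,.0f}")
print(f"Annual free-tier inference cost: ${annual_burn:,.0f}")
```

Even at two cents per query, the assumed free tier burns $800,000 a day, roughly $292 million a year, before a single paid conversion. At zero marginal cost, that line item simply does not exist in the SaaS version of the model.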
There is a second problem layered on top. The free tier itself often becomes capable enough to obviate the paid tier. Kansal points to Google's free AI tier already outperforming what most users could produce themselves when writing code, drafting emails, and analyzing documents. When the free version is already better than the user's prior baseline, the upgrade pitch loses its bite. The product cannibalizes its own conversion funnel.
The Framework That Actually Works
Kansal proposes three pillars for pricing AI products that match the underlying cost structure.
The first pillar is usage intensity. Plus, Pro, and Ultra tiers that gate access to higher token limits, longer context windows, and faster response priority. Midjourney's Fast Mode versus Relax Mode is the canonical example. The free tier still exists, but it sits inside meaningful capacity constraints that route heavy users into paid plans. The economics work because the heaviest users are the ones costing the most compute.
The second pillar is outcomes. Pricing per resolved customer service ticket, per closed sales lead, per generated marketing asset, per hour of human work replaced. Intercom's Fin charges 99 cents per resolution. Sierra prices its customer service agents on outcomes rather than seats. The economic logic is that the platform takes a fraction of the value created, not a flat seat fee that has no relationship to the value generated.
The third pillar is compute-heavy modalities. Video generation, long-form audio synthesis, large agentic workflows. Restrict these to premium tiers because their cost structure makes them unsuitable for free access at scale. Sora, Veo, and equivalent video models live in this category. The premium tier is not arbitrary upsell pressure. It is the only economically viable place to host the modality.
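As a sketch, the three pillars can be expressed as three distinct billing mechanisms. Everything below is an illustrative assumption on my part: the tier names, token caps, the $0.99 per-resolution price (borrowed from the Fin example above), and the modality list are stand-ins, not anyone's published rate card:

```python
# Sketch of the three pricing pillars. Tier names, caps, prices, and
# modality names are illustrative assumptions, not real rate cards.

# Pillar 1: usage intensity. Tiers gate monthly token budgets so the
# heaviest (most expensive) users are routed into paid plans.
TIER_TOKEN_CAPS = {"Free": 100_000, "Plus": 2_000_000, "Ultra": 20_000_000}

def within_tier_cap(tier: str, tokens_used: int) -> bool:
    """Return whether this month's usage still fits the tier's budget."""
    return tokens_used <= TIER_TOKEN_CAPS[tier]

# Pillar 2: outcomes. Bill per resolved unit of work, not per seat,
# so revenue tracks the value created.
def outcome_bill(resolutions: int, price_per_resolution: float = 0.99) -> float:
    """Charge a fraction of value created, e.g. $0.99 per resolved ticket."""
    return resolutions * price_per_resolution

# Pillar 3: compute-heavy modalities. Expensive primitives are only
# available on paid tiers because free access cannot clear their cost.
PREMIUM_ONLY_MODALITIES = {"video_generation", "long_form_audio", "agentic_workflow"}

def modality_allowed(tier: str, modality: str) -> bool:
    """Gate compute-heavy modalities out of the free tier entirely."""
    return modality not in PREMIUM_ONLY_MODALITIES or tier != "Free"
```

The design point the sketch makes concrete: each pillar meters a different quantity (tokens, outcomes, modality access), and all three tie the bill to the thing that actually drives compute cost, which seat-based pricing never does.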
The framework is internally consistent, and it matches the operating reality of every AI product team I talk to, both at difrnt. and across the broader AI ecosystem. Teams that build AI products on SaaS seat-based pricing burn cash chasing usage growth they cannot afford. Teams that price on usage intensity, outcomes, or compute-heavy modality lines have a path to gross margin.
The implication for marketing and product leaders is concrete. If your team is launching an AI feature inside an existing SaaS product, the seat-based pricing model that funded the rest of your business will produce a margin disaster on the AI feature. Either price the AI feature separately on usage intensity or outcomes, or accept that you are subsidizing it from the SaaS line and budget for the cost of carry.
For founders building AI-native products from scratch, the question is not whether to copy the SaaS playbook. The question is which pillar of the new framework matches the specific value your product creates. Usage intensity for products that scale with query volume. Outcomes for products that replace defined work. Compute-heavy modality gating for products built on expensive primitives.
The pricing layer is one of the few places where business model design directly determines whether the product survives the next two years. The old playbook stops working at the moment of contact with AI cost structures. The new playbook is now public. The teams that adopt it first will keep the margin that the teams copying SaaS pricing will lose.
