
GM. This is Milk Road AI PRO, separating the AI margins that last from the ones that already broke.
In Feb. 2024, Klarna told the market its AI assistant was handling two-thirds of customer service chats, cutting resolution time from 11 minutes to two, and would add $40M to 2024 profit. The cleanest AI margin story of the year.
Eighteen months later, the CEO admitted they had gone too far. Klarna is hiring humans back. The stock is down sharply since IPO.
That is the tension this report is built around. Not whether AI improves margins (it does) but which AI margins survive contact with real customers, real workers, and real demand.
I run a daily board-level view at a 10,000-person European company. The questions here are not abstract. They are showing up in real management meetings right now.
My thesis: the firms posting the cleanest-looking AI margins today (mass headcount cuts, narrow gain capture) are also the most exposed to the backlash already forming.
For investors, the question to ask is no longer "is this company adopting AI?" It is whether those margins are earned or borrowed, and whether the end-market can still spend if AI is also suppressing hiring and wages.
Here is how I think about it. 👇
AI is not optional
Public debate frames AI as a risk: jobs, instability, energy, concentrated power.
But for management teams, the first-order reality is simpler. AI is becoming a core productivity layer. The companies that hesitate fall behind on cost, quality, and innovation.
Two forces are driving this.
Agentic AI (since Openclaw shipped its free agent in Dec. 2025) is turning software from a passive tool into an active worker. Physical AI (humanoid robots) will extend the same logic into factories, warehouses, and logistics.
Put them together and the operating model of companies moves: more output, faster decisions, lower unit costs. Companies that deploy AI well will widen the gap on slower peers.
The real risk is unbalanced adoption
Once you accept AI adoption is inevitable, the fault line shifts. The real question is what happens when companies capture the gains too narrowly.
Inside firms, the benefits show up cleanly: lower unit costs, faster execution, better margins.
Outside firms, the costs show up: weaker labor demand, slower wage growth, rising insecurity.
One company swapping labor for AI is an efficiency program. Hundreds of companies doing it at once erodes the wage base that funds consumption.
If you think that argument does not hold up in the real economy, look at the survey data: when people are asked why they are worried about job security, AI is the second-most-cited reason.
And "my company's performance" arguably ties back to AI too. Anyone working at Salesforce or another seat-based SaaS company already feels what that means.
Firms privatize the gains (margins). Society absorbs the costs (no job growth). The negative feedback hits the firms later.
That is the central imbalance. If the gap gets too wide, the result is predictable: political backlash, heavier regulation, weaker demand, and eventually a worse operating environment for companies themselves.
Again, this is already happening. A couple of weeks ago, for example, a 20-year-old U.S. citizen threw a Molotov cocktail at Sam Altman's private home.
Now, the obvious pushback is that AI is strongly disinflationary, and that people, especially lower-income and younger segments of society, will soon enough feel those benefits.
Over time, that may well be true. AI will make many goods and services dramatically cheaper and raise quality of life across society. But even if that is the long-term destination, firms cannot assume deflation will arrive quickly enough to absorb the first wave of lost income and social anxiety.
So this is why unbalanced AI adoption is not just a social risk. It is a future revenue and valuation risk for companies.
Profit alone stops working
The deeper issue is not that companies are behaving irrationally. It is that most are still being steered by a model built for a slower, pre-AI era.
That worked when output, wages, and externalities all moved at roughly the same speed.
AI breaks the synchronization. It lets firms raise productivity and cut labor intensity faster than wages, society, and the environment can adjust. It pulls forward the moment when externalities feed back into the business itself.
Profit is still the engine. But profit alone is no longer enough to steer by.
The people building this future are starting to say so themselves. Sam Altman, for instance, has argued that GDP becomes a poor metric in a deflationary AI economy and that we should measure progress by “quality of life” metrics instead.
Now, I would not build an investment thesis on the idea that money or GDP go away anytime soon.
But even AI's builders admit the old scoreboard weakens when AI makes everything cheaper. So, the new operating model has to be broader.
A 3-pillar operating model
The fix is not to slow AI down. It is to steer AI-driven growth with a wider scoreboard.
The new question companies have to answer: how do we win with AI in a way that stays economically, socially, and environmentally durable?
1. Profit remains the centre of gravity through the transition
A firm that cannot create economic value will not survive long enough to deliver any broader benefit. But profit can no longer be a stand-alone objective.
It has to be earned inside two guardrails that protect the conditions that make AI profits durable.
2. Social stability through shared upside
AI gains cannot flow only to shareholders and senior management while workers and consumers absorb the downside.
Near-term, that means re-skilling and redeployment.
The window for AI-fluent employees who can drive 5-10x output is open right now. It will not stay open forever. Once agentic systems mature, companies can fully automate roles that currently still require humans, and the social contract has to evolve.
Universal Basic Income alone is not the answer. Cash transfers cushion disruption but do not give people a stake in the upside or a sense of purpose. The stronger model is broader ownership of the AI capital stock, so people participate in the gains rather than just receiving support from the system that replaced them.
Why this matters for companies:
Social stability protects the demand side.
If too much of the upside is captured too narrowly, firms widen margins in the short run while weakening purchasing power, customer trust, and political legitimacy.
Two examples of the social guardrail already pricing in:
First, public backlash over AI data centres forced Microsoft to support utility rates that fully cover its own power use, and Anthropic to absorb 100% of grid-upgrade costs. Social pressure became a P&L line item.
Second, after OpenAI's Pentagon deal during the Iran crisis, ChatGPT app uninstalls jumped 295% day-over-day in the U.S., downloads dropped 13%, and Claude downloads picked up.
3. Environmental sustainability
AI feels digital, but its footprint is physical: power, data centres, cooling, materials, infrastructure (a.k.a. where the AI trade actually makes money 🙂). If AI-driven growth scales without environmental discipline, firms solve one productivity problem and create a new constraint somewhere else.
Imagine fully autonomous robotic systems extracting raw materials at near-zero marginal cost. Send them into mines, oil rigs, and arctic zones. Without discipline on how much can be pulled out before nature breaks, the floodgates open and global warming gets worse, faster.
Why this matters for companies:
Because sustainability protects the supply side of the business.
It helps preserve access to energy, infrastructure and materials, all required for the AI buildout. Environmental discipline is therefore not separate from growth. It increasingly determines whether AI-driven growth can scale.
So, profit remains the engine of our economy for now, but the social and environmental layers define the playing field on which profit can be made.
That is why this framework is not anti-growth. It is a way to make AI-driven growth more durable.
Who actually wins
The biggest winners will not be the companies that simply adopt AI. They will be the ones who turn adoption into an advantage that lasts.
Most firms will be able to tell a clean AI margin story over the next couple of years. Far fewer will hold up when pressure builds from workers, customers, regulators, or infrastructure constraints. That is where the separation begins.
Companies do not have to solve every AI externality on their own. Governments still own the big questions: e.g., how to tax in a world where labor income is shrinking, or how to educate when ChatGPT can solve most problems at hand.
But companies control three things that matter most in the transition:
- How fast they adopt AI.
- How they handle workforce transition and gain-sharing.
- How seriously they build sustainability into the model.
Those three choices are strategic, and they decide which firms post durable margins.
What does this mean for investors?
The framework is not just a management lens; it is also an investor lens. The point is not to turn AI investing into a feel-good theme. It is simpler: ignoring these guardrails can hurt your returns.
Let me break this down into the two actionable pillars I take from this as a retail investor.
1. Margin quality
Not all AI margin expansion is created equal.
Some margin gains are earned. They come from better processes, faster execution, higher quality, and lower complexity.
Others are borrowed. They come from cutting human capacity too aggressively and drawing down the social assets the business depends on: service quality, customer trust, employee know-how, and brand equity.
That is where the social guardrail starts to hit the P&L.
Klarna is the cleanest example.
In Feb. 2024, Klarna said its OpenAI-powered assistant was already handling two-thirds of customer-service chats, cutting average resolution time from 11 minutes to two minutes, and expected to drive a $40M profit improvement in 2024.
On paper, that looked like the perfect AI margin story.
But the story later became more complicated. Klarna’s CEO said the company had likely gone too far in using AI mainly for cost-cutting, and Reuters reported that the company shifted focus back toward service quality, product improvement, growth, and selective hiring.
(Chart: Klarna's stock price since IPO.)
That is the margin-quality lesson.
AI savings are not automatically high-quality earnings.
If they come from better operations, they can be durable. But if they come from cutting too far, too fast, they may only look good until customers, employees, or other stakeholders start to push back.
In other words: the social guardrail is not separate from the margin story.
2. Demand durability
The social guardrail also matters because customers need income before they can spend.
Most AI narratives focus on the supply side: productivity up, costs down. But as described earlier, AI also weakens the demand side if it slows hiring, pressures wages, or reduces consumer confidence.
PageGroup is a signal.
PageGroup is one of the world’s largest recruitment firms, so its business depends directly on companies hiring people. Reuters reported that AI-related uncertainty was contributing to delayed recruitment decisions and hiring freezes. Q1 gross profit fell 4.9%, and the stock initially dropped as much as 6%.
That shows the demand impact can start before mass layoffs. If companies pause hiring because they are unsure how many humans they need, employment-linked business models already feel the pressure.
And the same logic can spread further. AI agents are already attacking seat-based software models, because fewer human workers can mean fewer software seats.
Over time, weaker job growth and lower income confidence could also pressure consumer-discretionary, credit-sensitive, travel, retail, and restaurant businesses.
So investors should not only ask whether AI improves margins. They should also ask whether the company’s end-market can still grow if AI weakens hiring, wages, and consumer confidence.
Final takeaway
AI is forcing every company into the same trade. Move fast, or fall behind on cost, quality, and innovation.
But moving fast does not automatically mean winning. The winners are the firms whose AI margins survive contact with workers, customers, regulators, and supply chains.
Two checks before adding AI exposure:
- Margin quality: Are the gains coming from better operations, or from cuts that will need to be reversed (Klarna)?
- Demand durability: If AI is suppressing hiring and wages, can the end-market still buy what this company sells (PageGroup)?
For investors, that creates a simple lens: Do not just ask whether AI improves margins. Ask whether those margins can last.
Alright, that’s all I’ve got for you today.
If you have strong opinions here, I want to hear from you on X: @WhiteCollarExit
Keen to hear your view.
Catch you in the next one,
Vinc 🙂
AI-GENERATED PODCAST 🤖
We’ve turned this PRO report into an AI-generated podcast to make it even easier to digest. You'll find the audio player below. 👇️🎧️
The AI margin trap
Disclaimer: This podcast was created using AI and is based on the research report above. While we've done our best to ensure accuracy, the audio may contain minor errors, technical glitches, or mispronunciations. Please note that this podcast provides an overview of the report and is not a comprehensive or definitive take on the topic.
