
GM. Welcome to the 2nd edition of Milk Road AI PRO, where the AI narrative collides with physics, power, and profit margins.
In report #1, we made the call that the bottleneck isn’t chips. It’s power.
This report picks up right there: at the moment AI stops being a cloud-only story and starts turning into a real-economy buildout.
The shift is simple:
Once models get good + fast + cheap, AI doesn’t just live in datacenters. It gets baked into devices, machines and eventually fleets.
And that’s the tell.
AI stops looking like “software adoption” and starts looking like an industrial upgrade cycle, showing up in shipments, installs, and new orders.
That’s when the winner list expands way beyond a handful of hyperscalers.
Here’s what we’ve got for you today:
- Why hyperscalers might wobble in 2026
- Why 2026 will become the breakthrough year for real world AI
- The physical AI buildout timeline
- Where to place your bets
WHY HYPERSCALERS MIGHT WOBBLE IN 2026
We touched upon them in the first report, but here is a slightly deeper view that sets the stage for this one.
AI hyperscalers are spending huge sums to build AI capacity today, but it can take 3-4 quarters before that capacity is sold.

That creates a timing gap: the cash goes out immediately for construction and equipment, and depreciation starts once the assets go live.
So, earnings get hit before the revenue ramp shows up.

(Depreciation means an asset’s value declines over its useful life, and that decline has to be recorded as an expense in the company’s profit and loss statement.)
So you get: spend + depreciation now, while revenue is still ramping.
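To make that timing gap concrete, here’s a quick back-of-envelope sketch. Every number is made up for illustration (a $10B spend, straight-line depreciation over 5 years, revenue starting 3 quarters after go-live), not any hyperscaler’s actual figures:

```python
# Hypothetical: $10B of AI capacity goes live in Q0, depreciates
# straight-line over 5 years (20 quarters), and only starts earning
# revenue ~3 quarters later, ramping from there. All numbers in $M.
CAPEX = 10_000
DEP_QUARTERS = 20
dep_per_q = CAPEX / DEP_QUARTERS  # $500M hits the P&L every quarter

# Revenue ramp: nothing for 3 quarters, then growing utilization
revenue = [0, 0, 0, 200, 400, 700, 1000, 1300]  # $M per quarter

for q, rev in enumerate(revenue):
    op_income = rev - dep_per_q  # ignoring opex for simplicity
    print(f"Q{q}: revenue={rev:>5}  dep={dep_per_q:.0f}  op_income={op_income:>6.0f}")

# Quarters 0-4 show negative operating income from this asset, even
# though the cash already left in Q0 — earnings get hit well before
# the revenue ramp shows up.
```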
On top of this, hyperscalers are, for the first time at this scale, taking on debt to finance the massive datacenter buildout, increasing the risk associated with the investments.

Finally, “Edge inference” may impact hyperscalers' AI-cloud business.
The core idea behind this is that once phones and laptops have “good enough” AI natively on the device, there will be less AI work needed from the cloud.
Big tech, especially Apple, is moving in that direction, building for on-device processing with harder tasks reserved for the cloud.

Those developments won’t kill cloud demand outright, but they cap cloud inference growth and hyperscaler pricing power at the margin.
So to sum it up, hyperscalers’ multiples might wobble in 2026 because:
- CapEx + depreciation immediately drag on margins before AI revenue ramps
- Debt adds risk as it increases interest rate payments and adds leverage
- Edge AI devices start to dilute cloud demand and pricing power
WHY 2026 WILL BECOME THE BREAKTHROUGH YEAR FOR REAL WORLD AI
Now you might be worried and ask yourself: if AI hyperscalers struggle in 2026, will this have a structural impact on the broader AI landscape?
In my opinion, no. AI demand and applications are diffusing into real-world products and processes, and that diffusion accelerates in 2026.
Why do I think this acceleration starts in 2026?
It’s because a confluence of technical leaps in 2024-25 drastically improved the quality, speed, cost per task and deployability of AI models.
In plain English, AI is finally good and cheap enough to leave the lab and scale into the real world.
Let me break those 4 points down:
1. Quality of the models
New multimodal and action-capable models emerged that can do things earlier models couldn’t.
The latest reasoning models don’t just spit out text, they actually think.
For example, here are the ChatGPT 5.2 thinking results on the “Graduate‑Level Google‑Proof Q&A” (GPQA) benchmark.
The GPQA is a set of highly challenging multiple‑choice questions designed to test whether AI models can reason like an expert scientist rather than just search the web.

In addition to higher quality output, models gained vision.
Instead of just processing text, the latest AIs can understand images and video, which unlocks tasks beyond the screen.
This means an AI assistant can now analyze a photo, read a diagram, or navigate with a camera.
These are critical capabilities for “real-world” use cases like controlling physical devices (robot arms, drones, home appliances).
2. Speed of the models (latency)
A big barrier to deploying AI widely has been the lag and unpredictability of response times, especially when everything runs in the cloud.

The chart shows that OpenAI’s GPT‑5.2 dramatically reduces response times compared with GPT‑5 and GPT‑5.1 across different tasks.
That step change was possible with better hardware and model optimization.
2025 was a banner year for hardware: Nvidia’s latest datacenter GPUs and new competitors (like AMD’s MI300, Google’s TPU v5) all pushed the envelope and, critically, edge AI chips proliferated.
These chips are specialized to run AI inference faster and with far less power draw.
The result: tasks that used to take seconds (or needed a round trip to a datacenter) can now happen in a few milliseconds locally.
Most importantly, low latency is essential for physical world uses. A self-driving system or robot arm can’t wait 5 seconds for the cloud to respond.
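A rough latency budget makes the point. The numbers below are illustrative assumptions, not benchmarks from any vendor:

```python
# Illustrative latency budgets in milliseconds — assumed values, not benchmarks.
cloud_roundtrip = {
    "network to datacenter": 40,  # RTT over the public internet
    "queueing / batching": 30,    # waiting for a slot on a shared GPU
    "inference": 80,              # large model on remote hardware
}
on_device = {
    "inference": 15,              # small model on a local NPU
}

print("cloud total:", sum(cloud_roundtrip.values()), "ms")  # 150 ms
print("edge total: ", sum(on_device.values()), "ms")        # 15 ms

# A robot arm running a 50 Hz control loop has a ~20 ms budget per cycle:
# the edge path fits, the cloud round trip doesn't — and that's before
# network jitter or dropped connections.
```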
3. Cost per task
This is perhaps the most eye-popping change.
The unit cost of AI inference (getting an answer or prediction) has absolutely plummeted in the last two years.

The Stanford analysis above found that for a model performing at GPT-3.5 level on a standard test, the cost per query fell from ~$20 per million tokens in late 2022 to just ~$0.07 by late 2024.
That is a reduction of over 280× in just 24 months.
Multiple factors drove this: more efficient chips (better price/performance), algorithmic efficiency gains, and the ability to use smaller specialized models instead of one giant model for everything.
For businesses, the implications are huge:
AI features no longer automatically mean a hefty cloud bill and cheaper inference broadens where AI makes economic sense.
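To see what a ~280× price drop does to product economics, here’s a quick worked example using the Stanford prices cited above. The usage numbers (users, queries, tokens per query) are hypothetical:

```python
# Hypothetical product: 1M users, 10 AI queries per user per day,
# ~1,000 tokens per query. Prices per the Stanford figures above.
users, queries_per_day, tokens_per_query = 1_000_000, 10, 1_000

monthly_tokens = users * queries_per_day * tokens_per_query * 30  # 300B tokens

price_2022 = 20.00 / 1_000_000  # $ per token, late 2022 (GPT-3.5-level)
price_2024 = 0.07 / 1_000_000   # $ per token, late 2024

print(f"2022 bill: ${monthly_tokens * price_2022:,.0f}/month")  # $6,000,000
print(f"2024 bill: ${monthly_tokens * price_2024:,.0f}/month")  # $21,000

# A $6M/month cloud bill kills most free features; $21K/month is a
# rounding error. That's how cheaper inference broadens where AI
# makes economic sense.
```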
4. Deployability (at scale)
Deployability is the compound effect of the three shifts described:
- Quality makes AI reliable enough to trust in real workflows
- Low latency makes it feel native (not a slow cloud add-on)
- And low cost makes it economically viable to turn on for everyone
What this means is that AI is finally ready to be plugged into products and operations.
THE PHYSICAL AI BUILDOUT TIMELINE
Once AI is good, fast, and cheap enough to deploy everywhere, the story shifts from better models to the adoption of physical AI use cases.
That plays out in three waves:
- Cloud commissioning catch-up
- Edge upgrades (AI moves into devices)
- Embodied fleets (AI moves into machines)

Source: Gemini Nano-Banana
Wave 1 - Cloud commissioning catch-up
This is the “finish installing what we already bought” phase. Hyperscalers keep pouring concrete, racking GPUs, wiring power, and turning capacity on.
It’s not about new use cases, it’s about bringing ordered capacity online.
This is where the physical infrastructure bottleneck described in the last AI PRO report constrains the buildout.
Wave 2: Edge upgrades (AI moves into devices)
Once cloud capacity is energized and stable, the next unlock is simple: run more AI closer to the user.
Wave 2 is the “edge upgrade” cycle where inference shifts from “call the cloud” to “run it locally”.
That makes AI faster, cheaper, more private and easier to ship at scale.
What gets produced in Wave 2 isn’t “AI” in the abstract, it’s new hardware:
- AI PCs and laptops with on-device neural engines
- GenAI smartphones
- Enterprise edge stacks: small on-premises servers, gateways, sensors and software that keep data local (factories, retail, healthcare, call centers)

Gartner is predicting that in 2026, AI-capable devices flip from niche to default, showing a fast upgrade cycle is underway.
Wave 3: Embodied fleets (AI moves into machines)
This is where AI starts demanding physical CapEx, not just compute.
The shift is from “smart device” to “smart machine” and eventually to fleets of smart machines.
What gets produced in Wave 3:
- (Humanoid) robots
- Autonomous vehicles in ring-fenced environments first (mines, ports, campuses, etc.)
- Autonomous vehicles in broader robotaxi-style deployments
- Drone fleets for inspection, mapping, security, delivery
The adoption of those autonomous technologies will be massive.

Source: The Business Research Company
Most forecasts land in the same ballpark on where this is going and what the implications might be.

HERE IS THE INVESTMENT HYPOTHESIS
The key takeaway from mapping the expected AI rollout over the next few years is that this isn’t about a software adoption curve anymore.
It’s a compounding physical infrastructure boom across the real economy, where the beneficiaries are the companies shipping the stuff that makes AI usable in daily life.
What does that mean?
It means AI demand stops being concentrated in a handful of tech companies and the CapEx cycle broadens into physical asset procurement.
Aka millions of purchase decisions across consumers, enterprises, and industrial operators.
Investment hypothesis:
AI has shifted from “software + datacenters” to “power, cooling, and hardware for datacenters”, and as it moves into devices and machines, it expands into a much broader real-economy buildout.
Based on that, two core investment categories emerge:
1. Product winners
These are the companies that turn AI from a cloud feature into something people use in real life, baked into devices, vehicles or robots, so AI adoption shows up in shipments, usage, and recurring revenue (not just datacenter spend).
2. PMI > 50 cycle
Once AI becomes a mass rollout of real equipment (not just software), demand shows up in manufacturing.
More new orders, bigger backlogs, and longer delivery times, which is the setup that tends to push the PMI (Purchasing Managers Index) higher.
A PMI above 50 means that manufacturing is expanding, and cyclical suppliers benefit as factories restart CapEx and restocking, creating broad earnings leverage.
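For reference, the PMI is a diffusion index. Here’s a minimal sketch of the standard calculation — the survey shares are made up, and I’m assuming the ISM-style equal weighting of the five components:

```python
# A diffusion index = % of purchasing managers reporting "better"
# plus half the % reporting "no change". Above 50 = expansion.
def diffusion(pct_better: float, pct_same: float) -> float:
    return pct_better + 0.5 * pct_same

# Hypothetical survey results for the five ISM components
# (assuming the modern equal-weighted composite):
components = {
    "new_orders":          diffusion(40, 45),  # 40% better, 45% same -> 62.5
    "production":          diffusion(35, 50),
    "employment":          diffusion(25, 55),
    "supplier_deliveries": diffusion(30, 55),  # slower deliveries count as "better" here
    "inventories":         diffusion(20, 60),
}

pmi = sum(components.values()) / len(components)
print(f"PMI = {pmi:.1f}")  # 56.5 -> manufacturing expanding

# The thesis in one line: more new orders and longer supplier
# deliveries push these sub-indexes — and the composite — up.
```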
WHERE TO PLACE YOUR BETS?
Product winners: Apple and the AI upgrade cycle
Everybody is saying Apple has missed the AI train.
From my perspective, Apple doesn’t need to win AI in the cloud, that’s not their business.
Apple wins if AI becomes a reason to upgrade iPhones/Macs and users spend more inside the Apple ecosystem.

In order to achieve that, they are working on three core levers:
1. “Siri becomes an agent”
Even though they delayed the new Siri version into 2026, this is a major lever for them.
A version of Siri which has access to personal context and is able to take actions across apps is a very useful tool for users.
Siri already handles 1.5 billion requests per day. This number could grow exponentially if it becomes as useful as some of the leading LLMs.
2. “AI inside apps”
Apple opened up its on-device foundation model to third-party developers.
This means that they are basically offering an on-device AI brain, where apps can plug into it to add various features.
Because it runs on the phone/laptop, it can be faster, work even with bad internet, and feel more private than sending everything to the cloud.

3. Lower risk exposure
Apple’s strongest edge is that it can monetize AI through more useful products, driving higher ecosystem spend and services growth (e.g., Apple Services).
All this without taking hyperscaler-style capital spending and depreciation risk to build giant GPU fleets.
Instead, they partner with model providers (like Google’s Gemini) and pay ongoing usage fees (operating costs) rather than building their own datacenters upfront (capital spending), while still offering users the latest AI tech.

In a world where investors may penalize heavy, volatile AI infrastructure spending, Apple’s AI revenue with a lighter CapEx profile could matter a lot.
Product winners: Tesla and the bet on embodied AI
In contrast to Apple, Tesla is a company favored by a lot of retail investors.
Tesla wins in AI if it becomes the mass-manufacturer of real-world robots that automate mobility and labor.
In order to achieve that, they are working on three core products:
1. Full Self-Driving (FSD)
Tesla treats FSD as a “robot brain” project, turning driving into software that keeps improving via frequent updates, like iOS for cars.
The strategic bet is a general autonomy stack that can scale across roads without being hand-coded city by city.
If it works, every Tesla becomes a software-upgradable robot, shifting value from one-time car sales to higher-margin, repeatable software revenue.

FSD is already a real, safe, and widely shipped driver-assist product which, under active driver supervision, works almost perfectly (as you can tell from the chart above).
Earlier this month, Tesla vehicles were spotted in Austin, Texas operating with no one inside the car, an important sign of progress toward full adoption and trust in the technology.

2. Robotaxi service fleet
Robotaxi is Tesla’s shift from selling cars once to selling rides over and over, turning each car into a money-making asset on a Tesla-run ride service.
In 2025, Tesla moved from concept to reality with a pilot in Austin.
By Q3-Q4 2026, the goal is scaling one city at a time and ramping up a purpose-built robotaxi vehicle, which is when Tesla expects autonomy to start showing up in their financials.
Adam Jonas, a well-known Morgan Stanley analyst, predicts that the fleet will grow to 1,000 vehicles by 2026, up from their current count of 50-150.
By 2035, he expects over 1 million Robotaxis deployed across multiple cities.

The massive opportunity of this technology and business model can be recognized in expected margins.
Currently, Tesla electric vehicles are sold at gross margins of ~15-20%.
Removing the human driver eliminates a major cost and utilization bottleneck, raising margins massively.

And because EVs are cheap to run, autonomy can push the cost per mile down, letting Tesla offer cheaper rides than existing mobility companies such as Uber, which then pulls market share toward Tesla.
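Here’s a hedged back-of-envelope on the per-mile economics. Every number below is an assumption for illustration, not Tesla or Uber data:

```python
# All figures are illustrative assumptions, not Tesla or Uber data.
RIDE_PRICE = 2.00      # $ per mile charged to the rider

# Human-driven ride-hail: the driver takes the bulk of the fare
driver_cost = 1.30     # $ per mile (driver pay + incentives)
vehicle_cost = 0.40    # $ per mile (depreciation, energy, maintenance, insurance)
human_margin = RIDE_PRICE - driver_cost - vehicle_cost  # $0.30

# Robotaxi: no driver, cheap EV energy, but add remote ops + cleaning
robotaxi_cost = 0.50   # $ per mile (vehicle + charging + teleops + cleaning)
robo_margin = RIDE_PRICE - robotaxi_cost                # $1.50

print(f"human-driven margin: ${human_margin:.2f}/mile ({human_margin/RIDE_PRICE:.0%})")
print(f"robotaxi margin:     ${robo_margin:.2f}/mile ({robo_margin/RIDE_PRICE:.0%})")

# Removing the driver flips a ~15% per-mile margin into ~75% — or leaves
# room to cut prices and take share while still earning more per mile.
```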
The Cybercab should also be cheaper to build than rival robotaxis, mainly thanks to Tesla’s vertical integration: the Gigafactories already produce cars at scale today.

In short, robotaxis are how Tesla turns FSD into recurring revenue, higher margins, and lower-cost mobility.
3. Humanoid robot: Tesla Optimus
Tesla wants to take the same real-world AI it built for FSD and package it into a general-purpose humanoid robot called Optimus.
Where cars automate mobility, Optimus targets the $50-60T global labor market.
However, this is still in the prototyping phase. It’s not a commercial product yet.
Musk said that he expects the first Optimus production lines at the end of 2026, with scaled production in 2027 and beyond.
A major advantage Tesla has vs. other humanoid companies is that they can test the product within their own production environment first.
Once they are satisfied with performance and have solved key bottlenecks such as hand dexterity and building a supply chain for required components, they can deploy Optimus at industrial sites and eventually in private homes.
Deutsche Bank calculates that by 2035, Tesla could sell ~1.25M bots. At a price of $25K per bot, that puts revenue at ~$31B from Optimus alone.
PMI >50 trade
Once Apple, Tesla, and many others make AI feel like a must-have upgrade in phones, laptops or cars, demand turns into real buying and manifests in the form of factory orders, bigger backlogs, and longer delivery times.
This would be a typical setup for the PMI to go above 50, showing that manufacturing is in expansion.
In that setup, the market tends to reward the real-economy suppliers whose revenues are tied to shipping and installing equipment.
Let me break those down (an illustrative, not exhaustive, list):
- Connectivity + networking infrastructure: moving more data, faster, across enterprises and edge environments.
- Memory + storage supply chain: the capacity layer that scales with inference and multimodal workloads.
- Fiber + optical interconnects: high-bandwidth links inside and between facilities.
- On-site / distributed power solutions: when firms want more control over uptime and power availability.
- Electronics manufacturing throughput: more chips, storage, packaging, and testing, plus the upstream materials and equipment that expand factory capacity.
- Industrial automation hardware: sensors, actuators, drives, power electronics, rugged edge computers.
Let me give you two examples showing how similar physical infrastructure cycles pushed the PMI higher in the past.
The car + highway era (~1960s): This wasn’t just about better cars.
It triggered a full buildout of highways, gas stations, repair shops, spare-parts supply chains, and mass auto manufacturing.

Smartphone / 3G & 4G upgrade cycles (2012-2019): This wasn’t just about apps.
It triggered waves of spending on new devices, mobile chips, radio equipment, towers, fiber backhaul, and network upgrades as adoption scaled and each new generation forced another refresh.

Both of those physical infrastructure cycles drove the PMI strongly above 50.

Finding the winners in the PMI > 50 trade is not easy though.
Many large caps tied to the datacenter buildout (Wave 1) have already had a big run, so a lot of the “cloud infrastructure” upside looks priced in.
Names like GE Vernova and Siemens Energy are already trading like the market’s chosen winners.
So the better hunting ground is often one step further down the stack, in the “boring-yet-essential” category of enablers that make the real-world rollout work.
The filter to find them is simple and repeatable: look for management commentary that shifts from “AI is a trend” to “we see it in orders”, then confirm it in the numbers.
Rising order intake, expanding backlog, longer lead times, and upward guidance revisions are the classic tells that the PMI cycle is turning.
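If you want to operationalize that filter, here’s a minimal sketch. The field names and thresholds are hypothetical, not a real data vendor’s schema — you’d plug in your own fundamentals source:

```python
# Hypothetical screen for "we see it in orders" — field names and
# thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Quarter:
    order_intake_yoy: float  # new-order growth vs. a year ago (0.12 = +12%)
    backlog_yoy: float       # backlog growth vs. a year ago
    lead_time_days: float    # average quoted delivery lead time
    guidance_raised: bool    # did management raise full-year guidance?

def pmi_cycle_tell(prev: Quarter, curr: Quarter) -> bool:
    """True if the quarter shows the classic 'PMI turning up' pattern."""
    return (
        curr.order_intake_yoy > 0.05                   # rising order intake
        and curr.backlog_yoy > 0                       # expanding backlog
        and curr.lead_time_days > prev.lead_time_days  # stretching lead times
        and curr.guidance_raised                       # confirmed by guidance
    )

q1 = Quarter(0.02, -0.01, 45, False)
q2 = Quarter(0.09, 0.06, 52, True)
print(pmi_cycle_tell(q1, q2))  # True -> worth a closer look, not a buy signal
```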
A few examples that fit this one layer down setup that might be interesting:
- Cisco (CSCO): builds the “internet plumbing” for companies, so when more AI runs inside offices and devices, companies need faster networks to move more data.
- Micron (MU): makes memory chips, and AI needs a lot of memory to work smoothly, so demand can jump when AI use scales up.
- Corning (GLW): makes the glass and fiber cables that carry data, so more AI means more “data highways” getting built and upgraded.
- Bloom Energy (BE): provides on-site power systems, which help when companies want reliable electricity for AI equipment without depending only on the local grid.
Of course, those trades come with more risk, because single-equity names can be more cyclical, more volatile, and sometimes less liquid than mega-caps.
So if you want the theme with less single-name risk, a practical approach is using broader industrial and infrastructure ETFs that give diversified exposure across the buckets above.
THE REAL TRADE STARTS NOW
If you only remember one idea from this article, make it this:
AI is moving from datacenters and software to real products.
- Wave 1 was the cloud build and power up phase
- Wave 2 is the device upgrade cycle (smartphones, PCs, etc.)
- Wave 3 is when AI shows up as machines and fleets
That is why the opportunity set gets broader. It is not just a hyperscaler and software trade.
So what should you actually watch?
Think in buckets, not single stocks:
- Product leaders selling the AI upgrades people actually buy (smartphones, PCs, cars, robots).
- PMI-cycle suppliers that benefit when factories get flooded with orders (power gear, cooling, networking, memory, fiber).
And yes, 2026 might look choppy for hyperscalers as spend hits before revenue. But that sits on top of a much bigger AI buildout.
TLDR: No AI bubble!
Alright, that’s all I’ve got for you today, hope you found it valuable!
Respond to this to let me know your thoughts on this edition, and the new Milk Road PRO AI format in general! (@WhiteCollarExit).
I’m keen to hear what you think.
Catch you in the next one. 🫡