GM. Welcome back to Milk Road AI, the only inbox where we explain trillion-dollar tech wars before your coffee gets cold.
Here’s what we’ve got for you today:
- ✍️ Is memory finally breaking the cycle?
- 🎙️ The Milk Road AI Show: AI vs Humans: What Jobs Survive the Next 10 Years?
- 🍪 Netflix buys Ben Affleck’s AI studio.
Okara’s SEO & AI Visibility Agent helps your business rank higher on Google and get cited in AI answers by running daily audits for SEO/GEO visibility issues and recommending high-impact fixes that boost traffic and conversions. Try out Okara’s SEO & AI Visibility Agent today.

Prices as of 10:00 a.m. ET.

THE INVISIBLE BOTTLENECK OF THE AI ECONOMY
In October 1943, the U.S. Army Air Forces launched one of the most dangerous bombing raids of World War II.
The target wasn’t a submarine pen, a weapons factory, or an airfield.
It was a ball bearing plant in Schweinfurt, Germany, and 291 B-17 bombers flew deep into German territory.

Sixty were shot down, and more than 600 airmen were killed, wounded, or went missing in a single afternoon.
Why? Because Allied intelligence had figured out something terrifying:
Every tank, plane, submarine, and vehicle in the German war machine needed ball bearings to function.
Without them, turrets couldn’t rotate, engines couldn’t turn, and factories couldn’t produce.
The Allies were willing to accept a 20% loss rate in a single afternoon just to hit the most boring, overlooked component in the German supply chain.
Because they understood something most people missed: the terrifying weapons got all the attention, but it was the invisible bottleneck that held the entire machine together.
The AI industry has the same bottleneck right now.
Everyone is obsessed with the GPUs, NVIDIA’s latest chips, and the trillion-dollar models.
But there’s one component that every single one of those systems depends on, and without it, the most expensive processor on Earth sits idle, doing absolutely nothing.
Memory chips.
And there’s a company in Boise, Idaho, the last American manufacturer in a three-player global oligopoly, and it just posted the most explosive quarter in semiconductor history.
The stock is up nearly 328% in twelve months, with a market cap of $438B, and Wall Street still prices it like a commodity.
The company is Micron Technology (MU).
And the debate around it might be the single most important question in AI investing right now:
Is memory a commodity? Or is it the ball bearing of the AI revolution?
Quick side note: Two weeks ago, I published a PRO report calling Palantir a buy for our PRO members. Since then, the stock is up roughly 20%.
The AI PRO subscription is just $20 a month.
You also get Discord access, where you’re welcome to come bully me directly instead of replying to this email.
So you can either spend $20 on the research or watch a stock move 20% and then wish you had.
Why the GPU is starving
For decades, the chip industry focused on one thing: making processors faster.
More cores, more transistors, higher clock speeds: raw performance was everything. But AI flipped that equation.
The real problem isn’t just how fast the processor can think; it’s how fast you can feed it the data it needs.
Researchers at DEC saw this coming back in 1996.
They published a paper essentially arguing that we were building faster and faster engines while the fuel line stayed a garden hose.
They termed it the “memory wall.”
In simple terms, models are growing exponentially while memory is barely keeping up.

Thirty years later, that prediction is playing out in real time inside every AI data center on Earth.
During AI inference, which is when the model is actually generating responses, the GPU’s memory bandwidth utilization hits 92%.
Meanwhile, the compute utilization plummets to 15%.
In other words, the most expensive processor in the building is doing nothing 85% of the time.
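A rough back-of-the-envelope calculation shows why. The sketch below uses approximate public H100 specs (roughly 3.35 TB/s of HBM bandwidth, roughly 990 dense FP16 TFLOPS) and a hypothetical 70B-parameter model at batch size 1; all figures are assumptions for illustration, not measurements.

```python
# Rough sketch: why AI inference is memory-bound, not compute-bound.
# Spec numbers are approximate and assumed for illustration only.

HBM_BANDWIDTH_TB_S = 3.35   # ~H100 SXM HBM3 bandwidth
PEAK_TFLOPS_FP16 = 990      # ~H100 dense FP16 throughput

params_billion = 70         # hypothetical 70B-parameter model
bytes_per_param = 2         # FP16 weights

# Generating one token streams every weight through the GPU once.
weights_gb = params_billion * bytes_per_param               # 140 GB per token
max_tokens_per_s = HBM_BANDWIDTH_TB_S * 1000 / weights_gb   # bandwidth ceiling

# Compute needed per token is roughly 2 FLOPs per parameter.
tflops_used = 2 * params_billion * 1e9 * max_tokens_per_s / 1e12
compute_utilization = tflops_used / PEAK_TFLOPS_FP16

print(f"Bandwidth-limited decode: ~{max_tokens_per_s:.0f} tokens/s")
print(f"Compute utilization at that rate: ~{compute_utilization:.1%}")
```

At batch size 1, the GPU’s compute sits almost entirely idle; batching requests amortizes the weight reads, which is how real-world utilization climbs toward that ~15% figure rather than a fraction of a percent. Either way, bandwidth is the binding constraint.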
And the solution is something called HBM, High Bandwidth Memory.
Instead of memory chips sitting on a board connected to the GPU through a regular data bus, HBM chips are stacked in layers and bonded directly next to the processor, creating data pipelines that move terabytes per second.
The analogy here is plumbing: regular memory is a garden hose connected to a fire hydrant.
HBM is the fire hydrant itself, bolted directly to the engine, and once you give the engine that much fuel, it starts demanding a lot more of it.
Every generation of NVIDIA’s AI chips requires exponentially more memory:
- H100: 80 GB.
- Blackwell B200: 192 GB.
- Vera Rubin: 288 GB of HBM4.
- Rubin Ultra: ~1 TB per chip.
That’s a projected 12x increase in memory per GPU in just a few hardware generations.
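That 12x figure checks out if you take Rubin Ultra’s “~1 TB” as 1024 GB (an assumption; the final capacity isn’t confirmed):

```python
# Per-GPU HBM capacities from the list above; Rubin Ultra's "~1 TB"
# is assumed to be 1024 GB for this rough check.
hbm_gb = {
    "H100": 80,
    "Blackwell B200": 192,
    "Vera Rubin": 288,
    "Rubin Ultra": 1024,
}
growth = hbm_gb["Rubin Ultra"] / hbm_gb["H100"]
print(f"H100 -> Rubin Ultra: {growth:.1f}x more memory per GPU")
```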
But the supply side is where things get truly wild.
HBM manufacturing is brutally intensive.
The process involves drilling thousands of microscopic holes through layers of silicon to connect the stacked chips.
Yields are lower (fewer chips produced actually work), precision requirements are higher, and each HBM chip devours roughly 3x the wafer capacity of a conventional memory chip.
So every HBM chip that rolls off the line is essentially three regular chips that never get made.
That creates a supply squeeze that ripples across the entire memory market.
And only three companies on the planet can play this game.
But here’s what makes the current moment different from every previous memory cycle:
The demand isn’t coming from just one place anymore.
At CES 2026, Jensen Huang stood on stage and declared that the “ChatGPT moment for physical AI” had arrived: machines that don’t just process language but perceive, reason about, and act within the physical world.
NVIDIA showed a Mercedes navigating San Francisco traffic without a human driver, powered by a new autonomous driving model called Alpamayo.
They unveiled Jetson T4000 edge compute modules for robotics.
They launched Cosmos, a system that generates synthetic physics simulations to teach machines the rules of reality.
And every single one of these systems, the self-driving cars, the robots, the smart glasses, the AI agents, is monstrously memory-hungry.
A self-driving car generates up to 40 TB of data per hour.
A fleet of just 20 autonomous vehicles returning to a depot daily produces 2 petabytes of data.
Estimates put the global autonomous vehicle market at roughly $2.2T by 2030.
Humanoid robots need to simultaneously process vision, touch, audio, spatial awareness, and environmental mapping, all in real time, all on-device, with zero tolerance for cloud lag.
You can’t have a surgical robot buffer for two seconds mid-operation.
And all of these demand vectors are hitting simultaneously, all competing for the same constrained supply of DRAM (the main memory used in computers and AI systems) and HBM.
The consequences are already showing up in your wallet.
Memory prices surged 80–90% in Q1 2026 compared to Q4 2025, DRAM, NAND, and HBM all at record highs.

The PC market is forecast to shrink 11.3%, and budget phones under $150 are becoming economically unsustainable because the memory inside them simply costs too much.
Jensen Huang put it bluntly: “The quantity of memory required for AI to be effective is rising significantly. Memory is crucial for the future of AI.”
Elon Musk went even further, warning that limited memory chip availability could be “one of the most significant obstacles to future growth” and suggesting Tesla might need to build its own memory fab.
Wink wink, he’s already working on something called “Terafab”.
When the CEOs of NVIDIA and Tesla are both saying the same thing, you pay close attention.
Because the entire future of AI depends on just a handful of companies.
HOW TO GET YOUR WEBSITE MENTIONED BY CHATGPT & CLAUDE!
With vibe coding, shipping products has never been easier.
But growth and distribution still work the same way they always have.
That’s exactly what Okara’s SEO & AI Visibility Agent is designed to fix.
Here’s how simple it is:
- Go to Okara’s SEO & GEO agent
- Enter your website
That’s it.
You’ll receive daily:
- A detailed SEO audit
- 5 high-potential recommendations to improve your SEO and GEO rankings
- The best ways to get mentioned by top AI platforms like ChatGPT, Claude, Grok, etc.
Try out Okara’s SEO & AI Visibility Agent today.

THE INVISIBLE BOTTLENECK OF THE AI ECONOMY (P2)
The global memory market is a three-player oligopoly: Samsung and SK Hynix in South Korea, and Micron in Idaho.
Together, they control roughly 93% of all DRAM production on Earth.

Micron is the odd one out.
Samsung and SK Hynix are backed by South Korea’s national industrial policy and have been for decades.
We actually just saw how fragile this ecosystem can be.
South Korea’s KOSPI had one of its worst weeks since 2008 as tensions around a potential Iran war rattled global markets.
Samsung and SK Hynix both plunged more than 10% at the open, dragging the entire semiconductor sector lower.
Part of the panic wasn’t just geopolitics; it was the supply chain.
The conflict raised fears that a prolonged disruption in the Strait of Hormuz could choke off critical materials used in semiconductor manufacturing.
One of the biggest concerns is helium, which is used to cool semiconductor wafers.
Qatar produces roughly 38% of the world’s helium, and shutdowns at several facilities temporarily wiped out nearly a third of global supply.
Energy costs are another pressure point. Semiconductor fabs run massive cleanrooms and cooling systems 24/7, so when oil spikes above $100 a barrel, the cost structure of the entire industry starts shifting.
It’s important to note that Micron is meaningfully less exposed than its South Korean rivals to this specific crisis.
Its U.S.-based operations, cheaper renewable energy, and geographic diversification provide a structural buffer.
However, it isn’t immune.
Global helium shortages, sector-wide semiconductor selling, and its Asian manufacturing footprint still create vulnerability, as evidenced by Micron’s roughly 8% stock decline during the same period.
The KOSPI eventually bounced back, but the episode was a reminder:
Two of the three companies that control the world’s memory supply sit in a country uniquely exposed to energy and supply chain shocks.
Micron is the only major memory manufacturer based in the United States, which makes it strategically critical in a world where memory chips have become one of the most important inputs for AI infrastructure.
That’s why the U.S. government handed the company $6.2B in CHIPS Act funding.
And why it’s building a $100B megafab in Clay, New York, set to become the largest semiconductor manufacturing facility in American history.
But in the HBM race specifically, the Big Three are not equal.
Here’s where each one stands.
SK Hynix is the incumbent champion; they control roughly 62% HBM market share.
UBS analysts estimate they’ve locked up roughly 70% of NVIDIA’s Vera Rubin HBM orders.
In 2025, their operating profit surpassed Samsung’s entire company for the first time.
That’s the corporate equivalent of your younger sibling pulling up in a nicer car than yours on Christmas.
Samsung is the wounded giant on a comeback tour.
They fumbled early, reportedly failing NVIDIA’s HBM3E qualification tests. (Failing in front of Jensen Huang is basically failing your driver’s test while your entire high school watches.)
But they’ve regrouped, and in February 2026, they were the first to ship HBM4.
They’re using a more advanced manufacturing process than either competitor, and they’ve explicitly said their strategy is to make HBM cheaper than anyone.
Their revenue from HBM is expected to triple in 2026.
Micron is the smallest of the three, holding roughly 21% HBM share.
But they’ve carved out a niche as the efficiency leader: their chips use roughly 30% less power than the competition’s.
As every data center on Earth slams into power constraints, that advantage keeps compounding.
They’ve also diversified beyond NVIDIA, shipping HBM to four major GPU and ASIC customers, including AMD.
The numbers that broke the model
Micron’s Q1 FY2026 wasn’t just good. It was the kind of quarter that makes analysts delete their spreadsheets and start over:
- Revenue: $13.6B (+57% YoY).
- Net income: $5.2B (+180%).
- Gross margin: 56.8%.
- Operating income: $6.4B (+182%).
- Free cash flow: $3.9B (record).
These are all great numbers, but the guidance is what made Wall Street lose its mind.
Q2 FY2026 outlook is $18.7B in revenue, and the Street was expecting $14.3B.
Gross margins are guided to 68%, and if Micron delivers on this guidance, a single quarter’s revenue would exceed the company’s entire fiscal year 2023 total.
Let me give you another number that might blow your mind.
In just 12 quarters, operating margins swung from -62.3% to +44.9%.

Let that sink in. But here’s the paradox that makes this stock so fascinating.
Despite all of this, Micron trades at roughly 12x forward earnings.
NVIDIA sits at 24x, AMD and Palantir north of 100x.
Micron is growing faster than most SaaS darlings and priced like it’s selling mattresses.
Why? Because the market has been burned by memory cycles so many times that it refuses to pay up, no matter what the numbers say.
The ghosts of cycles past
Investing in memory stocks is like dating someone who’s incredible on paper but has cheated on every partner they’ve ever had.
The numbers are perfect, the growth is explosive.
And your friends keep telling you: this time will be different.
It never is, or at least, it never has been.
Micron’s stock has suffered:
- An 88% decline in the 2008 financial crisis.
- An 82% decline in the dot-com bust.
- A 54% decline in the 2018 memory correction.
- 42% drawdowns during both COVID and the 2022 inflation shock.
In the last supercycle, 2017–2018, Micron’s trailing P/E fell to roughly 3x at the profit peak.
Not because profits were bad, but because investors had already priced in the collapse that was coming.
And they were right.
The pattern is always the same:
Sky-high profits trigger aggressive spending from all three producers, supply overwhelms demand, and the whole thing implodes.
So the entire investment thesis for Micron in 2026 comes down to six words:
Is the cycle actually, finally, broken?
If the answer is yes, Micron may be one of the most mispriced companies in the entire AI economy.
And if the answer is no, history says the collapse can be brutal.
Upgrade to PRO to access:
- The structural shift that could permanently change the memory cycle.
- Why HBM demand may keep tightening supply for years.
- The biggest risk that could send memory stocks crashing again.
- What Samsung, SK Hynix, and China mean for Micron’s future.
- My full verdict on whether Micron is a buy right now.

Summ (formerly Crypto Tax Calculator) is a tax software built specifically for crypto. Get started for free with Summ.
Mercuryo is the simple way to buy, sell, and spend crypto with low fees (yes, even with Apple Pay). Discover an easier way to use crypto.
Warbux is the easiest way to start trading crypto with zero deposits required. Get funded and start trading today!

BITE-SIZED COOKIES FOR THE ROAD 🍪
Alphabet awarded Google CEO Sundar Pichai a pay package worth up to $692M over three years. Most of the compensation is performance-based and tied to stock incentives.
Anthropic will challenge the Pentagon’s supply-chain risk label in court. The dispute stems from limits on military use of its AI.
Netflix is acquiring Ben Affleck’s AI filmmaking startup InterPositive. The company builds AI tools to help editors fix footage and post-production issues.
Tax season is just around the corner. If you’re not sure how to handle it, Summ is tax software built specifically for crypto.
