GM. This is Milk Road AI, where we track who's funding, building, and owning the AI stack.
Here’s what we’ve got for you today:
- ✍️ The overlooked layer of AI infrastructure that's quietly printing money.
- 🎙️ The Milk Road AI Show: AI Will Run the Economy… Here’s What to Buy Before It Happens (Full Portfolio Reveal).
- 🍪 Elon is building his own chip factory.
Shareland gives users direct access to US housing markets. Trade real estate just like stocks with Shareland.

Prices as of 10:00 a.m. ET.

THE INVISIBLE BACKBONE OF THE AI GOLD RUSH
In Formula 1, the entire world watches the driver.
Lewis Hamilton, Max Verstappen, the 5G camera angles, and the catastrophic pit stop drama that keeps everyone on edge.
But nobody watches the tires.
Yet Pirelli, the Italian tire company, has been at every Grand Prix for over a decade as F1's exclusive tire supplier.
They don’t need to win the race, wear a helmet, or land a sponsorship deal to matter.
They just need to make sure that not a single car on the entire grid can race without them.
That is exactly what is happening right now in AI infrastructure.

Everyone is watching the GPUs, the chips, the trillion-dollar data centers, and Jensen Huang’s leather jacket collection.
But nobody is watching the fiber.
While the AI world obsesses over semiconductors, the real constraint is something far less visible.
It doesn’t have a keynote, a celebrity CEO, or any hype, but everything depends on it.
It has lasers, fiber, and tiny modules roughly the size of a pack of gum. Except these don’t freshen your breath; they move the internet.
And at GTC 2026 this week, Jensen just handed you the clearest roadmap you are ever going to get for why this boring, overlooked corner of the market might be the most important trade of the next two years.
So, what the heck is an optical transceiver?
A GPU is completely useless alone.
When you train an AI model at scale, you are doing it across thousands of chips simultaneously, all running in lockstep.
Those GPUs need to communicate constantly, every millisecond, bidirectionally, in sync, without a single dropped packet.
They can’t communicate through air, and passive copper isn’t viable either, maxing out at just 2–3 meters at 800G and under 2 meters at 1.6T speeds.
(I know. Embarrassing for copper, it had a good run.)
So they communicate through light.
Optical transceivers are tiny, gum-stick-sized modules, one at each end of a fiber-optic cable, that convert electrical signals into pulses of light, shoot them down the fiber, and convert them back on the other side.
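As a toy mental model (nothing like real hardware, just the idea), a transceiver pair is an encoder and a decoder: electrical bits become on/off light pulses, and the far end turns them back into bits.

```python
# Toy model of a transceiver pair: simple on/off keying, where a 1-bit
# becomes a light pulse and a 0-bit is darkness. Real modules do vastly
# more (modulation, error correction, clock recovery); this is just the idea.
def to_light(bits):
    """Electrical -> optical: one pulse per 1-bit."""
    return ["pulse" if b else "dark" for b in bits]

def to_electrical(pulses):
    """Optical -> electrical: recover the bits on the far end."""
    return [1 if p == "pulse" else 0 for p in pulses]

data = [1, 0, 1, 1, 0]
assert to_electrical(to_light(data)) == data  # the round trip is lossless
```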

Sounds simple, but it isn't.
A single AI training rack with 16 H100 GPUs generates over 400 gigabits per second of constant, relentless data traffic:
- Blackwell rack: over 12 terabits per second.
- Vera Rubin system: 260 terabytes per second of all-to-all bandwidth.
To put that in perspective, Jensen revealed at CES in January, and confirmed again at GTC last week, that the Vera Rubin system delivers what he described as “more bandwidth than the entire internet”.
To support that level of bandwidth, every single bit of data has to travel as light, through transceivers, down fiber, into more transceivers, and back into chips.
And here is the kicker that almost everyone misses: AI data centers require approximately 36 times more fiber than a traditional CPU-based system.
This is the circulatory system of a $1T+ buildout.
And the companies selling the blood vessels have been hiding in plain sight on public exchanges, growing 80–200% per year while the rest of the world stares at NVDA’s chart.
THE SIGNAL YOU CANNOT IGNORE
Here is where it gets genuinely interesting.
Two weeks before his GTC 2026 keynote, Jensen Huang wrote two $2B checks.
One to Coherent Corp COHR, the other to Lumentum LITE, both optical component manufacturers that make lasers and fiber-optic transceivers.
In 2023-2024, the critical bottleneck for AI infrastructure wasn't the GPU itself; it was CoWoS, the advanced packaging technology used to stack HBM memory onto the chip.
TSMC had a near-monopoly on the process, supply was constrained, and anyone who hadn’t secured CoWoS capacity early found themselves waiting months for GPU deliveries.
NVIDIA just looked at their 2026-2027 roadmap, identified the next CoWoS, and went and bought the supply chain before the shortage hits.
That component is InP laser fabrication, the indium phosphide-based laser chips inside high-speed optical transceivers, which is heading toward a critical supply crunch as 800 gigabits per second demand explodes and 1.6 terabits per second ramps simultaneously.
Coherent's stock jumped 15% the day the deal was announced, while Lumentum's jumped 12%.
Both companies received not just equity investment but multi-year purchase commitments, billions of additional dollars in guaranteed orders on top of the cash.
But to understand why the money went to optics specifically, you need to understand what Jensen is actually building underneath all of it.
Right now, optical transceivers are pluggable.
You plug them into the outside of a switch, like a USB stick. They work great, but they burn power like a teenager burns through a phone battery.
A 128-port switch running current pluggable transceivers burns over 2,000 watts just in the optics layer before a single GPU is powered on.
At a 200,000-GPU cluster scale, where the major hyperscalers operate today, the optical layer alone consumes 17 megawatts.
And that’s enough electricity to power roughly 12,700 average American homes, just converting signals to light and back.
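A quick back-of-envelope check on those figures. The ~16 W per pluggable 800G port and the average household draw are my assumptions, not numbers from this piece:

```python
# Sanity-checking the optics power math above.
ports_per_switch = 128
watts_per_port = 16        # assumed: typical pluggable 800G module draw
switch_optics_w = ports_per_switch * watts_per_port  # 2,048 W, the ">2,000 W"

cluster_optics_mw = 17     # the article's figure at 200,000-GPU scale
avg_home_kw = 1.34         # assumed: rough continuous draw of a US household
homes_powered = cluster_optics_mw * 1_000 / avg_home_kw  # roughly 12,700
print(switch_optics_w, round(homes_powered))
```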
Co-Packaged Optics (CPO) fixes this by fusing optical engines directly onto the same substrate as the switch chip, eliminating signal loss across copper traces.
Power drops 65–73% per port, which adds up to 15–17 megawatts saved at the cluster scale.
Jensen confirmed at GTC that NVIDIA's Spectrum-6 SPX, the company's first CPO switch, is now in production.
And the Feynman platform in 2028 takes it further: optical NVLink for the first time in NVIDIA's history, expanding GPU compute domains from 72 chips to 576 to 1,152, all connected by light rather than copper.
For the entire history of NVIDIA's GPU interconnect, scale-up has been copper, always copper. Feynman ends that.
The $4B was essentially NVIDIA locking in its Pirelli before the season even started, before anyone else on the grid realized the tire compound had changed.
AI-POWERED REAL ESTATE AGENT
Wouldn’t it be cool if you could buy a slice of a New York home in the morning and then sell it in the evening?
That’s exactly what Shareland is building.
Tycoon is Shareland’s AI-powered real estate agent that trades real estate just like stocks.
Here are their key USPs:
- Trade real estate without owning the property
- Use their AI-powered agent to inform your investment strategy
- Track housing markets at both the city and neighborhood level
And you don’t need large capital for this; you can start with as little as $1.
Trade real estate just like stocks with Shareland.

THE INVISIBLE BACKBONE OF THE AI GOLD RUSH (P2)
Let's talk about the market, because the numbers have gotten genuinely offensive.
The global optical transceiver market grew from $12.6B in 2024 to an estimated $18–23B in 2025, roughly 50–80% growth in a single year.
It's projected to clear $25B in 2026, with the AI-specific segment doubling from $5B to $10B in that same window.
And that's before Jensen's new demand forecast enters the equation.
At GTC this week, Huang doubled his AI infrastructure demand projection from $500B to $1T through the end of 2027.

If optical networking captures even a conservative 5% of that spend (historically it runs 3-5% of total data center capex, and AI's bandwidth intensity is actively pushing that ratio higher), you're looking at $50B in cumulative optical demand through next year.
Most analyst models are projecting closer to $25-30B.
Either Jensen is wrong about the trillion, or the models are underestimating by a factor of two.
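Spelled out, the back-of-the-envelope math in those two paragraphs looks like this, using only the figures quoted above:

```python
# The capex gap above, in one calculation.
buildout_b = 1_000        # Jensen's $1T projection, in billions, through 2027
optics_share = 0.05       # top of the historical 3-5% of data center capex
implied_optics_b = buildout_b * optics_share    # $50B of optical demand

analyst_consensus_b = 27.5                      # midpoint of the $25-30B models
gap = implied_optics_b / analyst_consensus_b    # ~1.8x, the "factor of two"
print(implied_optics_b, round(gap, 1))
```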
One of those has historically proven to be the better bet.
Now for the part everyone actually wants: who wins?
Coherent (COHR) and Lumentum (LITE) are the most obvious beneficiaries.
Multi-year purchase commitments, new fab funding, and direct supply chain integration into NVIDIA’s CPO roadmap.
LITE's CEO already said they are sold out through the end of 2027 before the new Greensboro, North Carolina fab is even ramped.
The company is targeting $8B in annualized revenue within 18–24 months.
They did $1.65B in their last full fiscal year, and their current revenue run rate is already above $2B and accelerating.

That is roughly a 5x in less than two years. Either they are completely delusional, or they know something the market doesn't.
The $2B NVIDIA check suggests the latter.
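For what it's worth, here's the arithmetic behind that "5x," using only the figures above:

```python
# Lumentum's implied growth, from the numbers quoted in the text.
last_fy_revenue_b = 1.65   # last full fiscal year, $B
target_revenue_b = 8.0     # management's 18-24 month annualized target, $B
multiple = target_revenue_b / last_fy_revenue_b   # ~4.8x, the "5x"

run_rate_b = 2.0           # current annualized run rate, $B
years = 2.0
# Growth rate needed to get from the run rate to the target in two years:
implied_cagr = (target_revenue_b / run_rate_b) ** (1 / years) - 1  # 100%/yr
print(round(multiple, 1), implied_cagr)
```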
And it’s not just NVIDIA making aggressive bets here: Applied Optoelectronics AAOI just closed a $4B 800G deal with Amazon and landed a $200M+ order for 1.6T transceivers in March.
Management is targeting $1B in 2026 revenue from a $455M base.
High risk but very high reward if they execute.
If AAOI is the high-beta bet, Credo Technology CRDO is the outlier, with revenue growth now pushing past 200% at 68.5% gross margins, while almost nobody in the mainstream AI investing conversation is discussing it.

Credo makes active electrical cables (AECs) that fill the gap between expensive optical fiber and insufficient passive copper for the 3-7 meter links inside AI clusters, the "Goldilocks zone" that represents a majority of connections by count in a large cluster.
The AEC market is projected to grow from $1.2B in 2025 to over $7B by 2030.
Credo is targeting 75% market share, making it the picks-and-shovels play within the picks-and-shovels play.
And if Credo connects the system, Marvell MRVL powers it, designing the DSP chips that sit inside virtually every high-speed transceiver from virtually every manufacturer.
The DSP handles error correction, signal equalization, and encoding, and accounts for roughly 50% of total module power.
Their Nova chip was the first to enable 1.6T data rates, and Marvell's FY2028 revenue target is $15B, up from $8.2B today.
The one giant asterisk
Here is the uncomfortable thing sitting in the middle of all of this.
There is a company called InnoLight you have almost certainly never heard of, and it controls over 50% of NVIDIA's current 800G optical transceiver wallet share.
Chinese manufacturers collectively control 40-50% of all global transceiver shipments.
They are cheaper, faster to scale, and operating at a level that Western competitors genuinely cannot match right now.
Eoptolink, another Chinese transceiver maker you've probably never heard of, grew revenue by 179% in 2024 and jumped from #7 to #3 in global market share rankings in a single year.
These are not scrappy underdogs making commodity products but rather world-class engineering organizations with government backing and manufacturing velocity that Western competitors are genuinely struggling to match.
That is exactly why the NVIDIA-Coherent-Lumentum deal reads less like a strategic investment and more like a geopolitical insurance policy.
NVIDIA is building a U.S.-based supply chain for the most critical non-GPU component in AI infrastructure.
Coherent's new InP wafer fab is in Sherman, Texas, while Lumentum's new laser fab is in Greensboro, North Carolina.
Both are funded with NVIDIA capital, and both are timed precisely for the CPO transition, where InP laser supply becomes the next binding constraint.
Jensen is doing to optical transceivers what the CHIPS Act tried to do for semiconductor fabrication: building domestic capacity before someone decides to make access to the foreign supply strategically complicated.
This is either visionary supply chain thinking or very expensive paranoia.
Given that the guy accurately called the GPU shortage two years before it happened, I know which way I'm leaning.
The verdict
Everyone in the investment world right now is tripping over each other to buy more NVIDIA.
And fine, NVIDIA is great, nobody is arguing. I have a chunk of my portfolio in NVDA, the roadmap is extraordinary, and the leather jacket is iconic.
But Jensen Huang just wrote $4B in checks to optical component companies and publicly announced a $1T AI infrastructure buildout through the end of next year.
Maybe, just maybe, it's worth spending five minutes thinking about what those checks are actually buying.
The AI supercycle runs on GPUs, GPUs run on light, and light runs on transceivers.
Without transceivers, there are no training runs; without training runs, there is no ChatGPT; and without ChatGPT, there is no $1T.
You don’t need to pick the winning driver, you just need to own the tires.
So the real question is, out of all of these, which ones am I actually buying?
I'll be giving my full breakdown in the Milk Road PRO Discord later today.
And if you’re wondering whether it’s actually worth it, I recently called Micron MU, and it played out almost exactly as expected, up over 11% and at one point pushing past 20%.

You can join us for 14 days for just $1, and see exactly what we’re buying before the rest of the market catches on.
At the very least, you can bully me in Discord.
Alright, that’s it for this edition of Milk Road AI. We want to hear from you.
Are you grabbing the $1 deal?

AI RUNS THE ECONOMY, BUY THIS NOW 🤖
In last Wednesday’s episode, we sat down with Kyle to break down his personal portfolio reveal and the big thesis behind it: AI managing capital onchain, and what that could mean for the assets he’s buying now.
Here’s what you’ll hear:
- Kyle walks through his portfolio as part of Milk Road PRO, including live analyst portfolios, trade alerts, and Discord access.
- Why his two biggest positions are Tesla and Coinbase, with Tesla as full-stack AI and Coinbase as onchain finance rails.
- The next tier of holdings, Bitcoin, NVIDIA, and Apple, framed as scarcity or moat assets tied to compute and distribution.
- Crypto, AI, and macro risk, including staked ETH, Aave, Galaxy Digital, a yield cash buffer (susd), plus oil and geopolitics as a de-risk trigger.
Hit play and see for yourself 👇️
YouTube | Spotify | Apple Podcasts

Hackers are using prompt injections to break into crypto users' accounts. Don’t be a victim! Use Okara, a private, safe, and encrypted chat app. Use the code MILKROAD to get a 20% discount.
Summ (formerly Crypto Tax Calculator) is tax software built specifically for crypto. Get started for free with Summ.
Nexo is back in the U.S. - and new clients get 30 days of Wealth Club Premier perks! Higher yields, lower borrowing rates, and crypto cashback - start here.

BITE-SIZED COOKIES FOR THE ROAD 🍪
Elon Musk plans a “Terafab” chip facility with Tesla and SpaceX to meet growing AI demand. He says building chips is necessary as the current supply can’t keep up.
Cursor admitted its new Composer 2 model was partly built on Moonshot AI’s Kimi. The company says most of the training was its own, despite the open-source base.
Jeff Bezos’ Blue Origin is planning a space-based data center with over 50,000 satellites. The goal is to shift AI computing to orbit using solar power and reduce strain on Earth.
Tax season is just around the corner. If you’re not sure how to go about it, Summ is tax software built specifically for crypto.

MILKY MEMES 🤣


ROADIE REVIEW OF THE DAY 🥛
