GM. This is Milk Road AI, where we track the capital, compute, and companies driving AI.
Here’s what we’ve got for you today:
- ✍️ The only AI question that matters.
- 🎙️ The Milk Road AI Show: By 2035, Most Grand Challenges Facing Humanity Will Be Solved w/ Dr. Alex Wissner-Gross.
- 🍪 Google fires back with next-gen TPUs.
Consensus Miami is one of the largest digital asset conferences that’s going all in on crypto and agentic commerce. Grab your passes at 20% off.

Prices as of 10:00 a.m. ET.

MODELS VS INFRASTRUCTURE: PICK YOUR SIDE
In 1869, the Suez Canal opened and quickly became the most important piece of infrastructure on Earth.
It didn’t matter what was being shipped, who built the cargo, or where it was going.
If you wanted to move goods between Europe and Asia without sailing around the entire continent of Africa, you paid the toll.
The canal’s owners didn’t need to be in shipping, trade, or manufacturing.
They just needed to own the shortest path between two points that the entire world needed to reach.
Every ship added profit, every route increased its power, and every workaround failed the math.
Every AI model today has the same problem a cargo ship had in 1869: it needs to get from training to deployment, and the shortest path runs through someone else’s infrastructure.
Amazon and Google just finished building the canal.

This month, Amazon committed up to $33B to Anthropic.
Anthropic gets $5B immediately, $20B more is on the way, and $8B was already in the water before anyone was paying attention.
Sounds like a massive bet on a single AI company, but it's not.
Buried in the fine print is the part that actually matters: in exchange for all that cash, Anthropic committed to spend $100B on AWS over the next decade.
Amazon writes the check, and Anthropic hands it straight back, plus interest.
And Amazon isn't alone: Google ran the exact same play and went even bigger.
Google confirmed a deal to invest up to $40B in Anthropic: $10B went in immediately, with another $30B to follow, contingent on performance milestones.
For context on how serious Google is about this relationship: before this week, Google's total investment in Anthropic was about $3B, for roughly a 14% stake.
They just decided that wasn't enough.
The deal isn't just about cash. It also includes a significant expansion of Anthropic's computing capacity, on top of the existing agreement giving Anthropic access to up to 1M Google TPU chips, worth tens of billions more.
In exchange? Anthropic trains its models on Google's hardware. And the timing is not a coincidence.
Google's $40B announcement came a week after Amazon's $5B drop.
Two of the three largest companies on earth, in the same week, wrote the biggest checks of their lives to the same AI lab.
So now Anthropic is simultaneously Amazon's best customer and Google's best customer.
Both are competitors and investors, while also serving as vendors to the same lab.
And both collect the toll regardless of who wins the AI race.
The only entity in this entire arrangement with any real leverage is Anthropic itself because it's the only one both sides are desperately competing to keep.
Claude is the only frontier model available across AWS, Google Cloud, and Microsoft Azure simultaneously.
Anthropic played the two biggest clouds in the world off each other and got both of them to build the canal while signing long-term shipping contracts.
That is not a bad negotiation.
Why are they both willing to write the check?
Here's the number that makes the whole thing make sense.
Anthropic's revenue increased from $1B to $30B in 15 months, marking the fastest revenue ramp in the history of enterprise software.

To put that in perspective: it took Salesforce 10 years to hit $1B in revenue.
Snowflake took 7. Even Slack, the darling of hypergrowth SaaS, needed nearly 4 years to cross that line.
Anthropic added $29B in incremental revenue in the time it takes most startups to close their Series B and hire a head of marketing.
Over 1,000 enterprise customers now spend more than $1M a year on Claude, and that number doubled in under two months.
Claude Code alone is sitting at a $2.5B annual run rate and now touches roughly 4% of all GitHub commits on the planet.
So when Amazon looks at that trajectory and writes a $5B check, they aren't gambling on a startup with a prayer and a good deck.
They're signing a long-term lease with a tenant who just proved they can fill the building, pay rent on time, and probably need a bigger building next year.
The equity stake is the handshake, and the $100B compute commitment is the actual deal.
The shift that makes the canal irreplaceable
Here is the part the market hasn't fully priced in yet, and the reason the canal only gets more valuable from here.
Right now, training a model (the upfront cost of teaching an AI everything it knows) accounts for about 35% of all AI compute.
Running the model (every time a user asks it something) accounts for the other 65%, and that ratio is only going one direction.
By 2026, two-thirds of all AI compute spending will be inference, the act of actually using the AI. By 2030, that market alone will hit $255B.

And here's the kicker: reasoning models, the new generation of AI that actually thinks through problems step by step before answering, consume roughly 150 times more compute per answer than a standard chatbot.
So the smarter AI gets, the more expensive each answer becomes, and the more infrastructure is required to deliver it.
The more capable the model, the more ships need to pass through the canal.
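A back-of-envelope sketch makes the shift concrete. The 150x reasoning multiplier comes from the numbers above; the per-query cost and query volume are purely illustrative assumptions, not figures from any lab.

```python
# Illustrative inference-economics sketch (assumed baseline cost and volume).
STANDARD_COST = 1.0          # compute units per standard chatbot answer (assumption)
REASONING_MULTIPLIER = 150   # reasoning answers use ~150x compute (from the text)

def total_inference_compute(queries: int, reasoning_share: float) -> float:
    """Total compute if `reasoning_share` of queries go to reasoning models."""
    standard = queries * (1 - reasoning_share) * STANDARD_COST
    reasoning = queries * reasoning_share * STANDARD_COST * REASONING_MULTIPLIER
    return standard + reasoning

baseline = total_inference_compute(1_000_000, 0.0)    # all standard queries
shifted = total_inference_compute(1_000_000, 0.10)    # just 10% reasoning queries
print(shifted / baseline)  # roughly a 15.9x jump in total inference compute
```

Even shifting a tenth of traffic to reasoning models multiplies total inference demand by an order of magnitude, which is the dynamic behind the canal getting busier, not quieter, as models improve.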
The combined 2026 AI infrastructure spend from just five U.S. companies is projected at $685B.

Google alone is spending roughly $180B this year, and that is a 97% year-over-year increase from a base that was already staggering.
They are not building this out of ambition or ego or fear of missing out.
They are building it because every dollar spent making AI smarter creates three more dollars of demand for the infrastructure required to run it.
The canal gets more valuable the more cargo there is to ship, and right now, the cargo is compounding.
THE ENTIRE INDUSTRY IS HEADING TO MIAMI
Where do you go to hear people talk about crypto, AI, and real capital?
If you want surface-level takes, that's probably my group chat.
But if you want institutions with deep pockets, you’ll have to head to Consensus Miami.
Consensus Miami is one of the largest digital asset conferences that’s going all in on crypto and agentic commerce.
Here are the key details:
- 20,000+ global attendees
- $4T AUM managed by finance giants attending Consensus Miami
- The ultimate intersection of crypto and AI
The best part?
You can get an exclusive 20% discount on passes with code MILKROAD.

MODELS VS INFRASTRUCTURE: PICK YOUR SIDE (P2)
Now here's where it gets interesting.
Everyone is screaming about which AI model is smarter, GPT-5 vs Claude, open source vs closed, benchmarks, leaderboards, researcher poaching, the whole circus.
But the hyperscalers have figured out something the rest of the market hasn't caught up to: it doesn't matter.
When GPT-5, Claude, and Gemini are all within striking distance of each other on the benchmarks that actually move enterprise purchasing decisions, and they are, the moat stops being the model.
The infrastructure layer becomes the moat.
Think about what happened in the Suez Crisis of 1956, when Egypt nationalized the canal and the world briefly lost its mind.
Ships didn't stop moving, economies didn't collapse, and trade didn't die.
The ships just had nowhere else to go that wasn't catastrophically more expensive, so they paid.
And the lesson every government and shipping company took away from that moment was simple: whoever owns the canal owns the leverage.
That is the position Amazon and Google are building toward.
Not the only route, just the only route that makes economic sense for everyone who needs to get somewhere.
Every enterprise deployment, every inference call, every token generated, all run through somebody's data center.
And right now, those data centers belong to the same people who funded the lab that built the model.
The loop is closed, and the canal is theirs.
Elon’s $60B AI move
While Amazon and Google were quietly locking in the infrastructure layer, Elon Musk made a very loud move in the same direction.
SpaceX announced a deal giving them the option to acquire Cursor, the AI coding tool that went from zero to $2B in annual revenue in under three years, for $60B.

If they don't pull the trigger on the full acquisition, they pay $10B just for the collaboration.
To understand why, you have to understand what Elon is actually assembling.
He has the model (xAI, Grok 4, which leads every major reasoning benchmark).
He has the compute (Colossus, 200,000 Nvidia GPUs humming in Memphis, scaling to 1M).
He has the distribution (X, 600M users and a real-time data pipeline that trains on everything posted every second).
He has the hardware layer (Tesla, the Digital Optimus project, the physical AI robots that need a brain).
The one missing piece was the product that puts all of it in developers' hands every single day.
The thing that makes engineers open their laptops in the morning and immediately reach for Elon's stack instead of someone else's.
Cursor is that product.

Over half the Fortune 500 already use it, and 1M developers open it daily.
It autonomously edits multiple files simultaneously, understands entire codebases, and proposes changes that span entire projects.
Enterprises are reporting 40 to 60% productivity gains, which is the kind of ROI that makes procurement sign off before the demo is finished.
But here's the part that makes this deal make sense for both sides: Cursor wants this just as badly as Elon does.
Because Cursor has a problem, a good problem, but a real one.
They are growing faster than they can serve, and the product is capacity-constrained: every new enterprise customer, every new developer, every new codebase they take on requires more compute than they currently have reliable access to.
You can have the best coding tool on the planet and still lose deals because your infrastructure can't keep up with demand.
Elon is sitting on the largest private GPU cluster in the world.
So this isn't just an acquisition; it's a supply deal disguised as a buyout.
Cursor gets unlimited compute headroom to scale as aggressively as the market will let them.
SpaceX gets the developer product it was missing, and Elon gets a distribution layer pointing millions of engineers directly at Colossus every single day.
The honest verdict
Here’s the thing nobody wants to admit in a market obsessed with model releases and benchmark leaderboards.
The AI model race is real, and the competition is genuinely fierce.
The benchmarks, the hires, the product launches, the drama, all of it matters, all of it is interesting, and none of it changes the underlying economic structure.
Because the model race and the infrastructure race are two completely different games.
And right now, the market is pricing in the model race while quietly undervaluing the infrastructure one.
The labs need capital and compute to stay at the frontier, and the clouds need AI workloads to justify the CapEx.
The chip makers need volume commitments to fund the next silicon generation, and every actor in this system is rationally optimizing within a loop that benefits all of them simultaneously.
And at the center of that loop, collecting a toll on every transaction that passes through, sit Amazon and Google.
They don't need a horse in the race; they own the track.
After the Suez Canal opened in 1869, it took about a decade for the full weight of its strategic importance to sink into the history books.
The initial headlines were about the engineering miracle, the ceremonial opening, and the dignitaries in attendance.
The part about permanent geopolitical leverage took longer to appreciate.
The AI infrastructure deals being signed right now in April 2026 are going to read exactly the same way in hindsight.
The canal is built, the ships are already lining up, and the tolls are already being collected.
The only question left is whether you own a piece of the waterway or whether you're just another ship paying to pass through.
Alright, that's it for this edition of Milk Road AI. We want to hear from you.
So where are you placing your bet, on the models or the infrastructure behind them?
- The models, GPT, Claude, and Gemini win the mindshare.
- The infrastructure, AWS, Google Cloud, chips, and compute.
- Both own the apps and the rails.

BY 2035, MOST GRAND CHALLENGES WILL BE SOLVED 🚀
In today's episode, we sat down with Dr. Alex Wissner-Gross, co-author of SolveEverything(.)org, to talk about how AI is shifting from boutique problem-solving to industrialized "bulk solving" of entire scientific disciplines.
Here's what you'll hear:
- What it actually means for a field to be "solved," and the L0 to L5 ladder tracking how disciplines get industrialized.
- Why math is the canary, and how the same playbook is set to hit physics, chemistry, materials, and biology next.
- The 2026 to 2035 timeline for moonshots like regenerative medicine, synthetic food, BCIs, and interspecies communication.
- Where investors should focus now: compute infrastructure, frontier labs, robotics benchmarks, and RealFi tokenization.
Hit play and don't miss this one 👇️
YouTube | Spotify | Apple Podcasts

Real Finance Blockchain is an EVM-compatible L1 that is built specifically for RWA tokenization. Read more about Real Finance Blockchain here.
Nexo is back in the U.S. - and new clients get 30 days of Wealth Club Premier perks! Higher yields, lower borrowing rates, and crypto cashback - start here.
Midnight is a fourth generation blockchain that just launched. Check out their launch announcement here.

BITE-SIZED COOKIES FOR THE ROAD 🍪
OpenAI drops GPT-5.5, pushing ChatGPT closer to being an AI superapp. Six weeks after GPT-5.4, the pace shows frontier labs are in a full sprint.
Meta signs a multibillion-dollar deal for millions of Amazon Graviton chips. CPUs are quietly becoming the backbone of agentic AI, not just GPUs.
Google Cloud unveils 8th-gen TPUs that train AI 3x faster than before. The new chips can link 1M+ into a single cluster, a direct shot at Nvidia.

MILKY MEMES 🤣


ROADIE REVIEW OF THE DAY 🥛
