GM. This is Milk Road AI, the place where we explain what happens when AI stops being a tech story and starts becoming a national security one.
Here’s what we’ve got for you today:
- ✍️ The AI arms race just took a dark turn.
- 🎙️ The Milk Road AI Show: The Industry AI Is Disrupting the Most (And Nobody Is Talking About It) w/ Kyle Reidhead.
- 🍪 MacBook prices jump as AI devours memory.
Okara’s Reddit agent finds high-intent threads 24/7 and writes on-brand comments that convert. Grow your business with Okara's Reddit Agent.

Prices as of 10:00 a.m. ET.

THE AI WAR JUST STARTED IN WASHINGTON
In 2016, the FBI had a problem.
They had the iPhone of a dead terrorist, but they couldn’t unlock it.
So they did what any powerful federal agency does in a crisis.
They sent an email to Apple basically saying, “Hey, quick favor?” And Apple said no.

The FBI escalated, Congress got involved, the president weighed in, and the entire nation picked a side.
Tim Cook called the requested backdoor the software equivalent of cancer, and the FBI called Apple’s refusal obstruction.
And the American public watched the biggest tech-versus-government showdown in a generation play out on live television.
(Spoiler: The FBI eventually found a workaround and dropped the case. The cancer was benign.)
Now, fast-forward to today, and history is rhyming.
But this time the stakes are nuclear. Literally.
The Pentagon just blacklisted Anthropic, the company behind Claude, labeling it a supply chain risk to national security.
The crime? Refusing to remove two ethical guardrails from its military contract:
- Guardrail #1: No mass surveillance of American citizens.
- Guardrail #2: No fully autonomous weapons that can kill without a human pressing the button.

That’s it, and that’s what started a war between Silicon Valley and the Department of Defense.
The backstory
Anthropic wasn’t some anti-military protest company burning flags in the parking lot.
They were the Pentagon’s favorite AI partner.
In late 2024, Claude became the first frontier AI model ever deployed inside U.S. classified military systems, running on Palantir’s secret-level platform.
By July 2025, the Pentagon signed a contract worth up to $200M to use Claude across classified networks for intelligence analysis, operational planning, cyber operations, and more.
The deal came with Anthropic’s acceptable use policy, which the Pentagon agreed to at signing.
Two restrictions: no mass surveillance of Americans, no autonomous weapons.
Simple enough. Everyone signed and shook hands.
Then things got very messy, very fast.
In January, U.S. special forces conducted a raid in Venezuela that captured former president Nicolás Maduro.
Dozens were killed, and reports surfaced that U.S. forces used Claude during the operation via Palantir’s systems.
When an Anthropic executive reached out to Palantir to ask about Claude’s role in the raid, the Pentagon took it as an act of war.

A senior administration official told Axios, “We are going to make sure they pay a price for forcing our hand like this.”
Translation: How dare a private company ask questions about how we’re using their technology to conduct lethal operations.
The Pentagon’s position is that a private contractor should not have anything resembling veto power over military operations.
Anthropic’s position is, in effect, “We literally told you the rules before you signed the contract.”
The ultimatum
Last week, the Pentagon demanded Anthropic renegotiate to allow the military to use Claude for “all lawful purposes,” without limitation.
They argue that mass surveillance and autonomous weapons are already prohibited by law and internal policy, so a private contractor shouldn’t pile on extra restrictions.
Defense Secretary Pete Hegseth called AI a wartime arms race and dropped a line so aggressive it belongs on a movie poster: “We will not employ AI models that won’t allow you to fight wars.”
But Anthropic held firm.
CEO Dario Amodei published a statement that read like a man with zero left to lose:
“These threats do not change our position: we cannot in good conscience accede to their request.”

He argued that AI-powered mass surveillance creates serious, novel risks to our fundamental liberties and that frontier AI systems simply aren’t reliable enough for fully autonomous weapons.
He offered to work with the Pentagon on R&D to improve reliability, but the Pentagon declined.
On February 27 at 5:01 PM, the moment the deadline expired, Hegseth designated Anthropic a supply chain risk to national security.
President Trump simultaneously ordered all federal agencies to stop using Anthropic technology, calling the company “Radical Left AI” and “left-wing nut jobs.”
(Because nothing says left-wing nut job like not wanting autonomous killer robots. Sure.)
Here’s the kicker: this designation had historically been reserved for foreign adversaries like Chinese tech firms.
It has never been used against an American company, and legal analysts immediately spotted the absurdity.
The government was simultaneously arguing that Claude is so vital it can’t tolerate any restrictions on it, yet so dangerous it must be purged from the entire defense supply chain.
Pick a lane, for Christ’s sake.
HOW TO GET YOUR FIRST 100 PAID USERS ON REDDIT
Most crypto projects grow their customer base on X.
But one of the most underutilized growth channels is Reddit.
That’s exactly the type of problem the Okara Reddit Agent is built to solve.
Here’s what you have to do:
- Go to the Okara Reddit Agent dashboard.
- Drop in your product URL and website.
- Add the keywords and topics your ideal customers care about.
The Okara Reddit Agent then works 24/7 to:
- Monitor thousands of communities
- Find threads where your product is the perfect fit
- Draft natural replies that you can post in one click
Grow your business with Okara's Reddit Agent.

THE AI WAR JUST STARTED IN WASHINGTON (P2)
Within hours of Anthropic’s deadline expiring, OpenAI CEO Sam Altman announced his own Pentagon deal to deploy AI on classified networks.
The timing was suspicious enough for Altman himself to address it:
“It was definitely rushed, and the optics don’t look good.”
(When the CEO admits the optics are bad, the optics are catastrophic.)
OpenAI’s deal accepted the Pentagon’s “all lawful purposes” language, which is exactly the clause Anthropic rejected.
But here is where it gets interesting.
OpenAI claims its technical architecture provides stronger protection than contract language.
They argue that by keeping everything cloud-based, the military physically can’t plug their models directly into weapons systems, sensors, or operational hardware.
Altman even endorsed Anthropic’s position publicly, saying AI shouldn’t be used for domestic surveillance or autonomous weapons.
Then he admitted, “We shouldn’t have rushed to get this out on Friday,” and that the issues are “super complex.”
By Monday, OpenAI was already amending the deal to add explicit language prohibiting domestic surveillance of Americans.
It was the corporate equivalent of running into a burning building to look like a hero, then immediately calling the fire department because the building was, in fact, on fire.
And this is where it gets really juicy for investors.
Users didn’t just have opinions; they voted with their thumbs.
Claude shot to #1 on the App Store. It had been sitting at #42 before the Super Bowl.

I’m still trying to figure out why Dick’s Sporting Goods is sitting at #3. I can’t remember the last time I walked into one of those stores.
Anyways, weekly U.S. downloads surged to 20x their January levels.
Daily sign-ups quadrupled, breaking records every single day of the last week of February.
Paid subscribers more than doubled year-to-date, and Anthropic actually reported outages from unprecedented demand.
Meanwhile, ChatGPT got absolutely wrecked, and U.S. uninstalls surged 295% in a single day.
One-star reviews spiked 775% on Saturday and grew another 100% on Sunday, and five-star reviews dropped 50%.
And it wasn’t just users.
Over 300 Google employees and 60+ OpenAI employees signed a joint petition titled “We Will Not Be Divided,” accusing the Pentagon of trying to split the tech industry with fear.
By Monday, 900 people had signed.
Over 100 Google AI engineers separately wrote an internal letter to Jeff Dean, head of Google DeepMind, urging the company to align with Anthropic’s position.
The investor angle
Now let’s talk money, because that’s why you’re here.
Despite the consumer revolt, the structural gap between the two companies is still massive.
ChatGPT has 900M weekly active users, ~60% U.S. market share, and roughly 50M paid subscribers.
Meanwhile, Claude is at ~30M monthly active users, ~3.5% U.S. market share, and paid subs doubling but undisclosed.
It might look like ChatGPT is way ahead, but that’s not quite the full story.
Anthropic is destroying the enterprise market.
Their share jumped from 24% to 40% in a single year. Over 500 customers are spending $1M+ annually.

Claude Code alone is generating $2.5B in annualized revenue.
Total revenue run rate is at $14B and has grown 10x in the past three years, and they are currently valued at $380B.

OpenAI is now valued at $730B after raising $110B from Amazon, SoftBank, and Nvidia.
Now here’s the thing Wall Street needs to watch: Anthropic just proved that ethics can be a growth engine.
Within days of the controversy, they launched a memory import tool letting users transfer their entire ChatGPT conversation history directly to Claude.
They extended memory features to free users for the first time.
They had literally mocked OpenAI’s decision to put ads in ChatGPT during a Super Bowl commercial just weeks earlier.
But sustainability is the real question.
Some of Claude’s growth might be a political sugar rush rather than genuine stickiness.
The final verdict
This is one of the most important stories in AI since ChatGPT launched.
Not because of the drama (though the drama is incredible).
Because it proves that frontier AI models have become critical defense infrastructure, on par with semiconductors and satellite networks.
Claude was the only frontier AI model on the Pentagon’s classified networks.
Removing it requires a six-month transition that will disrupt intelligence analysis, operational planning, and cyber operations across multiple agencies.
And the irony is thick enough to cut with a knife:
The Pentagon just blacklisted the company that has been the most aggressive at cutting off Chinese military-linked companies from using its technology.
Dario Amodei himself revealed that Anthropic gave up several hundred million dollars in revenue to block firms linked to the Chinese Communist Party and shut down CCP-sponsored cyberattacks.
The Pentagon’s reward for that loyalty? A blacklist and a tweet calling them “left-wing nut jobs.”
For investors, the play is nuanced.
Anthropic’s $380B valuation and $14B ARR make it one of the three most valuable private companies on Earth alongside SpaceX and OpenAI.
The supply chain designation adds regulatory uncertainty to its anticipated IPO, but if the courts overturn it (and legal analysts think they will), Anthropic walks away with both the contract and the brand halo.
OpenAI’s $830B target now includes a massive government revenue component as defense spending approaches $1.5T.
But consumer backlash is real, and 295% uninstall surges don’t just vanish overnight.
Palantir is the sleeper casualty here.
Their most sensitive military operations run on Claude, and they’ll need to negotiate with a rival provider.
Which is funny timing because I just dropped a pretty spicy Palantir report last week.
Now the question isn’t which AI is smarter, it’s who controls it and on what terms.
And right now, the market is telling you it cares about both.
That’s it for today. Now we turn it over to the jury (that’s you).
Who’s on the right side of history?
- Anthropic: Principles over Pentagon money.
- OpenAI: Someone has to work with the military.
- Neither: I’m moving to a cabin in the woods before the autonomous robots find me.

AI'S BIGGEST DISRUPTION: MORTGAGES 🏠
In today’s episode, we sat down with Kyle Reidhead to unpack the market flashpoint behind Block’s March 4, 2026 layoff news and what it really says about AI. From there, we zoom out into a practical framework for investing in AI, with a big focus on finance, fintech, and mortgages.
Here's what you'll hear:
- Block cut about 40% of staff, and why that headline is not a clean “AI took jobs” story.
- Two paths: invest in AI model builders, or in companies using AI to lift margins and unit economics.
- Front-facing AI features vs AI as a hidden margin lever that automates back-office work.
- Why finance and mortgages, plus pharma R&D, could see outsized gains, and where regulation adds risk.
Hit play and see for yourself 👇️
YouTube | Spotify | Apple Podcasts

Summ (formerly Crypto Tax Calculator) is tax software built specifically for crypto. Get started for free with Summ.
Mercuryo is the simple way to buy, sell, and spend crypto with low fees (yes, even with Apple Pay). Discover an easier way to use crypto.
Warbux is the easiest way to start trading crypto with zero deposits required. Get funded and start trading today!

BITE-SIZED COOKIES FOR THE ROAD 🍪
OpenAI released GPT-5.3 Instant, reducing the overly empathetic tone users complained about. The update focuses on more natural responses and less “preachy” language.
Anthropic added voice mode to Claude Code, enabling hands-free coding through spoken commands. The feature is rolling out gradually, starting with about 5% of users.
New MacBook Pro models are up to $400 more expensive due to a global RAM shortage. Rising AI demand for memory is pushing hardware prices higher.

MILKY MEMES 🤣


ROADIE REVIEW OF THE DAY 🥛
