AI with Kyle Daily Update 151
Today in AI: Trump gets mad
What’s happening in the world of AI:
Highlights
Trump Bans Anthropic From The Federal Government
Donald Trump has personally intervened in the Anthropic standoff. In a Truth Social post written in his signature all-caps style, he called Anthropic a "radical left woke company" and directed every federal agency to "immediately cease all use of Anthropic's technology."

Pete Hegseth (a name I’ve managed to mispronounce in every variation on my livestream…) followed up within two hours by officially designating Anthropic a supply chain risk, extending the ban beyond just the Department of War to any contractor or supplier that does commercial activity with Anthropic. This designation has never been applied to a domestic American company before. It's typically reserved for Chinese and Russian firms like Huawei.
But there's a contradiction buried in Trump's own post. He says "immediately cease" in one sentence. Then in the next, he gives a six-month phase-out period. These two things do not go together. This looks a lot like TikTok: a big dramatic ban announcement with a built-in get-out clause that buys time for a deal.
Honestly: not a bad move. Give this time to cool down and let it be sorted out in private, not in public.
The proof that this is more theatre than policy came within 48 hours. The US military reportedly used Claude in the Iran strikes that began on Saturday, just hours after Trump's ban. According to the Wall Street Journal and Axios, US Military Command used Claude for intelligence purposes, target selection, and battlefield simulations during the joint US-Israel bombardment. You don't rip out a piece of software hours before a military operation. It's simply not feasible.
Kyle's take: An AI company just went toe to toe with the President of the United States and the Department of War and hasn't been destroyed.
That's unprecedented.
That tells you everything about how important these companies have become. The six-month phase-out is the wiggle room. I think this plays out like TikTok: the deadline keeps getting pushed until a deal gets done.
One critical point from Ethan Mollick that people need to understand: the government does not have access to better AI models than you.
In fact they are probably using worse ones because new models require additional vetting and procurement. There is no secret AGI sitting in the Pentagon. They're using the same Opus 4.6 or possibly 4.5 that you get for $20 a month. When they used Claude for target selection in Iran, they were using the same tool you use to draft emails. That should make you think.
Source: Anthropic's statement | Mashable coverage | The Guardian on Iran strikes | BBC coverage | Hegseth's tweet
OpenAI Swoops In: "Hold My Beer"
Literally hours after Trump's Anthropic ban, OpenAI published a statement announcing they'd reached an agreement with the Department of War for deploying AI in classified environments. The timing was not subtle.
Their statement claims three red lines, one more than Anthropic had. No mass domestic surveillance. No directing autonomous weapon systems. No high-stakes automated decisions like social credit systems (the fact they even mentioned social credit is fascinating and slightly alarming…that wasn’t even on the table before!).

The key differences from Anthropic's original deal: OpenAI insists on cloud-only deployment, meaning the Department of War can't tinker with the models locally or put edge versions onto autonomous weapons. They keep their safety stack. Cleared OpenAI personnel stay in the loop. And they've published actual contract language. Full details here.
But look at the wording carefully. The contract says AI systems won't be used to direct autonomous weapons "in any case where law, regulation, and department policy require human control." The Department of War sets its own policy. And Trump can get the other parts changed if he wants to… Change the policy, and suddenly autonomous weapons are permitted under the same contract. The surveillance protections reference the Fourth Amendment and specific security acts, which is stronger. But the autonomous weapons clause has significant wiggle room.
OpenAI's FAQ asked directly: why could you reach a deal when Anthropic could not? Their answer: they don't know! They speculate their contract provides better guarantees. They also asked the government to resolve things with Anthropic and offer the same terms to all labs.
Kyle's take: Credit to Sam Altman for publicly supporting Dario Amodei, because these two do not like each other. They literally refused to hold hands at the Delhi AI summit while every other leader on stage linked up.
Why buddies?
Sam needs Anthropic to survive. If Anthropic goes down, it could pop the entire AI investment bubble. Both companies run on external investment. Both are burning cash. Confidence in AI as an investment category matters more than which company gets the Pentagon contract. It’s better that money keeps flowing into the industry even if Dario is getting some of it!
Do I think OpenAI's deal is genuinely safer? Maybe on paper. But the contract language around autonomous weapons gives them room to comply with future policy changes that remove human control requirements. Anthropic's approach was blunter. OpenAI's approach is lawyerly: say the right things, build in flexibility. Which one actually protects more people? I honestly don't know.
Source: OpenAI statement | OpenAI tweet
Claude Hits #1 on the App Store as #CancelChatGPT Trends
In the fallout from Trump's ban and OpenAI's Pentagon deal, a Cancel ChatGPT movement has taken off. People are leaving ChatGPT for Claude in protest of OpenAI's eagerness to fill Anthropic's shoes.
The result: Claude hit number one on the US App Store for the first time ever. The top five apps in the US are now three AI chatbots, Temu, and Free Cash. As one commenter noted, that feels like a recession indicator.
In the UK, Claude reached number three. Katy Perry signed up and tweeted about it. After the Super Bowl advert a month ago where the main takeaway from market research was confusion because nobody knew what Claude was, Anthropic is suddenly a household name. Not because of marketing. Because of politics!
The irony is thick. Claude has never been a consumer product. It does text and code. That's it. No image generation, no video, no proper voice assistant, no app store. ChatGPT has 900 million users and is approaching a billion. Claude has tens of millions. But suddenly, being the company that told Trump to shove it is the best marketing they've ever had.
The problem: Claude went down during the surge! Login paths broke. Of all the days for an outage, this is the worst possible one. Millions of potential new users are downloading Claude for the first time, and their first experience is it not working. That's damaging. The API stayed up, which suggests it's the consumer infrastructure buckling under unprecedented demand. They've never had to handle this kind of traffic before. A good problem to have, sure, but a rough onboarding for new users!
Kyle's take: This is Claude's moment and they need to not blow it. The goodwill from standing up to the government is enormous, but goodwill evaporates fast if new users can't log in. There's also a real question about whether AWS (Amazon) will continue hosting them given the supply chain risk designation. Amazon has a massive investment in Anthropic but also massive government contracts to protect. That tension is going to be uncomfortable.
Switching to Claude? Here's How to Bring Your Memories
If you're making the move from ChatGPT to Claude, the biggest barrier has always been memory. If you've used ChatGPT for a year or more, it knows your businesses, your preferences, your communication style. Starting fresh on Claude means losing all of that context and getting worse results until it learns you again.
Claude has built a memory import tool specifically for this moment. Go to claude.com/import-memory, export your ChatGPT memories, and import them into Claude's persistent memory system. Claude launched auto memory just last week, so the timing feels deliberate. Did they see this wave coming?
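If you'd rather inspect or reshape your memories before importing them, exports like this usually arrive as structured JSON. Here's a minimal sketch of flattening exported memory entries into a single readable block you could review or paste in manually. The field names (`memory`, `created`) and the file layout are hypothetical, not the real export format, so adjust to whatever the actual file contains:

```python
import json

# Hypothetical ChatGPT memory export: a list of entries with text and a date.
# The real export format may differ; this is an illustrative reshaping only.
export = json.loads("""
[
  {"memory": "User runs a small SaaS business", "created": "2025-03-01"},
  {"memory": "Prefers concise, bullet-point answers", "created": "2025-06-14"}
]
""")

# Flatten into a dated, human-readable block suitable for reviewing
# or pasting into another assistant's persistent-memory field.
lines = [f"- ({entry['created']}) {entry['memory']}" for entry in export]
memory_block = "Things to remember about me:\n" + "\n".join(lines)
print(memory_block)
```

The point of the intermediate step is control: you can prune stale or sensitive entries before they land in a new assistant's memory, rather than importing a year of context wholesale.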
For those completely new to Claude, Anthropic has free learning courses at anthropic.skilljar.com. Claude 101, AI fluency, Claude Code, MCP, courses for educators and students. You get a certificate of completion. All free.
Kyle's take: Persistent memory is the stickiness factor. It's the only reason I kept using ChatGPT alongside Claude for so long. Now that Claude has auto memory and a migration tool, the last barrier is gone. If you're thinking about switching, this is the easiest it's ever been. Just be aware Claude might be struggling under the load for the next few days.
Source: Claude memory import | Anthropic courses
Streaming on YouTube (with full 4K screen share) and TikTok (follow and turn on live notifications).