AI with Kyle Daily Update 147
Today in AI: Anthropic declare AI war
What’s happening in the world of AI:
Highlights
The Distillation Wars: Espionage or Efficiency?
Anthropic dropped a bombshell yesterday. They're accusing three of the biggest Chinese AI labs of conducting what they call "industrial scale distillation attacks" on Claude.
DeepSeek, Moonshot (makers of Kimi), and MiniMax are all named. The claim: 24,000 fraudulent accounts, 16 million exchanges, and a coordinated campaign to steal Claude's capabilities and strip out safety guardrails.
Big accusations. But when you start picking at the numbers, things get a lot more complicated. I put together a full breakdown of what's happening, what distillation actually is, and why I think Anthropic are pushing a narrative that doesn't quite match the evidence.
What Is Model Distillation?

First, let's make sure everyone understands what distillation means, because it's not some exotic hacking technique. It's actually a completely normal part of how lots of AI models are built.
You have a teacher model, a big powerful one like Claude. You ask it lots of questions and collect the answers. Then you use those prompt-response pairs to train a smaller, faster student model. The student learns the patterns of the teacher. It's like making a condensed revision guide from a massive textbook.
Every major lab does this with their own models. Anthropic distils Claude into smaller versions of Claude. OpenAI distils GPT into smaller versions of GPT. Totally standard practice.
One limitation is that distillation can only extract what's already there. Think of it like panning for gold. You're shaking the sand away and keeping the gold, but you can't create new gold that wasn't in the pan to begin with. A distilled model can never surpass the teacher it was trained on. That's a fundamental ceiling.
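To make the teacher-student idea concrete, here's a toy Python sketch of the distillation pipeline. Everything here is a made-up stand-in: a real setup would call an actual model API for the teacher and fine-tune a small neural network for the student, rather than memorising a handful of pairs.

```python
# Stand-in for a big "teacher" model. In reality this would be an API
# call to something like Claude; here it's just a canned function.
def teacher_model(prompt: str) -> str:
    answers = {
        "What is 2+2?": "4",
        "Capital of France?": "Paris",
    }
    return answers.get(prompt, "I don't know.")

# Step 1: query the teacher and collect prompt-response pairs.
prompts = ["What is 2+2?", "Capital of France?"]
training_pairs = [{"prompt": p, "response": teacher_model(p)} for p in prompts]

# Step 2: "train" a student on those pairs. A real student is a smaller
# network fine-tuned on the pairs; this toy student just memorises them.
student = {pair["prompt"]: pair["response"] for pair in training_pairs}

# The ceiling: the student can only reproduce what the teacher showed it.
print(student.get("What is 2+2?"))       # answer learned from the teacher
print(student.get("Capital of Japan?"))  # never asked, so nothing learned
```

The last line is the "panning for gold" point in miniature: anything the teacher was never asked simply isn't in the student.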
Distillation is normal. BUT distilling someone else’s model? That’s where this gets more fishy.
The Accusation
Anthropic are accusing the Chinese labs of carrying out a “distillation attack”. That’s new. I’ve not actually heard the term before honestly!
OpenAI accused DeepSeek of this before, saying that DeepSeek R1 was distilled from o1. Anthropic are building on this, saying that it's getting more overt and aggressive.
Here's what Anthropic claims. Three Chinese labs created over 24,000 fraudulent accounts and generated 16 million exchanges with Claude to extract its capabilities. The breakdown by lab:

DeepSeek allegedly had 150,000 exchanges, accused of stealing reasoning capabilities.
Moonshot (Kimi) had 3.4 million exchanges, accused of extracting tool-use data.
MiniMax had 13 million exchanges, accused of cloning agentic workflows.
Now, Anthropic isn't just calling this a terms of service violation. They're framing it as a national security threat. Their blog post talks about bioweapons, military intelligence, surveillance systems, and the need for coordinated action from "industry players, policy makers, and the broader AI community."
That's a direct appeal to Washington and Congress. It's directly saying this is the USA vs. China. Not just Anthropic vs. DeepSeek.
This is Anthropic saying: give us more money, restrict Chinese models, and help us fight this existential threat. This is why this news is more than it initially seems. It’s the AI Cold War heating up.
The Numbers Don't Add Up
This is where it gets a bit odd. All credit to Theo for this analysis.
Heads up: Theo is famously anti-Anthropic! He's been calling them out for a while. That's important context, but his analysis is sound.
He basically calls out the numbers Anthropic are throwing around.
Let's start with DeepSeek's 150,000 exchanges. That sounds like a lot if you don't know how these systems work. But one user request does not equal one exchange.

When you send a single prompt to a reasoning model, there's a chain of events behind the scenes. The model receives your prompt, makes a tool call, gets a result back, makes another tool call, reads some files, processes those, and then returns your answer. One prompt can easily generate 10, 20, or even 50 exchanges. If you use Claude Code, you can watch this happening in real time.
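To see how one prompt fans out into many exchanges, here's a minimal simulation. The counting scheme (one exchange per round trip) is my assumption for illustration, not Anthropic's published methodology.

```python
# Toy simulation of an agentic request: one user prompt triggers a loop
# of tool calls, and every round trip counts as a separate "exchange".
def run_agent(prompt: str, tool_calls_needed: int) -> int:
    exchanges = 1                       # the model receives the prompt
    for _ in range(tool_calls_needed):
        exchanges += 1                  # model issues a tool call
        exchanges += 1                  # tool result comes back to the model
    exchanges += 1                      # final answer returned to the user
    return exchanges

# A single "read some files and fix the bug" style request with 20 tool calls:
print(run_agent("fix the failing test", tool_calls_needed=20))  # 42 exchanges
```

One prompt, 42 exchanges. That ratio is exactly why raw exchange counts overstate how many humans (or scripts) were actually talking to the model.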

So 150,000 exchanges might represent a few tens of thousands of actual prompts. That's not a state-level espionage campaign. Theo from T3 Chat pulled up his own data during his video breakdown and showed that his relatively small chatbot routing service sees about 4 million exchanges per month. A small startup generates roughly 130,000 exchanges in a single day. Anthropic is flagging a volume of traffic equivalent to one day of a small startup's operation and calling it an attack.
There are also perfectly innocent explanations for this traffic. Running the full SWE-bench benchmark suite involves 2,294 tasks. With agentic tool loops averaging about 50 calls per task, a single benchmark run generates around 115,000 exchanges. DeepSeek's entire "attack" volume fits within the margin of a standard internal benchmark test.
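The back-of-envelope sums above are easy to check. This sketch just reruns the arithmetic from this section; the input figures are the ones quoted here (Anthropic's and Theo's numbers), not independently verified.

```python
# Benchmark arithmetic: tasks * tool calls per task.
tasks = 2294                  # tasks in the SWE-bench suite
calls_per_task = 50           # rough average agentic tool calls per task
benchmark_exchanges = tasks * calls_per_task
print(benchmark_exchanges)    # ~115,000 exchanges for one benchmark run

# Scale comparison against Theo's routing-service traffic.
deepseek_exchanges = 150_000  # Anthropic's figure for DeepSeek
theo_monthly = 4_000_000      # Theo's small service, exchanges per month
theo_daily = theo_monthly / 30
print(theo_daily)             # ~133,000 exchanges per day

# DeepSeek's entire alleged "attack" is roughly one day of a small
# startup's traffic, and roughly one benchmark run's worth of calls.
print(deepseek_exchanges / theo_daily)
```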

And with MiniMax, the situation is even murkier. MiniMax makes up the VAST majority of the 16 million exchanges: 13 million of them.
BUT…MiniMax actually had Claude available inside their platform. They had a legitimate product integration using Anthropic's models via API. So a large chunk of those 13 million exchanges could simply be real users accessing Claude through MiniMax's platform, which was an authorised arrangement…
Could this still all be malicious? Yes absolutely. But the numbers we are talking about here mean that there are potentially other explanations that don’t require us to jump straight to “bioweapon threat”.
The Great Irony
Regardless of whether the Chinese labs were distilling or not, we still have one final irony. It's hard to ignore. And I say this as a massive Claude fan…
You guys kinda did it first…
Claude, GPT, Gemini, every single frontier model was trained on data scraped from the entire public internet without permission. Books, articles, code, artwork, music. Anthropic recently settled a $1.5 billion lawsuit for pirating 7 million books from shadow libraries like Anna's Archive to train Claude. They face a separate $3 billion suit for torrenting 20,000 songs.

So the US labs scrape the entire public internet without permission to build the teacher model. They pay whatever fines or settlements come due, following the "beg for forgiveness, don't ask for permission" model. NYT, Reddit, etc. all get their payout. Then the labs declare their model proprietary, and label any attempt to learn from its output a national security risk. Data for me, but not for you.
Yes, it's not exactly the same thing. And this can (and will) be argued until the cows come home. But it's awkwardly close, no?
So What's Actually Going On?
My take on this: Anthropic are not necessarily lying, but they are absolutely layering this up to make it seem much larger than it is. The numbers are underwhelming for what they claim is an industrial-scale espionage operation. This is basically a declaration of war, but they didn't bring conclusive evidence. It feels a little like Bush chatting about the Weapons of Mass Destruction…
I think this is a mix of things. They want Washington to take this seriously and send defence money their way. They want to push back against open source models, which are an existential threat to their business model. And they want export controls tightened, which benefits the closed-source American labs.
Distilling your competitor's model is cheeky. But calling it a national security threat and asking Congress to help you fight it? That's a pretty heavy-handed response. Especially when you've admitted (via settlement) that your own model was built on $1.5 billion worth of stolen books…
Member Question: "How many of the millions of people who don't use AI are potential consumers?"
Kyle's response: Using AI is one thing, but you still need a business model. AI itself is not a business. You come up with something valuable and then leverage AI to make it cheaper to deliver. The opportunity right now is massive. Only about 16% of humanity is using AI chatbots. About 0.3% are paying for it. We're at 2004-level internet adoption. If you make the right moves now, you'll be in a very strong position. But don't kid yourself that "I use AI" is a business plan. You need to solve actual problems for actual people.
Member Question: "Do human skills still have a future? Or are we outsourcing that too?"
Kyle's response: Physical work is safer than desk work. If the majority of your job is done behind a computer screen, it's not looking good. The more resilient roles share certain characteristics: unstructured physical reality (like nursing, where every day is different), high empathy and trust (public speaking, relationship building, leadership), and legal or fiduciary liability (lawyers, accountants). AI can't be held legally responsible yet, so professions where you take on liability for a client are protected for longer. But even safe-feeling industries will see AI eating into specific tasks. Diversify your income. Multiple revenue streams. Don't rely on a single job.
Member Question: "How did Europe miss the window for AI innovation?"
Kyle's response: Europe didn't miss it exactly. A lot of the foundational work came from British and European researchers. Geoffrey Hinton, Demis Hassabis, the whole DeepMind team. But what happens is the talent gets purchased. Google bought DeepMind in 2014 for somewhere between $400 million and $650 million. At the time people thought it was insane. Now it looks like one of the best acquisitions in history. Peter Steinberger, creator of OpenClaw, is Austrian. He was just hired by OpenAI. Zuckerberg and Altman were personally trying to poach him. The question someone asked that stuck with me: how many European companies approached him? This is a two-horse race between America and China. Europe is out of the game. Anyone on LinkedIn saying otherwise is kidding themselves.
Streaming on YouTube (with full 4k screen share) and TikTok (follow and turn on notifications for Live Notification).
