AI with Kyle Daily Update 149
Today in AI: Pentagon Victory?
What’s happening in the world of AI:
Highlights
Anthropic Folds: Safety Policy Gutted a Day Before Pentagon Deadline?
With the Pentagon's Friday deadline looming, Anthropic has rewritten its Responsible Scaling Policy. The headlines are saying they've ditched their core safety promise. The reality is slightly more nuanced, but the timing is damning.
On Tuesday, Anthropic published version three of their RSP. The key change: they've removed their commitment to pause training more powerful models if capabilities outstripped their ability to control them. That was kinda the whole point of the original policy…
They've also separated their own safety commitments from their recommendations for the industry, meaning they'll tell everyone else to do one thing while doing something different themselves. They'll likely hold themselves to a higher standard - and they are admitting that they know everyone else will do what the hell they want!
Anthropic's spokesperson told CNN this is unrelated to the Pentagon standoff and is about competition. Nah. I mean…come on.
This dropped the day after Defence Secretary Pete Hegseth gave Dario Amodei an ultimatum to roll back safeguards or lose the $200 million contract and get slapped with a supply chain risk designation. The timing is not coincidental, let’s be real.
To be fair to Anthropic, their argument about pausing has always had a structural problem. If Anthropic pauses and nobody else does, OpenAI and xAI just carry on and Anthropic becomes irrelevant. If all American labs pause, Beijing carries on. It only works with a universal global agreement, which isn't happening. We humans are pretty bad at doing anything in unison…
So they're right that the policy was impractical. But announcing this the day before the Pentagon deadline strips away any pretence that this is about anything other than survival.
Kyle's take: This is a sad week for AI safety. For two years, Anthropic were the good guys. The company with a soul. The ones who hired a full-time philosopher (Amanda Askell, brilliant woman, lovely Scottish accent, hasn't tweeted all week and I don't blame her…). Now they've gone from poster child of responsible AI to stomping on solo developers, publishing questionable distillation attack claims, and gutting their safety policy under Pentagon pressure, all in a month.
It’s been a weird 2026 so far for them!
That said, I've read through the actual RSP v3 document and I'm not seeing anything as extreme as the headlines suggest. There's no concrete statement saying "we're going to be less safe." It reads more like they're laying the foundations for flexibility, which is probably the point. Build in the wiggle room now, use it later. The new policy commits to publishing detailed safety roadmaps and risk reports, which is fine, but publishing reports about plans is several layers removed from actually stopping.
The bigger question nobody is asking: what about the other three? Google, OpenAI, and xAI all took the same contracts. We've heard nothing from them. No public fights, no red lines, no pushback. That silence says a lot about what they've already agreed to…
Source: CNN coverage | Business Insider | Anthropic RSP v3 announcement | Full RSP v3 document (PDF) | Anthropic tweet
Claude Gets Scheduled Tasks: OpenClaw, Piece by Piece
Two days ago, Anthropic added remote control to Claude Code so you could pick up sessions from your phone. Now they've added scheduled tasks. Claude can complete recurring tasks at specific times automatically: morning briefings, weekly spreadsheet updates, Friday team presentations etc. etc.
This is significant because about 80-90% of what people actually used OpenClaw for was exactly this: set up a task overnight, let it research and compile, deliver a daily report in the morning. Basic personal assistant stuff.
What about ChatGPT? It's had scheduled tasks for a while, right? It has. They launched months ago and nobody cared. The difference is the harness.
ChatGPT's chatbot isn't connected to much. It sends you a report. Saves you a few seconds versus just asking for one. Claude Code is connected to everything via MCP, skills, and APIs. A Claude Code scheduled task can spend hours crunching analytics, pulling data from multiple sources, building dashboards, and delivering the result. That's genuinely useful.
Kyle's take: If I were a betting man, I'd say Anthropic have looked at OpenClaw, worked out which parts people actually want, and are rebuilding it inside Claude's ecosystem piece by piece. Remote control. Scheduled tasks. Next, more integrations? Maybe a cloud-hosting option?
It's a safer, more constrained version. You (probably!) can't accidentally delete your inbox or buy things on your credit card because you gave it too much access. The limitation is the safety. That's always the trade-off: control versus risk. For most people, this is the better option.
We're also seeing a bigger shift here. 2025 was about talking to chatbots, having conversations. 2026 is about AI doing work. The model matters less than the harness you connect it to. Gemini 3.1 Pro is brilliant, but without the right harness it's less useful than Claude Code with Opus 4.6 because Claude Code is connected to the tools you use every day. The model is the engine. The harness is the car.
The Accidental Hacker: One Man, Claude Code, and 7,000 Robot Vacuums
A man accidentally gained control of 7,000 DJI robot vacuums and could see through the cameras inside people's homes. He did this using Claude Code. Not as a deliberate hack; he was trying to control his hoover with a game controller for shits and giggles.
Kyle's take: This is a bit of a silly story. BUT it's exactly the kind of thing that makes the Pentagon story more complicated. Claude's tools are the most powerful available, and you can do enormous good and enormous harm with them. The same Claude Code that helps me build websites and write books can be used to accidentally (or deliberately) compromise thousands of home devices. Or the Mexican government's data archives (which also just happened…)
Anthropic can put up all the safety policies they want, but once the tools are in people's hands, the guardrails only go so far. Doesn't mean we shouldn't have them. Just means we should be realistic about what they can actually prevent.
Source: The Guardian
Tool Corner: What to Use and When
A few practical recommendations that came up during Q&A. I thought I'd give my rundown!
Best free vibe coding option right now: Google Antigravity. Individual plan is £0/month. Yay! Gives you access to Gemini 3.1 Flash, Claude Sonnet and Opus 4.6, and GPT-OSS 120B (the open-source ChatGPT). Hard to beat free. There's talk of it merging with Google AI Studio, which would give it a vibe coding interface similar to Lovable. That would be a massive unlock for non-technical users.
Best for getting real work done: Claude Code on the $100-200/month plan. Not just for coding, despite the name. I use it for structuring books, research, task management, everything. The £20/month plan hits limits fast, unfortunately.
Best Swiss Army knife: Still ChatGPT. Images, video, voice assistant, more bells and whistles. Better for consumers. Claude is better for professional work, which is why ChatGPT has 900 million users and Claude has about 30-40 million.
Best for research: Claude with extended thinking on Opus 4.6. Used to be Perplexity, but Claude's research mode now gives the best results.
Don't sleep on: The Codex app (Mac only, free on ChatGPT plan, double credits this month). Two weeks old and already rivalling Claude Code for many tasks. No Windows version yet, much to everyone's annoyance.
Running local models: LM Studio lets you download and run models locally. But you'll need serious hardware. Most laptops won't cut it, which is why people are buying Mac Mini Pros. A rack of eight H200 GPUs runs about a quarter million. Most people should just use the API. (Update: they just added the ability to access your local LLM from other devices, like your phone…VERY cool).
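If you do go the LM Studio route, its local server speaks the OpenAI-style chat-completions API. Here's a minimal sketch of talking to it from Python with just the standard library, assuming the default address (`http://localhost:1234/v1`) and whatever model name you've loaded locally; both are configurable in LM Studio, so adjust to match your setup:

```python
import json
import urllib.request
from urllib.error import URLError

# LM Studio's local server exposes an OpenAI-compatible chat endpoint.
# Default address shown here; change it if you've reconfigured the port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat request for the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # "local-model" is a placeholder; use the name of the model you loaded.
    req = build_chat_request("local-model", "Summarise today's AI news in one line.")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            reply = json.load(resp)["choices"][0]["message"]["content"]
            print(reply)
    except (URLError, OSError):
        print("LM Studio server not running - start it from the Developer tab.")
```

Point the URL at your machine's LAN address instead of localhost and the same snippet works from another device on your network, which is what the new remote-access feature is about.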
Webinar reminder: Kyle is running a free live webinar on giving AI workshops to businesses. 6pm UK time, with repeats next Tuesday and Thursday. Sign up at aiwithkyle.com/webinar. Recording sent to all registrants.
Streaming on YouTube (with full 4K screen share) and TikTok (follow and turn on live notifications).
