AI with Kyle Daily Update 070
Today in AI: OpenAI gets nasty?
The skinny on what's happening in AI - straight from the previous live session:
Highlights
🚨 OpenAI Served Subpoenas to Critics During California AI Regulation Fight
Nathan Calvin, general counsel of tiny AI nonprofit Encode (3 employees), was served a subpoena at his home by a sheriff's deputy while OpenAI fought against California's SB 53. OpenAI demanded all his private communications about the bill, claiming Elon Musk was secretly behind the opposition.
One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI.
I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened:
🧵— Nathan Calvin (@_NathanCalvin)
2:00 PM • Oct 10, 2025
Kyle's take: Hmm… looking a bit unpleasant. OpenAI used a (potentially) unrelated lawsuit against Musk as pretext to send sheriff's deputies to people's homes demanding private texts with legislators and former OpenAI employees.
Even OpenAI's own Head of Mission Alignment, Joshua Achiam, tweeted "at what is possibly a risk to my whole career... this doesn't seem great."
At what is possibly a risk to my whole career I will say: this doesn't seem great. Lately I have been describing my role as something like a "public advocate" so I'd be remiss if I didn't share some thoughts for the public on this. Some thoughts in thread...
— Joshua Achiam (@jachiam0)
4:44 PM • Oct 10, 2025
He probably won't be fired this week - the optics are terrible - but I wouldn't be surprised if he's gone once this blows over. A $500 billion titan sending sheriffs to intimidate a 3-person nonprofit during active political proceedings? This is likely to dominate the AI news this week.
Source: Fortune
📊 State of AI Report 2025: China Is Now the Credible #2
The annual State of AI Report dropped with key findings: OpenAI retains a narrow lead, but DeepSeek, Qwen, and Kimi have established China as the credible number two. Meta has "relinquished the mantle" of open source to China. 44% of US businesses now pay for AI tools (up from 5% in 2023), with average contracts at $530,000.

Here are the top findings direct from the report:
OpenAI retains a narrow lead at the frontier, but competition has intensified: Meta has relinquished the open-source mantle, and China’s DeepSeek, Qwen, and Kimi have closed the gap on reasoning and coding tasks, establishing China as a credible #2.
Reasoning defined the year, as frontier labs combined reinforcement learning, rubric-based rewards, and verifiable reasoning with novel environments to create models that can plan, reflect, self-correct, and work over increasingly long time horizons.
AI is becoming a scientific collaborator, with systems like DeepMind’s Co-Scientist and Stanford’s Virtual Lab autonomously generating, testing, and validating hypotheses. In biology, Profluent’s ProGen3 showed that scaling laws now apply to proteins too.
Structured reasoning entered the physical world through “Chain-of-Action” planning, as embodied AI systems such as AI2’s Molmo-Act and Google’s Gemini Robotics 1.5 began to reason step-by-step before acting.
Commercial traction accelerated sharply. Forty-four percent of U.S. businesses now pay for AI tools (up from 5% in 2023), average contracts reached $530,000, and AI-first startups grew 1.5× faster than peers, according to Ramp and Standard Metrics.
Our inaugural AI Practitioner Survey, with over 1,200 respondents, shows that 95% of professionals now use AI at work or home, 76% pay for AI tools out of pocket, and most report sustained productivity gains, evidence that real adoption has gone mainstream.
The industrial era of AI has begun. Multi-GW data centers like Stargate signal a new wave of compute infrastructure backed by sovereign funds from the U.S., UAE, and China, with power supply emerging as the new constraint.
AI politics hardened further. The U.S. leaned into “America-first AI,” Europe’s AI Act stumbled, and China expanded its open-weights ecosystem and domestic silicon ambitions.
Safety research entered a new, more pragmatic phase. Models can now imitate alignment under supervision, forcing a debate about transparency versus capability. External safety organizations, meanwhile, operate on budgets smaller than a frontier lab’s daily burn.
The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.
Source: State of AI Report 2025 / Full Report
🔧 "Just Become a Plumber" - Why That's Terrible AI-Proofing Advice
Geoffrey Hinton and others keep saying "become a plumber" to be safe from AI. CNN reports 70% of tradespeople have tried AI tools, 40% actively use them daily for diagnosis and knowledge work.

Kyle's take: This "plumbers are safe" narrative is dangerous thinking.
Two reasons.
First is reduced demand.
I fixed my sink last month using ChatGPT's video mode - showed it the problem, it walked me through every step. That's one less plumber call.
Second is increased supply.
Plumbers using AI can work faster, meaning we need fewer of them. If one plumber can leverage AI to diagnose, order components, and complete documentation twice as fast, we need half the plumbers.
It's the same double squeeze hitting white-collar work: supply increases (AI makes workers more productive) while demand decreases (people DIY with AI help).
Sure, the regulatory moat (gas certificates, permits) will slow it down, but thinking trades are obviously safe? That's not thinking the implications through.
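The double squeeze is easy to sketch as back-of-envelope arithmetic. All the numbers below are hypothetical assumptions, purely to show how the two effects compound:

```python
# Back-of-envelope sketch of the "double squeeze" on trades.
# Every number here is an assumption for illustration, not data.

jobs_per_week = 1000     # total plumbing jobs in a region (assumed)
diy_rate = 0.15          # share of jobs now DIY'd with AI guidance (assumed)
jobs_per_plumber = 20    # jobs one plumber handles per week today (assumed)
ai_speedup = 2.0         # productivity multiplier with AI tools (assumed)

# Demand squeeze: DIYers remove jobs from the paid market.
paid_jobs = jobs_per_week * (1 - diy_rate)            # 850 paid jobs

# Supply squeeze: each remaining plumber now clears more jobs.
plumbers_needed = paid_jobs / (jobs_per_plumber * ai_speedup)

plumbers_before = jobs_per_week / jobs_per_plumber    # 50 plumbers
print(round(plumbers_needed), "plumbers needed vs", round(plumbers_before))
```

Even with a modest 15% DIY rate, a 2x productivity gain alone cuts the required headcount by more than half; the two effects multiply rather than add.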
Source: CNN
💶 EU's $1 Billion AI Plan: "Not Even Enough to Enter the Game"
The EU launched a €1.1 billion "Apply AI" plan to boost AI in health, manufacturing, and energy, aiming for "European independence" from US and China tech.
The EU just launched a €1.1B “Apply AI” plan to boost artificial intelligence in key industries like health, manufacturing, pharma, and energy.
The goal is simple but ambitious: build European AI independence and reduce reliance on U.S. and Chinese tech.
Europe finally wants
— VraserX e/acc (@VraserX)
3:09 AM • Oct 12, 2025
Kyle's take: Embarrassing honestly.
Meta hired one 24-year-old for $250 million - a quarter of the EU's entire Apply AI plan. Again: one dude.
A billion euros doesn't get you into high-stakes poker anymore - we're talking hundreds of billions moving toward trillions. You can't build independence without foundation models, and you can't build those for pocket change.
This money will probably disappear into connected pockets for high-cost, low-impact research and think-tank reports. The EU has lost this game already and has no intention of playing properly. It's just China and America now and earmarking €1.1bn won’t change that.
Source: EU Apply AI announcement
Member Question: "I have a B2B marketable niche AI-based app. How do I get it off the ground?"
Kyle's response: Three options: Build an audience (content marketing, takes 6-24 months), Buy an audience (LinkedIn ads for B2B are expensive but targeted), or Borrow an audience (influencers, affiliates).
For B2B specifically, cold outreach on LinkedIn using tools like LinkedIn Helper or Meet Alfred - it's a numbers game. You need a convincing hook to get conversations started.
Subscribe to the YouTube channel to watch the whole show, or catch it live.
Kyle