AI with Kyle Daily Update 030
Today in AI: Google Fighting Back + China says "we're good"
The skinny on what's happening in AI - straight from the previous live session:
Highlights
🧠 Google Adds Persistent Memory to Fight Back
Google is on the attack with Gemini, hoping to capitalise on ChatGPT's rough week.
Google's rolling out automatic memory for Gemini - finally catching up to ChatGPT's killer feature. That puts them ahead of Anthropic, who added memory a few days ago - sorta…it's a watered-down version.
Google has also doubled the rate limits for their top-tier DeepThink model, though that's pretty niche since it's only available on their £200+ plan.
Kyle's take: This should have happened ages ago. ChatGPT's persistent memory is brilliant - it remembers what you ask it, what business you run, your tech setup, etc.
That creates massive lock-in (“stickiness”) because switching means losing all that context.
Google's finally realised they need this (as do most consumer-grade AIs, honestly…), but it won't be enough to pull people away from ChatGPT. When you release an AI, it's not enough to be as good - you need to be much, much better.
Source: The Verge
🔍 Trump Launches AI Search Engine (With Perplexity's Help, Sorta)
Trump Media launched "Truth Search AI" built on Perplexity's Sonar API. Early tests show it only sources from conservative outlets like Fox News, which is exactly what you'd expect.
Interestingly, when people tested it, the AI actually contradicted Trump's claims rather than parroting them - same thing happened to Musk when Grok called him out during his spat with Sam Altman.
Kyle's take: Some have been using this to dig at Perplexity but I don’t think this is really about Perplexity being political. Anyone can sign up for their API and build something. Shutting it down for political reasons would be just as problematic!
The funny bit is that even when you try to bias these models (as Trump’s team seems to have), the underlying AI training makes them fairly resistant to pure propaganda.
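As a side note, "anyone can sign up" really is the whole barrier to entry here. Below is a rough sketch of what building on Perplexity's Sonar API looks like - the endpoint and the "sonar" model name are my assumptions based on Perplexity's public, OpenAI-style chat completions interface, so check their current docs before relying on it:

```python
# Minimal sketch: calling Perplexity's Sonar API (the API Truth Search AI is
# reportedly built on). Assumes the OpenAI-compatible chat completions endpoint
# at api.perplexity.ai and the "sonar" model name - verify against current docs.
import os
import requests

API_KEY = os.environ["PERPLEXITY_API_KEY"]  # your own key from perplexity.ai

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "sonar",  # assumption: entry-level Sonar model name
        "messages": [
            {"role": "system", "content": "Answer with sourced, up-to-date information."},
            {"role": "user", "content": "What happened in AI news today?"},
        ],
    },
    timeout=60,
)
response.raise_for_status()
# Print the model's answer from the first choice
print(response.json()["choices"][0]["message"]["content"])
```

The point being: any developer with an API key can wire this up and slap whatever system prompt or source filtering they like on top - which is why this says more about the builder than about Perplexity.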
Source: Mashable
⚠️ Shadow AI: 83% of Legal Teams Going Rogue
Massive survey shows 83% of legal teams are using AI tools not provided by their company, and 81% are using tools that haven't been formally approved. Nearly half have no AI policies in place, yet they're using these tools to draft legal contracts. This isn't just legal - it's happening across all industries where staff have discovered AI works brilliantly but companies haven't caught up…
Kyle's take: Ruh-roh.
This by the way is one reason why I'm getting paid £1,500-£4,000 per hour to train companies right now. Staff have realised AI is super useful for their jobs, but companies are stuck saying "don't use AI" whilst having no proper strategy. The demand for industry-specific AI training is massive because generic training from McKinsey doesn't cut it. If you're in any industry and understand AI, you're sitting on a goldmine. Go help these people!
🇨🇳 China Says "No Thanks" to Nvidia's Compromise Chips
China's telling its companies not to buy Nvidia's H20 chips (the stripped-down version allowed for the Chinese market). This is retaliation after Trump demanded 15% of Nvidia's revenue from Chinese sales go directly to the US government. Not a tax, not a tariff, but a literal revenue share…something new.
Meanwhile, DeepSeek's next model is delayed because they're trying to train it on Chinese Huawei chips instead of Nvidia, and it's not working. It’s going to get real complex soon.
Kyle's take: Trump's basically running an extortion racket - pay me 15% or you can't sell to China. Nvidia agreed to this revenue share arrangement. Then China slapped back by indicating they don't want the chips anyway!
The problem is Huawei's chips aren't good enough yet, which is why DeepSeek can't release their new model. This whole trade war is forcing China to become self-sufficient faster, which long-term probably isn't great for America…
As a side note this “revenue share” agreement (you pay me and I let you exist) is likely to be how the TikTok ban in the US gets resolved. Watch this space.
Member Question from Bo: "GPT-5 was underwhelming. Would you agree, Kyle?"
Kyle's response: Actually, no - I quite like it, though saying that gets you dogpiled these days. The model itself is solid, especially in thinking mode.
The problem is the routing system - you might get sent to GPT-5 High (brilliant) or GPT-5 Minimal (terrible), and there's bugger all transparency about which one you get. Use thinking mode to guarantee the better model. Most people getting rubbish results are probably hitting the crap models without realising it. The persistent memory alone keeps pulling me back to ChatGPT over Claude.
Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.