AI with Kyle Daily Update 112

Today in AI: China wins the open source race + Google killing recipe writers

The skinny on what's happening in AI - straight from the previous live session:

Highlights

🇨🇳 China Dominates Open Source AI - DeepSeek, Qwen, Kimi Take Top Spots

Every top open source model is Chinese. Meta's stopped releasing Llama models. America's out of the open source game?

Congratulations, congratulations, China!

Kyle's take: The winner of 2025's open source model race is China as a whole. DeepSeek R1, Qwen 3, and Kimi K2 take the top spots - all Chinese labs.

These aren't just good, they're absolute bangers every single time, and they're giving them away for free. You can download them, fine-tune them, deploy them on your own server - do what you want.

This is very different from OpenAI, Anthropic, and Google, who lock everything down and charge for access, hoping to recoup the billions they're investing.

It's a soft power play - if you make your models really good and widely available globally, there's a certain cachet that comes from that.

Nathan Lambert predicts no Llama-based models from Meta in 2026. America has basically stepped back from open source entirely. OpenAI's GPT-oss exists, but why use it when there are better Chinese models?

If you haven't deployed an open source model, try it - use LM Studio on your computer or Locally AI on your phone. It's easier than you think and gives you a fully private model running on your own hardware (see the sketch below).
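If you want to go a step further and script against a local model, here's a minimal sketch. It assumes LM Studio's local server is running with a model already loaded - LM Studio exposes an OpenAI-compatible endpoint (http://localhost:1234/v1 by default), so the standard openai Python client works against it. The model name below is just a placeholder for whatever you've actually downloaded.

```python
# Minimal sketch: chat with a model running locally in LM Studio.
# Assumes LM Studio's local server is on (default: http://localhost:1234/v1)
# and a model is loaded; swap in the model name LM Studio shows you.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # any non-empty string works for a local server
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # example only; use whatever model you've loaded
    messages=[
        {"role": "user", "content": "Give me three ideas for a weeknight pasta dish."},
    ],
)

print(response.choices[0].message.content)
```

Because nothing leaves your machine, it's as private as it gets - and the same prompt pattern works with any open source model you load.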

The full tier list

🍕 Google AI Summaries Are Killing Recipe Writers' Livelihoods

"Extinction event" for food bloggers. AI merges recipes from multiple creators, causes huge traffic drops, destroys ad revenue.

Why click the source links (right) when you have the info (left) already in Google?

Kyle's take: When you search for recipes now, Google shows the AI overview at the top with the full recipe - no need to click through to the actual website. The publishers who created that content make money when you visit their sites through ads, subscriptions, email captures for retargeting. Now that's all gone.

The Guardian calls it an "extinction event" for recipe writers. This isn't just recipes - it's any information-based content.

What to do? The article reads like we need to stop this. We can't. Pandora's box is open, and we can't close it. Even if all the big AI companies stopped tomorrow, the open source models are already out there - anyone can download DeepSeek and deploy it.

So what’s the practical next step? Lean into being human. As one recipe writer said: "People will place a higher premium on knowing these recipes are tested by somebody I follow and respect." That's the key - become known as a person who does something. Make yourself as much of a person as possible to differentiate from AI.

✍️ "I Trained AI to Replace Me, Then Got Laid Off" - Copywriter Reality Check

Entry-level jobs disappearing. Senior roles training AI instead of humans. Career ladders broken.

Kyle's take: Blood in the Machine has a series called "AI Killed My Job" - there’s a new one about copywriters.

One guy, Jacques, said "AI didn't quite kill my current job, but most of my job is now training AI to do work I would've previously trained humans to do." Six months later, he was laid off the week before Thanksgiving.

Once the bots were sufficiently trained to offer "good enough" support, he was out.

Obviously this is just one person's story, BUT I think it's important to remember that when we hear about "9,000 jobs lost at Microsoft due to AI efficiencies," those 9,000 jobs are people. Never forget that part, even if you see this as a structural inevitability.

Here's the bigger second-order problem: how are entry-level developers, support agents, or copywriters supposed to become seniors when the experience required to ascend is no longer available? In legal, juniors cut their teeth on discovery - combing through massive documentation and pattern matching. That's exactly the kind of work AI is really good at.

Those thousands of hours of grunt work that hone intuition? Gone.

AI doesn't wholesale replace jobs - it chips away at tasks. This is the subtle but important thing that a LOT of people are missing. If your job has 10 tasks and AI can do 7, companies need 3 people instead of 10. Labor is the biggest expense for most businesses. The cost-saving choice is easy and will override any long-term concerns companies have.

Member Questions:

Kyle's response: No, most don't. There's a great video of Boris Johnson talking about AI and ChatGPT like they're alien concepts:

They might use ChatGPT daily, but that doesn't mean they understand large language models, what generative AI is, or what it can do for society - and against it.

They get briefed by smart people, but it's a dense, complicated subject that sprang up in three years and rose to importance faster than technology normally does. Working out what's actually happening would've required real study.

If we knew aliens were landing in 3 years, we'd panic then figure out what to do. With AI, everyone's got their heads in the sand - politicians, business owners, employees. We're not thinking about larger ramifications.

Kyle's response: 

Whatever regulation you're trying to put out takes time - that's the problem. The rate at which regulation can be written and signed into law just can't work on AI timescales.

Whatever they have ready will be outdated by the time it comes out. I have no problem with regulating AI - we should. It's just really difficult pragmatically.

The EU's result wasn't particularly impressive, practical, or enforceable. But that doesn't mean we shouldn't try - it's just a really hard problem because of the hyper-condensed timelines we're now facing.

Kyle's Community Launch: The 5 Day AI Readiness Challenge is now open. Come to https://community.aiwithkyle.com/c/challenge/ to start.

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube (with screen share) and TikTok (follow and turn on live notifications).

Audio Podcast on iTunes and Spotify.