AI with Kyle Daily Update 083

Today in AI: Senator blocks a Google model?

The skinny on what's happening in AI - straight from the previous live session:

Highlights

💸 OpenAI-AWS Deal: $38B Circular Money Machine

OpenAI signs a multi-year, $38 billion deal with AWS for hundreds of thousands of Nvidia GPUs. It adds to existing deals with Microsoft (Azure) and Oracle, diversifying infrastructure across all major cloud providers.

Kyle's take: This circular economy has me worried. OpenAI pays AWS $38 billion → AWS buys Nvidia chips…

…Next step: Nvidia invests back into OpenAI. Money goes round and round. This happened with Oracle last month - Larry Ellison briefly became the richest man on Earth when their stock rocketed.

OpenAI's putting ChatGPT everywhere - Microsoft Azure, Oracle, now AWS - diversifying away from Microsoft dependency.

But here's the problem: they're haemorrhaging money with no real profit plan yet. This is becoming a one-horse race - if OpenAI doesn't deliver, the entire AI market (and maybe global markets when they IPO next year) could be in trouble. Lots of money being pushed in with no clear path to profit. Or at least the magnitude of profit required!

🎁 You get AI, and you get AI!: India Gets ChatGPT, Everyone Gets Perplexity

ChatGPT Go free for 12 months in India starting today. Perplexity Pro still free for 12 months with PayPal account. Both companies desperately juicing subscriber numbers before IPOs…

Kyle's take: This feels like number juicing for investors.

OpenAI launched ChatGPT Go two months ago and is already giving it away free…

Feels like what Perplexity is doing. I don't know ANYONE who pays - they're always giving it away. With a PayPal account you get Perplexity Pro free for a year, which gives you access to ALL the premium models - GPT-5, Claude 4.2, Gemini, Grok - through one subscription that's FREE. Huh.

Why? My guess: OpenAI and Perplexity are marking these as "paying subscribers" even though 90%+ will cancel before the year's up. It's cheeky accounting to show a "sustainable business model" for IPO prep.

🚫 Google Pulls Gemma After Senator's Defamation Tantrum

Senator Marsha Blackburn complains that the Gemma model falsely claimed sexual misconduct allegations against her. Google immediately pulls the entire model from AI Studio (but keeps API access).

Kyle's take: Hallucinations aren’t news. What IS news here is what happened after these hallucinations.

One senator gets upset about an (admittedly horrible) hallucination and Google pulls an entire model?

Gemma (the Google model) claimed Blackburn had faced misconduct allegations from 1987 (her campaign was 1998), citing made-up sources. Classic hallucination - anyone who uses AI knows this happens.

Google's response was basically "you're using the wrong model, this is for developers not consumers, you don't understand it." BUT then they pulled it anyway to "prevent confusion."

This is the tension - making models available for creators while protecting consumers who don't understand them. Google didn’t fancy this fight. What's scary is one person writing a letter to the CEO can get a whole model pulled. That's not how we should handle hallucinations…

🎮 PewDiePie Built a $20K AI Rig: Running Local LLMs at Home

Felix Kjellberg (PewDiePie) built a 10x RTX 4090 setup (~$20,000) and is running Llama 70B, GPT-OSS and Qwen locally. PewDiePie was one of the OG YouTube video-game creators and is now basically semi-retired, completing side quests for fun.

He created an AI council of competing advisors and a swarm of 64 models - all mainly for the fun of it! The video he made showing the process has 3+ million YouTube views. Oh, and now he's fine-tuning his own model for a future video.

Kyle's take: So cool - this is a video showing millions of people that you can run AI locally. Obviously most can't afford $20K of GPUs, but you can do versions of this much cheaper.

He's running protein folding for charity, built a custom web UI for chat/RAG/TTS. And fascinatingly his comments aren't "AI bad" - they're supportive, people wanting to learn.

Want to try this? You don't need 10x 4090s (thankfully!) - you can run smaller models locally, even on a laptop. Grab vLLM or LM Studio, download a model and have at it. It's actually pretty simple to run an existing model.
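To give a feel for how simple it is: both LM Studio and vLLM expose an OpenAI-compatible local API once a model is loaded, so talking to your own machine is a few lines of standard-library Python. This is a minimal sketch - the endpoint (LM Studio defaults to localhost:1234; vLLM's server to localhost:8000) and the model name are assumptions you'd swap for whatever you've actually downloaded:

```python
import json
import urllib.request

# Assumed local endpoint: LM Studio's default server address.
# For vLLM's OpenAI-compatible server, use http://localhost:8000/v1/... instead.
LOCAL_API = "http://localhost:1234/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload, the format both tools accept."""
    return {
        "model": model,  # hypothetical name - use whichever model you loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_local_model(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        LOCAL_API, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# With a model running locally, you'd call something like:
# print(ask_local_model("llama-3.2-3b-instruct", "Say hi in five words."))
```

No API keys, no per-token bills - the whole round trip stays on your machine, which is the appeal of the PewDiePie approach in miniature.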

Want to train a model? Thankfully, you still don't need a $20K+ computer. You can rent server space for a few hundred dollars, train your model, then run it locally.

Combined with Hugging Face's Smol Training Playbook (which I've been working through - it's fantastic), this shows how accessible local AI is becoming. It's less complicated than you think because tools like Hugging Face walk you through step-by-step.

Do I recommend everyone do this? Nope, not at all. For most people it's neither useful nor practical. BUT as a hobby project it's a fun way to understand what a behemoth like OpenAI does at scale.

Member Question: "If I have zero dev experience, can I learn to vibe code an image gen mobile app?"

Kyle's response: 100% yes. Start with Google AI Studio (aistudio.google.com/apps) - click Build for free vibe coding with instant Nano-Banana access. Prototype there first. That won't lead to a mobile app (Android/iOS), but it's a good way to build a fast, free prototype with the current best image model.

For mobile apps, move to Replit or Cursor 2.0 (the new agent interface is just chat, no scary code view - see last Saturday's newsletter for details!). Your first iteration won't be good - that's fine, it's learning.

Use ChatGPT/Claude to create your action plan: "I want to build an image gen mobile app, I have no idea what I'm doing, what tools/steps/pitfalls?"

For iOS you'll eventually need Swift/Xcode, but build everything first, then move it over for testing and final refinement.

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube (with screen share) and TikTok (follow and turn on live notifications).

Audio Podcast on iTunes and Spotify.