AI with Kyle Daily Update 085

Today in AI: Sam snaps + AI in space

The skinny on what's happening in AI - straight from the previous live session:

Highlights

🔍 Gemini 3 Size Leaked: 1.2 Trillion Parameters for Siri

Bloomberg accidentally (?) revealed Apple's Siri will use Google's 1.2 trillion parameter model (likely Gemini 3), launching March 2026. For comparison, GPT-5 is estimated at ~1.7 trillion parameters.

Kyle's take: Bloomberg messed up and leaked something the AI labs never reveal - model parameter size. This gives us our first glimpse at Gemini 3's scale. 1.2 trillion parameters is chunky. We're unsure if this is the Flash, Pro or Ultra model. If it's Flash or Pro, that's a LOT; if it's Ultra, it's not that many. All relative!

For context: open source models like Qwen are around 235 billion parameters and GPT-OSS is 120 billion. GPT-5 is estimated somewhere around 1.5-1.7 trillion, but we don't know for sure. Generally, more parameters == better model - but that's a big oversimplification. Parameters aren't everything (efficiency matters), yet this suggests Gemini 3 is pretty powerful.

And it's being added to Apple's Siri. After 14 years, Siri is finally getting an actual LLM under the hood instead of basic NLP. The fact that Apple chose the cheaper option over Claude still bothers me a little, but Google's clearly cooking something.

🚀 Google Plans Space Data Centres: Solar-Powered AI Training?

Google has announced plans to launch AI data centres into space by 2027. They stuck their TPUs (Google's custom AI chips - think fancy GPUs) in particle accelerators for radiation tests simulating low Earth orbit. 24/7 solar power for training unlocked?

Kyle's take: This sounds like sci-fi nonsense but it’s backed by real science.

The sun emits roughly 100 trillion times humanity's total electricity production. That's a lot of energy. In the right orbit, space solar panels get near-constant sunlight - no atmosphere, no weather, no night-time.

One of the biggest current limitations on AI training is electricity - this solves it. China's building massive solar farms here on Earth, but they have an easier time, ahem, "reclaiming" the land for them. The West doesn't have that luxury - meaning building large solar arrays gets expensive fast.

This is humanity moving towards a Kardashev Type 2 civilisation - harnessing our star's power. Type 1 uses a planet's resources (we're at roughly 0.73 by Carl Sagan's formula, RIP), Type 2 uses its star, Type 3 uses a galaxy, i.e. 100+ billion stars.
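If you want to sanity-check those numbers, here's a quick back-of-envelope sketch in Python. The solar output, electricity, and energy figures are rough public estimates I've plugged in myself (not from the stream), and the Kardashev line uses Carl Sagan's interpolation formula, K = (log10 P - 6) / 10 with P in watts.

```python
import math

# Rough public estimates - my own placeholder numbers, not from the stream
SUN_OUTPUT_W = 3.8e26          # total power the Sun radiates (~3.8 x 10^26 W)
WORLD_ELECTRICITY_W = 3.4e12   # humanity's average electricity generation (~3.4 TW)
WORLD_POWER_W = 1.9e13         # humanity's total primary energy use (~19 TW)

# How many times over the Sun out-produces our electricity grid
ratio = SUN_OUTPUT_W / WORLD_ELECTRICITY_W
print(f"Sun vs our electricity: ~{ratio:.0e}x")          # ~1e+14, i.e. ~100 trillion

# Carl Sagan's interpolation of the Kardashev scale
kardashev = (math.log10(WORLD_POWER_W) - 6) / 10
print(f"Humanity's Kardashev rating: ~{kardashev:.2f}")  # ~0.73
```

Plug in a different estimate for world energy use and the rating barely moves - the log scale is why crawling from ~0.73 to a full Type 1 is such a long road.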

The challenges: thermal management (keeping TPUs cool while pointing at the sun!) and getting the trained models back - probably shuttles carrying hard drives unless we invent data wormholes!
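On the "getting the models back" point, here's a rough, assumption-heavy sketch: the 2 bytes per parameter and the 10 Gbps optical downlink are placeholder numbers of mine, not anything Google has announced.

```python
# Back-of-envelope: how long to beam a trained model down from orbit?
# Every input here is a placeholder assumption, not a figure from Google.
PARAMS = 1.2e12          # the leaked Gemini 3-scale parameter count
BYTES_PER_PARAM = 2      # assuming 16-bit weights
DOWNLINK_GBPS = 10       # hypothetical optical downlink speed

model_bytes = PARAMS * BYTES_PER_PARAM               # ~2.4 TB checkpoint
seconds = model_bytes * 8 / (DOWNLINK_GBPS * 1e9)    # total bits / bits per second
print(f"Checkpoint size: ~{model_bytes / 1e12:.1f} TB")
print(f"Transfer time at {DOWNLINK_GBPS} Gbps: ~{seconds / 60:.0f} minutes")
```

Under those made-up numbers a finished checkpoint comes down in about half an hour; it's the far larger training datasets going up and the constant inter-satellite traffic that would really strain the links.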

Concerned about space debris and junk? Remember, space is big. We can (and very likely will) just blast the leftovers into the void once we're done. Classic humans!

Carl Sagan would be proud of us

💰 OpenAI's Trillion Dollar Problem: Sam Altman Sick of Revenue Talk

When questioned by Brad Gerstner about OpenAI's rumoured $1 trillion IPO valuation versus its ~$13 billion in revenue, Sam Altman snapped back in front of Microsoft's CEO.

“How can a company with $13 billion in revenues make $1.4 trillion of spend commitments?” Gerstner asked Sam Altman.

“If you want to sell your shares, I’ll find you a buyer. Enough…I think there’s a lot of people who talk with a lot of breathless concern about our compute stuff or whatever that would be thrilled to buy shares…We could sell your shares or anybody else’s to some of the people who are making the most noise on Twitter about this very quickly.”

Kyle's take: This was extraordinary - instead of the usual CEO bullshitting about exponential growth or AGI paradigm shifts, Sam just snapped at investors. In front of Satya Nadella! Microsoft owns ~28% of OpenAI after the for-profit conversion - they're exactly who Sam needs to convince.

The mismatch is real - asking for history's largest IPO while making no profit, and committing to spend roughly 100 times current annual revenue ($1.4 trillion against $13 billion). Yes, startups often show no profit for years. Amazon took ~10 years to turn a profit, but that was strategic reinvestment (and handy tax minimisation) while crushing the competition, both online and in retail. OpenAI can't dominate like Amazon did - we can switch AI platforms easily. And the hole they're digging makes Amazon look like a lemonade stand…

We're probably in a bubble that'll make 2007 look like child's play. BUT remember the technology is sound - 10%+ of humanity uses AI daily because it's genuinely valuable, not because it's overhyped. The tech survives even if OpenAI doesn't.

Source: Futurism

💻 Vibe Coding Explained: Your Gateway Drug to Building

Kyle's response: The term was coined by Andrej Karpathy in February this year - basically AI-assisted coding with less human care. Hence "vibing"! I literally talk into my microphone: "I want an interactive quiz that blurs results until they give an email, then sends it to my newsletter."

Previously I'd write code or find existing code on Stack Overflow. With vibe coding, you chat (voice or text) and AI builds for you. It accesses your computer, creates folders, builds applications - web apps, phone apps, desktop apps.

Lovable is the simplest (just a chat box), Cursor is more complex but powerful, and Claude Code runs in the terminal. But it's all still just natural language - you don't write code anymore. Programmers hate it because it threatens jobs, but now anyone can build software.

Kyle's response: I'm not a coder but I do build and sell software. Start with vibe coding - it's the gateway drug. Traditional learning is abstract - 20-hour courses about dictionaries and tuples where you build nothing.

With vibe coding, you immediately build something real, get excited, show people. Your first projects will be rubbish - buggy, insecure, and they'll break after a few weeks! That's fine, they're toys.

Once hooked, you'll naturally need to learn GitHub, deployment, architecture. It's just-in-time learning, not just-in-case. Use AI to help: "I need to set up a GitHub repository, no idea how, can you help?"

Start with Google AI Studio (free), then Lovable ($25/month), then Cursor, then Claude Code. Graduate as needed.

Kyle's response: Yes, for sure. But complexity means more "mess", and more chances your project will just collapse under its own weight.

That said, six months ago we could do MUCH less than we can now - you can build far more complex things today. Like all of AI, vibe coding is progressing very rapidly, so my answer today will be different six months from now.

The issue with building larger projects is that AI focuses on the specific task at hand and forgets the rest of the codebase - fixing a button might break the whole app. It's focusing on the trees and ignoring the forest, so to speak.

As you advance, focus on architecture - how the "pieces" work together. Build modules separately, then connect them. Start simple, as you won't know what the AI got wrong at first.
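To make "build modules separately, then connect them" concrete, here's a minimal sketch in Python using the quiz idea from earlier - all the module and function names are made up for illustration, not from a real project.

```python
# quiz.py - one module that only knows about questions and scoring
def score_quiz(answers, correct):
    """Count how many answers match the answer key."""
    return sum(1 for a, c in zip(answers, correct) if a == c)

# capture.py - a separate module that only knows about collecting emails
def capture_email(email, subscribers):
    """Add an email to the subscriber list (stand-in for a newsletter API call)."""
    if "@" in email:
        subscribers.append(email)
        return True
    return False

# app.py - the thin layer that connects the two
def reveal_results(answers, correct, email, subscribers):
    """Only reveal the score once a valid email has been captured."""
    if not capture_email(email, subscribers):
        return "Results blurred - enter a valid email to see your score."
    return f"You scored {score_quiz(answers, correct)}/{len(correct)}."

subscribers = []
print(reveal_results(["a", "c"], ["a", "b"], "reader@example.com", subscribers))
```

The point is the AI can rework quiz.py or capture.py in isolation without touching the other, which is exactly what keeps the "fix a button, break the app" problem in check.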

The good news is that as your skills grow, the tools are improving too. In 6-9 months, when you're ready for complex architecture, the tools will be there. They're getting better week to week, so now is a great time to jump in. We'll stop calling it "vibe coding" soon - so many engineers already use AI assistance that it's just becoming "coding."

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube (with screen share) and TikTok (follow and turn on notifications so you're alerted when I go live).

Audio Podcast on iTunes and Spotify.