AI with Kyle Daily Update 109
Today in AI: First LLM in Space + AI or Cancer cure?
The skinny on what's happening in AI - straight from the previous live session:
Highlights
🚀 First AI Model Trained in Space: NanoGPT Goes Orbital
StarCloud launched an H100 GPU into space and trained Andrej Karpathy's model on Shakespeare. Data centres in space might be the future.
Kyle's take: They've done it - humanity's first large language model trained in space! StarCloud sent up an Nvidia H100 GPU on their StarCloud-1 satellite and trained Andrej Karpathy's NanoGPT on the complete works of Shakespeare.
Very cool that it's Andrej's model getting this honour! Well deserved.
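If you're wondering what "training NanoGPT on Shakespeare" actually looks like, here's a minimal sketch in the spirit of Karpathy's tiny-Shakespeare exercises - a character-level bigram model rather than the full transformer, and not StarCloud's actual training script. The input.txt path is an assumption:

```python
# Minimal character-level language model on the tiny-Shakespeare text,
# in the spirit of Karpathy's NanoGPT exercises. A bigram model, not
# the full transformer, and not StarCloud's actual training run.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumes you've saved the complete works of Shakespeare as input.txt.
text = open("input.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class Bigram(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        # Each character's embedding row doubles as logits for the next character.
        self.table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx):
        return self.table(idx)  # (batch, time, vocab)

model = Bigram(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(1000):
    # Random 32-character windows; the target is each window shifted by one.
    ix = torch.randint(len(data) - 33, (16,))
    x = torch.stack([data[i:i + 32] for i in ix])
    y = torch.stack([data[i + 1:i + 33] for i in ix])
    logits = model(x)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```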
Is this just a fun little gimmick? Nope. A lot of AI companies are talking about putting data centres up into space and training their models there. The promise is limitless energy - solar panels in space get 24-hour sun, no atmospheric filtering.
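Rough numbers on that energy promise - a back-of-envelope comparison of annual output per square metre of panel in orbit versus a UK solar farm. Every figure here is a round-number assumption for illustration:

```python
# Back-of-envelope: annual energy per m^2 of panel, orbit vs UK ground.
# All numbers are round-figure assumptions for illustration.
SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
PANEL_EFF = 0.20               # assumed panel efficiency, both cases
ORBIT_ILLUMINATION = 0.99      # dawn-dusk sun-synchronous orbits stay lit ~constantly
UK_CAPACITY_FACTOR = 0.10      # UK solar farms average roughly 10% of nameplate
UK_NAMEPLATE_W = 1000 * PANEL_EFF  # ~1 kW/m^2 peak irradiance at the surface

HOURS_PER_YEAR = 8766

orbit_kwh = SOLAR_CONSTANT * PANEL_EFF * ORBIT_ILLUMINATION * HOURS_PER_YEAR / 1000
uk_kwh = UK_NAMEPLATE_W * UK_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"orbit: {orbit_kwh:,.0f} kWh/m^2/yr")
print(f"UK:    {uk_kwh:,.0f} kWh/m^2/yr")
print(f"ratio: ~{orbit_kwh / uk_kwh:.0f}x")
```

Call it roughly an order of magnitude more energy per square metre in orbit - before you account for launch costs, which is exactly where the critics start.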
Critics rightly ask: why not just build solar farms on Earth? Here's the real reason: sovereignty and speed. If you launch a satellite, nobody can mess with it. Building a solar farm in the UK takes 10 years of planning permission and environmental studies; by then everything's moved on.
China doesn't have this problem - they're coating deserts with panels. But in the West? Red tape everywhere. Blast a data centre into space and you cut through all that (once it's off the ground, at least…).
What about cooling? I'm seeing a lot of people saying that GPUs in space make sense because space is cold. Well… yeah… it is. BUT it's also a vacuum, and in a vacuum we lose two of the three heat-dissipation methods. Heat loss occurs via conduction, convection and radiation; the first two don't happen in a vacuum. So it's all on radiation! Doable, but not the solved engineering problem many assume.
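To put numbers on that, here's a rough Stefan-Boltzmann sizing of the radiator a single H100 needs in vacuum. The emissivity and radiator temperatures are assumptions, and I'm ignoring heat soaked up from the sun and Earth, which makes this optimistic:

```python
# How much radiator area does one H100 need in vacuum?
# Radiation is the only heat path: P = epsilon * sigma * A * (T^4 - T_env^4)
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.9         # assumed emissivity of the radiator coating
P_WASTE = 700.0       # W, roughly an H100 SXM's TDP
T_ENV = 3.0           # K, deep-space background (ignoring sun/Earth loading)

def radiator_area(t_radiator_k: float) -> float:
    """Area (m^2) needed to reject P_WASTE at a given radiator temperature."""
    return P_WASTE / (EPSILON * SIGMA * (t_radiator_k**4 - T_ENV**4))

for t in (300, 330, 360):  # plausible radiator temperatures in kelvin
    print(f"{t} K radiator: {radiator_area(t):.2f} m^2 per GPU")
```

A couple of square metres of radiator per 700 W GPU doesn't sound like much until you multiply by a data centre's worth of GPUs - at that point the radiators, not the solar panels, start to dominate the structure.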
Source: StarCloud announcement, Andrej's NanoGPT
🤦 Scientist Says Don't Cure Cancer Faster If It Means AI Takes Jobs
NeurIPS panel protest: "I do not think it worth it to find a cure for cancer faster if that means we can never do science again." Highly recommend reading the FULL discussion in the tweet below or the blog version.
Kyle's take: Julian Togelius stood up at a NeurIPS panel and said AI replacing scientists is "evil" because young researchers love their jobs.
When asked if he'd prefer the joy of doing science to finding a cure for cancer faster, he said yes - we'll cure cancer eventually, but it's not worth it if humans can't do science anymore.
To me, this is the ultimate selfishness. Try explaining to someone who's lost family to cancer that you'd rather it take decades longer so researchers keep their jobs. Look at AlphaFold - human teams were stuck at 30-40% protein-folding accuracy for a decade. DeepMind came in, hit 60% in 2018, then 90% in 2020, folded essentially every known protein (over 200 million structures), and released the data free. They solved the problem entirely - it's done. Using AI. That enables new drugs, plastic-eating enzymes, massive breakthroughs.
Would've taken humans decades more. We need to get out of our own way sometimes.
Source: Julian's Twitter thread
🛡️ Pentagon Forms AGI Committee - They're Taking This Seriously Now
Department of Defense mandated to create steering committee on artificial general intelligence. It's not "if" but "when" now.

Kyle's take: Pentagon leadership is forming a panel on the military implications of AGI. People are freaking out like "what does the Pentagon know?" but I think it's basically civil servants trying to get a handle on what's coming…
The worrying part is they're only just starting to think about this now. We'll probably have something approximating AGI in the next few years, not decades.
At least their Chief Digital and AI Officer, Dr. Doug Matty, has an actual technical background - a BS in computer engineering, an MS in applied math, and a doctorate from MIT. Unlike the UK, where we give AI roles to politicians who were doing agriculture last year…
It's a committee getting together to work out what to do - which is hard since we don't even know what AGI's capabilities will be. But the fact they're forming this committee suggests AGI is now being taken as a credible near-term reality, not sci-fi.
Source: DefenseScoop coverage
💧 The Real vs Fake AI Risks: Water Usage Is A Red Herring
An AI skeptic calls out fake reasons to hate AI when real ones exist. Stop with the water bottle videos. I agree!
Kyle's take: Why make up fake reasons to dislike AI when there are real ones? The fake problems: AI being "fascist"; water usage (those viral videos of people pouring out a bottle saying "that's one ChatGPT query" are nonsense); and energy use, which at the current margin is tiny (2-3% of global electricity).
The REAL problems he lists: existential risk, misuse for weapons/deepfakes, people getting lazier, massive job loss (anyone saying otherwise is burying their head in sand), brightest minds not working on other problems, gradual disempowerment, geopolitical risks with Taiwan making all GPUs.
The list goes on. In fact, Aaron's tweet acts as a good summary of the risks and is worth checking out, even if I don't agree with them all.
AI is different from previous tech because it's not just a tool - it can operate tools, making it a direct replacement for us.
But we can't discuss real issues when people are sharing TikTok-level fake problems about water bottles. Context matters. We use a TONNE of water on other things - agriculture takes roughly 80% of the Colorado River's water, with alfalfa among the biggest single crops. Fast fashion, meat production, golf courses in Saudi Arabia. And so on!
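For scale, here's the bottle-meme arithmetic under one widely cited assumption - Sam Altman has claimed roughly 0.3 mL of water per query, while independent estimates run higher once cooling and electricity generation are counted - so treat these numbers as illustrative:

```python
# Illustrative arithmetic on the "one bottle per ChatGPT query" meme.
# The per-query figure is an assumption (Altman has cited ~0.3 mL;
# independent estimates run higher once cooling and electricity
# generation are included).
ML_PER_QUERY = 0.3
BOTTLE_ML = 500

queries_per_bottle = BOTTLE_ML / ML_PER_QUERY
print(f"~{queries_per_bottle:,.0f} queries per 500 mL bottle")

# For comparison: a single quarter-pound beef burger is commonly
# estimated at ~1,700 litres of embedded water.
burger_litres = 1700
print(f"one burger ~= {burger_litres * 1000 / ML_PER_QUERY:,.0f} queries")
```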
If you're mad about AI water use but not these, you've already decided you hate AI and are just rationalising.
Source: Aaron's analysis
Member Questions:
"If I start a new ChatGPT chat, will it know what I was previously discussing?" About memory features
Kyle's response: Yes, if you have persistent memory turned on! Go to Settings > Personalisation > Manage Memory. You can allow it to reference saved memories, browser memories (things it looked up), and chat history.
This is why I use ChatGPT for personal stuff and Claude for work - ChatGPT knows about my business, personal life, travel plans, what video games I'm playing. When I start a new chat, it has all that context already. Claude has memory now too but it's not as fully fleshed out. If you don't want it remembering, use temporary chats - they don't appear in history or update memory.
"When will we get ASI?" About superintelligence timelines
Kyle's response: No idea, and anyone who says they do is making it up. Let's get to AGI first! The argument that ASI comes shortly after AGI is compelling though - once we have recursively improving AI that can work on itself, it'll probably figure out ASI very quickly. At least much faster than we could!
We won't know we've hit AGI until afterwards - we'll look back and say "oh yeah, it was January 2026." Humans are really good at moving goalposts. "AI will never play chess" - Deep Blue beats Kasparov. "OK but never Go" - AlphaGo beats Lee Sedol. "Fine but it can't drive" - Waymo exists. We keep redefining what's "purely human" every time AI does it.
"Is AGI overhyped?" About the hype
Kyle's response: No! If we get there, it's probably the most important technology humans have ever created. It will make the internet and computing seem like toys. The internet is basically training data for AGI.
There's overhype in the economic market - dumb money being thrown at anything with "AI" in the name. But the technology itself won't be overhyped. It's gonna be more than anything we could imagine and bring up really big questions about humanity and where we stand in the universe.
Kyle's Community Launch: The 5 Day AI Readiness Challenge is now open. Come to https://community.aiwithkyle.com/c/challenge/ to start.
Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.
