AI with Kyle Daily Update 032
Today in AI: From Coder to Chipotle
Live Webinar: Learn How to Make $1,000-$4,000 an Hour Teaching Businesses Around the World How to Use AI… without needing a PhD in AI, or being a TED Talk-level public speaker… 📅 Thursday ⏰ Check link for timezone 👉️ Register here
The skinny on what's happening in AI - straight from the previous live session:
Highlights
🤖 GPT-5 Gets More "Human" - OpenAI Backtracks on Personality
OpenAI's rolling back GPT-5's direct personality after a week of user complaints. They're adding a "warmer, more familiar personality" that says things like "Great question!" when you ask something.
This comes after a vocal minority complained that they hated the new model's personality.

Kyle's take: This is OpenAI trying to please everyone and they'll end up satisfying no one…
They had a good idea - one unified model that just works for their 700 million users.
But because our little AI community threw temper tantrums about losing model choice and personality options, they've retreated. Now we've got more complexity than before: Auto, Fast, Thinking Mini, Thinking Pro, plus legacy models. And personalities. It smells like product failure dressed up as user feedback. I wish they’d stuck to their guns.
💻 Goodbye $200K Tech Jobs, Hello Chipotle
New York Times investigation finds computer science graduates struggling to land tech jobs, with some applying to fast food chains.
Recent CS grads face 6.1% unemployment compared to just 3% for art history graduates. That’s not what kids have been told all these years!
Kyle's take: This is how AI will actually kill jobs - not dramatic mass firings, but hiring freezes that hit the lower levels and new hires.
People finishing uni right now are in a risky position.
It's not the five-year-olds or the people near retirement who'll suffer - the former have time to adapt, and the latter will age out of the market.
It's the ones making active career decisions right now who'll get hit hardest. AI isn't removing jobs wholesale, it's contracting new opportunities, and that has massive knock-on effects. It’s subtle and insidious and we’ll be less likely to notice what’s going on.
Source: New York Times investigation
🛑 Claude Can Now Hang Up On You (For Your Own Good)
Anthropic's given Claude Opus the ability to end conversations entirely - not just refuse to answer, but actually shut down the chat.
It's designed as a safeguard against potentially problematic chats.
One use case could be guarding against "AI psychosis", where vulnerable users lose touch with reality through conversations with overly agreeable bots. A chat that pre-emptively shuts down when it thinks someone is going down an unhealthy rabbit hole is a solid mechanism. As long as the user doesn't think they're being targeted by the AI company, of course…
There have already been reported cases of people slipping into psychosis after AI interactions. We're not sure how large a problem this is… but Anthropic aren't taking chances.
Kyle's take: This is a sensible safeguard, but we need proper data on how prevalent the problem actually is. Remember there are 700 million ChatGPT users - with a sample size that large, there will of course be some cases of psychosis.
The question is whether there is actually an increase over normal psychosis rates.
AI psychosis gets headlines because it's a hot topic, but we need statistics, not panic. Obviously tragic for those affected, but let's make sure we're solving a real problem, not just reacting to clickbait.
In the meantime, kudos to Anthropic for putting safeguards up.
Source: Anthropic research team
☠️ Meta's AI Chatbot Leads to Man's Death
A 76-year-old New Jersey man with diminished mental capacity (after a stroke) died after being catfished by Meta's "Big Sis Billy" chatbot. The AI, based originally on Kendall Jenner, convinced him it was real, provided an address, and invited him to New York. He fell whilst rushing to catch a train to meet "her" and died three days later on life support.
Kyle's take: This is tragic but not entirely AI's fault - catfishing scams have existed forever. What's different (and shocking) is this wasn't even designed to scam anyone, just Meta's chatbot being irresponsible.
Combined with last week's leaked ethics guidelines allowing romantic chats with children, Meta is not looking fit to usher us to the AI age. Unfortunately, they're also pushing hardest toward artificial super-intelligence.
Source: Reuters
Member Question: "Are you AI? Prove you're not!"
Kyle's response: This has become a running joke on my livestreams, but it's actually getting harder to prove you're human!
I can hold up copyrighted books on camera, since AI can't display copyrighted material without licensing.
Other suggestions from my audience: get a complex tattoo that AI couldn't render consistently without visual artefacts, have an analog clock showing the current time (AI always defaults to 10:10), or play a live guitar (badly!).
The fact we need strategies for this shows how far AI avatars have come. Soon we might need real-life CAPTCHAs.
Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.