AI with Kyle Daily Update 035

Today in AI: Is the AI Boom Over? 95% of AI Projects Are Failing, and "Seemingly Conscious AI"

The skinny on what's happening in AI - straight from the previous live session:

Highlights

📉 AI Stocks Tank + Zuckerberg's Hiring Spree Reversal?

US tech stocks took a battering yesterday - Nvidia dropped 3.5%, Palantir tanked 9.4% (good, ghouls). Confidence is waning - FUD is on the rise.

How does this track with Meta throwing around $250 million compensation packages to poach AI talent (one 24-year-old turned down $125 million because it wasn't enough)? Well… now they're quietly restructuring their AI division and looking at downsizing, only weeks after that massive spending spree.

Kyle's take: Investment bubble? Absolutely. Technology bubble? Not a chance.

This is classic market psychology - greed swinging to fear. And the journalists are practically gleeful about this correction.

The technology's still brilliant, but companies are throwing money around without proper strategy. Including, apparently, Zuck.

Remember: even if AI stopped improving right now, we'd still spend years learning to use what we've got. Think internet in the 90s - took ages before we built proper e-commerce and social media on top of it. Same principle applies here.

The financial correction is healthy; it'll strip away the nonsense and leave us with businesses that actually work.

Source: Financial Times 

🎯 95% of AI Projects Are Failing (But Hold On)

An MIT study surveyed 300 AI deployments and found that 95% deliver no measurable impact. Only 5% achieve "rapid revenue acceleration." Sounds disastrous, right?

Kyle's take: This isn't as dramatic as the headline suggests. And very few people have actually read the paper. It’s not easy to get access to!

Here it is! MIT Paper. I figured I’d make it easier.

First, we can prototype and pilot AI projects ridiculously quickly now - I can spin up a working prototype in Lovable within hours. That means we're also ditching failures faster, which is actually healthy. Previously, you'd have meetings, Scrum workflows, multiple teams just to get a pilot off the ground. Now you can test an idea and bin it by teatime if it's rubbish.

Second, we're comparing apples to oranges - how many non-AI business projects succeed? I did a little digging and the adoption rate of projects (it's a wide term here!) is around 20-30%. So before AI, 70-80% of projects "failed". Sure, that's less than the 95%, but remember that AI is still very experimental for organisations.

Third, the really big takeaway of the paper is that 90%+ of workers are using AI at work even though only 40% of companies have made it available. Huh. It's shadow IT. It's also a huge opportunity for companies if they're smart here. Instead of running top-down pilot projects (which seem to be failing!), find out how your employees are already using AI and build from the grassroots upwards.

🗣️ Eight Seconds Gives Woman Her Voice Back

Brilliant story from the BBC: Sarah Ezekiel lost her voice to motor neurone disease 25 years ago and has been speaking through a robotic voice ever since. Her children had never heard her real voice. But AI has now recreated her natural speaking voice from just eight seconds of scratchy VHS audio, and she can control it with eye-tracking technology.

Kyle's take: This is the stuff we often forget about AI. Forget ChatGPT writing your emails - this is life-changing technology. The fact they could rebuild a human voice from eight seconds of audio that was probably recorded decades ago is nothing short of miraculous.

Also, more interestingly, the coverage hasn't gone out of its way to highlight that this is an AI breakthrough. AI is becoming normalised in scientific breakthroughs rather than sensationalised. As well it should be - it's another tool in a researcher's arsenal.

Source: BBC News

🧠 "Seemingly Conscious AI"

Mustafa Suleyman (Microsoft AI boss) just dropped an essay coining "seemingly conscious AI", or SCAI. He's talking about AI that has all the hallmarks of consciousness - talking to it would be indistinguishable from talking to a conscious being, but internally it's blank. Think philosophical zombie, but with silicon.

Kyle's take: This is (in part) a reaction to the GPT-4o debacle where people got genuinely attached to the AI and were devastated when OpenAI yanked it away.

Suleyman's spot on - we need to build AI for people, not to be people. The problem is we're hardwired to anthropomorphise things that seem human-like. When an AI remembers your birthday and asks how your mum's doing, your brain treats it as a relationship even though it's just pattern matching. We’re just wired that way.

The scary bit? AI psychosis is already happening. Some people are losing touch with reality because their AI companion feels more real than actual humans.

Highly recommend giving the essay a full read.

Member Question from Alexander: "How can I sell AI automations to mid-size and small businesses? Which industry is best to target?"

Kyle's response: Start with whatever industry you already know. Don't chase sectors just because they're hot. If you've worked in manufacturing for ten years, start there - you'll understand their actual problems.

The key isn't throwing AI at random problems; it's finding real business problems that AI can actually solve. Some problems need human solutions, some need basic automation, some need proper code. AI isn't always the answer.

Find the problem first, solve it for a few people (maybe for free initially), build up case studies, then scale to similar businesses. But you've got to understand the industry pain points before you can fix them with any technology.

This question was discussed at [18:15] during the live session.

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube and TikTok (follow and turn on live notifications so you don't miss it).

Audio Podcast on iTunes and Spotify.