AI with Kyle Daily Update 081

Today in AI: Cursor 2.0 + AI shows positive ROI

The skinny on what's happening in AI - straight from the previous live session:

Highlights

🚀 Cursor 2.0 Drops: Agent Interface + Own Frontier Model

Cursor released version 2.0 with a multi-agent interface (up to 8 concurrent agents), their own frontier model "Composer" (4x faster, under 30 seconds), a built-in browser for localhost, and voice mode.

Kyle's take: So cool - they're bridging the gap between intimidating VS Code and accessible tools like Lovable. The new agent interface is chat-based and looks more like Lovable than the old code-editor view.

And Cursor has its own model, "Composer". It's frontier-level, trained specifically for coding with codebase-wide semantic search. I've given it a run or two and so far it's solid!

Also very exciting is the addition of voice mode. There were plugins we could use before, but having native voice is fantastic. It seems basic, but voice mode is the essence of vibe coding: just talking to your AI, not even typing.

If you're new, still start with Lovable - it's less frustrating and gets you hooked. But once you're comfortable, Cursor is incredible. This update makes the graduation from Lovable to Cursor happen much earlier.

💼 Wharton Study: 74% See Positive ROI (MIT Was Wrong)

A new Wharton study shows 74% of enterprises seeing positive AI returns, with 82% using it weekly and 46% daily. Tech/telecom leads with 88% seeing positive ROI, and banking/finance follows at 83%.

Kyle's take: Remember that MIT "study" claiming 95% of AI projects fail? Total nonsense. It only looked at top-down flagship projects and ignored grassroots usage. Only 40% of companies provided AI to staff, but 90% of staff were using it anyway - they got their own! When AI is used at the grassroots level for daily tasks, it succeeds 83% of the time.

The whole thing stunk. And it didn't align with what we're seeing day to day - the massive productivity gains you and I are making with AI.

This new Wharton study confirms what we already knew - AI works when you let people figure out how to use it for their actual jobs, not when leadership decides on some massive flagship CRM overhaul. The real ROI is in document summarisation (70%), report creation (68%), data analysis (73%). Not sexy flagship projects but everyday boring stuff. Train your staff, give them tools, let them work out where it's useful and fits into their workflows.

🧠 Anthropic Finds "Signs of Introspection" in Claude

Anthropic research shows Claude demonstrating genuine (though limited) introspective capabilities - it can recognise its own internal states rather than just confabulating plausible answers.

Kyle's take: This is worth listening to because it's Anthropic. If OpenAI said this, I'd be wary. But Anthropic are pretty conservative; they don't hype.

Anthropic tested a technique called concept injection to see if language models can “sense” their own internal states. They first identified the neural pattern linked to a specific idea—like “shouting”—and then secretly activated that pattern inside the model without changing the input text. If the model could introspect, it might report something unusual, like feeling more intense or emotional. In practice, advanced models like Claude 4 sometimes detected the injected concept, suggesting a basic form of self-monitoring, though it’s still inconsistent and far from genuine self-awareness.
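For the curious, here's a toy sketch of the concept-injection idea. This is purely my own illustration with made-up numbers, not Anthropic's actual method or code: the "concept vector" is derived as a difference of mean activations, "injection" is just adding that vector to a fresh activation, and "introspection" is reduced to projecting onto the concept direction.

```python
# Toy illustration of concept injection (an assumption-laden sketch,
# not Anthropic's real technique or code).

def mean_vec(vectors):
    # Element-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add_scaled(a, b, alpha):
    # "Inject" concept vector b into activation a with strength alpha.
    return [x + alpha * y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Pretend these are hidden-layer activations recorded while a model read
# "shouting" texts vs neutral texts (fabricated toy numbers).
shouting_acts = [[2.0, 0.1, 1.5], [1.8, 0.0, 1.6]]
neutral_acts  = [[0.2, 0.1, 0.3], [0.1, 0.2, 0.2]]

# Step 1: identify the neural pattern linked to the concept.
concept = sub(mean_vec(shouting_acts), mean_vec(neutral_acts))

# Step 2: secretly activate that pattern, without changing the input.
baseline = [0.2, 0.15, 0.25]  # a fresh, neutral activation
injected = add_scaled(baseline, concept, alpha=1.0)

# Step 3: "introspect" - does the activation lean toward the concept?
def detects_concept(activation, concept, threshold=1.0):
    return dot(activation, concept) > threshold

print(detects_concept(baseline, concept))  # prints False
print(detects_concept(injected, concept))  # prints True
```

In the real research the model itself has to *report* the anomaly in natural language, which is what makes the result interesting; this sketch only shows the mechanical injection-and-detection half of the setup.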

BUT - and this is crucial - Anthropic have no idea HOW it works. Just some educated guesses.

It's emergent behaviour they don't understand. Like the human brain - inputs go in, something happens, outputs come out. We don't know the mechanics. Does this mean AI consciousness? They explicitly say NO. But remember we don't even have a solid definition of human consciousness after millennia of philosophy and religion arguing about it. How can we judge AI consciousness when we can't define our own?

🎬 Sora Opens Registration (Limited Time) - US, Canada, Japan, Korea Only

OpenAI temporarily opened Sora registration without invite codes for users in US, Canada, Japan, and Korea. Limited time offer with no specified end date.

Member Question: "Can AI run a government?"

Kyle's response: Nope! Maybe in the future, but not right now. It can HELP governments though. Albania's interesting - they have an AI minister called Diella for anti-corruption in procurement. Unfortunately the PM keeps anthropomorphising her, saying she's "having 83 children" (agent instances across departments). Very sensational, but it's an interesting experiment. We'll see lots more AI integrated into government, probably without the face and name. Albania's just doing it earlier. The hype doesn't help - the PM really needs to temper it down, but that's part and parcel of AI now.

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube (with screen share) and TikTok (follow and turn on live notifications).

Audio Podcast on iTunes and Spotify.