AI with Kyle Daily Update 041

Today in AI: China making AI moves + Will Smith's AI blunder

Final Live Webinar: Learn How to Make $1,000-$4,000 an Hour Teaching Businesses Around the World How to Use AI.

📅 Thursday (check link for timezone) 🎟️ Free

The skinny on what's happening in AI - straight from the previous live session:

Highlights

🔥 China Makes AI Mandatory in Schools

China has officially launched mandatory AI education for all schoolchildren from age six onwards. Starting Monday 1st September, every primary and secondary school in China must provide at least 10 hours of AI instruction per year. Here’s a breakdown of the curriculum:

- Ages 6-7: basic AI applications
- Middle school: full AI workflow training (data preparation to model inference)
- High school: building actual AI systems

Kyle's take: This is amazing and frankly, it shows how far behind we are in the West. While we're still faffing about with whether AI is good or bad, China has 6-year-olds learning to use AI tools and 11-year-olds studying neural networks. That's hundreds of millions of kids who'll grow up native with this technology.

The business implications are massive - in 10-15 years, Chinese companies will have a workforce that doesn't see AI as new tech, but as basic literacy.

Source: SCMP

🎬 Will Smith's AI Video Disaster Goes Viral

Will Smith released what appears to be an AI-generated tour announcement video that's spectacularly bad. Like…even by bad AI standards…

The video features smeared faces in crowds, people glitching through railings, and text that makes no sense.

Kyle's take: This is either the most tone-deaf celebrity AI usage I've ever seen, or Will Smith is having a laugh and I'm missing the joke. If he added himself eating spaghetti at least I’d know it’s tongue in cheek…

🏛️ China Enforces AI Content Watermarking Laws

New Chinese regulations requiring all AI-generated content to be explicitly and implicitly labelled came into effect yesterday. Major platforms including WeChat, Weibo, and Douyin have implemented features to comply with the law, which mandates both visible markers and embedded metadata for AI-created text, images, audio, and video.

Kyle's take: The intent here is spot on - we need ways to identify AI content to combat misinformation and fraud.

But (and it’s a big but…) the people who actually want to spread disinformation are the ones who'll spend time removing these markers. Normal users get used to seeing AI content properly labelled, which ironically makes unmarked AI content seem more trustworthy.

It's a classic regulation problem - you're mainly restricting the law-abiding folks while the bad actors find workarounds. Still, credit to China for actually doing something (anything!) rather than just talking about it like we do in the West.

Source: SCMP

Member Question from Emin: "Is it easy to add AI voice command from ChatGPT into an app? I want people to be able to edit content within an app with their voice."

Kyle's response: Yes, absolutely doable using OpenAI's API, but the cost is going to be your main hurdle.

I'd recommend checking out artificialanalysis.ai - they have a speech-to-text leaderboard comparing word error rates, speed, and pricing across different models.
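To make the idea concrete, here's a minimal sketch of the pipeline Kyle describes: speech comes in, a speech-to-text model turns it into a transcript, and the app applies the spoken command to its content. The transcription step is stubbed out (you'd wire it to your chosen provider, e.g. OpenAI's speech-to-text endpoint), and the `replace X with Y` command grammar is a hypothetical example, not part of any API:

```python
import re

def transcribe(audio_path: str) -> str:
    """Turn an audio file into text via a speech-to-text API.

    Stubbed here -- in a real app you'd call your provider's SDK,
    e.g. OpenAI's client.audio.transcriptions.create(...).
    """
    raise NotImplementedError("wire up your chosen speech-to-text provider")

def apply_voice_edit(command: str, content: str) -> str:
    """Apply a spoken edit command to the app's content.

    The grammar below ('replace X with Y', 'delete X') is a made-up
    illustration of the post-transcription step, not an OpenAI feature.
    """
    m = re.match(r"replace (.+) with (.+)", command.strip(), re.IGNORECASE)
    if m:
        return content.replace(m.group(1), m.group(2))
    m = re.match(r"delete (.+)", command.strip(), re.IGNORECASE)
    if m:
        return content.replace(m.group(1), "")
    return content  # unrecognized command: leave content unchanged

# Example: a transcript from the speech model drives an in-app edit
print(apply_voice_edit("replace draft with final", "My draft document"))
# -> "My final document"
```

The cost point matters because every voice interaction is a metered API call, so comparing per-minute pricing on that leaderboard before committing to a model is worth the hour it takes.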

Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.

Streaming on YouTube (with screen share) and TikTok (follow and turn on Live notifications).

Audio Podcast on iTunes and Spotify.