AI with Kyle Daily Update 139
Today in AI: "Something Big Is Happening"
What’s happening in the world of AI:
Highlights
"Something Big Is Happening" — The Essay Everyone Needs to Read
An article dropped a few days ago from Matt Shumer, CEO of HyperWrite and OthersideAI, and it has absolutely exploded. When I first saw it, the tweet had a couple of million views. It's now at 71 million. Nikita, the head of product at X, commented saying "well done, you've changed the world with a single article." That might be hyperbole, but the piece has clearly struck a nerve, and I think it deserves a proper walkthrough.
The article is called "Something Big Is Happening" and it's long. Very long. But it’s important.
I spent most of the live going through it section by section with my own commentary, because I think it's one of the most important things written about AI for a general audience. The full article is available on Matt's site and has since been picked up by Fortune, Inc., and dozens of other outlets.
I'm going to run through the key arguments and give you my take on each. All credit to Matt Shumer for the article itself.
The COVID Comparison
Matt opens by asking you to think back to February 2020. A virus was spreading overseas, but most people weren't paying attention. The stock market was fine, kids were in school, life was normal. Then over three weeks, the entire world changed.
His argument: we're in the "this seems overblown" phase of something much bigger than COVID.
I have this exact same feeling whenever I talk to people outside our AI sphere. At parties, they'll say things like "oh, I don't really use AI" or "I tried it a couple of years ago, it wasn't that good" or, my favourite, "it can never do what I do."
They'll agree AI can do what other people do, but not them. They're a special petal.
A beautiful flower.
I don't know if that's ignorance or arrogance, but it's going to bite people. In polite company, I don't push it. Matt's been doing the same thing, giving the cocktail party version. But as he says, the gap between what insiders are saying privately and what the public perceives has become dangerously large.
"I Am No Longer Needed for the Technical Work of My Job"
Matt describes his current workflow: he tells the AI what he wants built in plain English, walks away for four hours, and comes back to find the finished product. Not a rough draft. The finished thing. No corrections needed.
This has been the criticism thrown at vibe coding (or “agentic engineering”, as we're apparently now calling it) since the term was coined about a year ago. People kept saying "sure, it can write some lines of code, but it can't do architecture" or "it can't decide what good software looks like." The bar kept rising. And AI kept clearing it. Each time, the coders fell back to the next position: "well sure, it can do that, but it can't do this." Six months later, it could do that too.
This is not exclusive to coding. There's something called the jagged frontier in AI, where progress happens at different rates across different sectors. Coding is advancing fastest because the people building AI are coders and programmers. They're optimising for their own domain. That doesn't mean other industries are safe. It means they're next.
As soon as someone with domain experience in law, or accounting, or medicine combines that knowledge with the current tools, those industries will get swept through. If you're not a coder, you should be watching what's happening in software development as a canary in the coal mine for your own industry.
The Models That Changed Everything
Matt points to February 5th, 2026, when GPT-5.3 Codex and Opus 4.6 dropped on the same day, as the moment something clicked. He describes it as realising the water has been rising around you and now it's at your chest.
I had the exact same experience. Normally when a new model drops, it's "oh yeah, cool, it's fine." The jump from 5.0 to 5.1, or 5.1 to 5.2, felt incremental. 5.3 Codex was different. Opus 4.5 to 4.6 was different. Something shifted. I covered this in my newsletter and lives at the time with a distinct sense of unease.
Ethan Mollick has this concept of the "three sleepless nights." Once you properly grapple with what AI means for you, your job, your family, your community, the economic systems around you, you'll have three nights lying awake going "oh God." If you haven't had those three sleepless nights, you probably haven't thought about this enough. I had a fourth sleepless night last week after those model releases.
Matt makes the point that the plateau debate is over. Anyone still arguing AI has hit a wall either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on experiences from 2024 that are no longer remotely relevant.
AI Is Building AI
This is the part I find most significant. When OpenAI released GPT-5.3 Codex, the technical documentation included this: "GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
The AI helped build itself. The Codex team is still involved, but the question becomes: will they be needed next time? Could Codex 5.3 be set on the task of creating 5.4 with minimal human intervention?
This is exactly what Geoffrey Hinton has been warning about: recursively self-improving AI. Once the models start meaningfully contributing to their own development, progress compounds. Dario Amodei says we may be one or two years from the point where the current generation autonomously builds the next. Anthropic demonstrated this with the C compiler project: 16 agents, a 200-word prompt, nearly 2,000 Claude Code sessions, $20,000 in API costs, two weeks of near-autonomous operation, and a 100,000-line compiler that can build the Linux kernel. That's where we are. Now.
The Free Tier Problem
Matt makes an important point about the growing gap between what free users see and what paying users experience. The free version of ChatGPT is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating smartphones by using a flip phone.
Hilariously, there’s a tweet going viral at the moment from a guy laughing at anyone who pays for ChatGPT. When asked whether he’d actually used the modern paid models, the answer was nope. Gulp.
I use Opus 4.6 and Codex 5.3 on the top plans. I have a $200/month Anthropic plan connected to OpenClaw, Claude Code, and Cursor. That's why I can produce the volume of work I do. There is NO possible way I could be as productive as I am without Claude. End of.
If you're using free ChatGPT and wondering what the fuss is about, you're looking at a fundamentally different product from what's actually available.
What's Coming for Your Job
Dario Amodei, probably the most safety-focused CEO in AI, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. Many in the industry think he's being conservative.
Matt's key insight is that AI isn't replacing one specific skill. It's a general substitute for cognitive work. That’s new. When factories automated, displaced workers retrained as office workers. When the internet disrupted retail, workers moved into logistics. AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.
I shut down my digital marketing agency when GPT-3.5 came out. I used it, thought "it's a bit crap, but it's going to get better," and saw the writing on the wall for marketing copy, ad campaigns, and sales pages. I'm glad I made that jump. I’ve talked to my peers from that time and they are suffering now. And it won’t get easier.
And it's not just about your industry in isolation. Even if your specific role is safe, if other industries are collapsing and people are losing incomes, there'll be less money flowing into your industry too. I spoke to someone who's quite smug about their physiotherapy practice being AI-proof. The physical manipulation, sure. But what happens when your patients can't afford PT sessions because their jobs disappeared?
There are second-order effects at play here, which means no one will be unaffected.
"Judgment" and "Taste" Are No Longer a Moat
For a while, the defence was: AI can handle grunt work, but it can't replace human judgment, creativity, or strategic thinking. Matt says he used to say this too. He's not sure he believes it anymore.
I had this experience running my Meta ads campaign. I set up 90 video variations and told the AI I wanted to test them all against each other. It said no. It told me my budget wasn't high enough to test that many variations simultaneously, that I'd choke out the ad sets. Instead, it gave me a structured testing schedule: test hooks first, then body content, then full variations. It pushed back on me and told me the better approach.
That's “judgment”.
The AI acted as an advertising expert and told me I was being a dumbass. That did NOT use to happen. That’s new.
What Should You Actually Do?
Start using AI seriously. Not as a search engine. Sign up for the paid version of Claude or ChatGPT. The $20/month plan is a start, though you'll hit Opus 4.6 limits quickly. If you can afford $100/month, it's worth it. Make sure you switch to the best model available rather than whatever the default is.
Push it into your actual work. If you're a lawyer, feed in a full contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build a model. Don't just ask quick questions. That's the mistake everyone makes. They treat it like Google and then wonder what the fuss is about.
If your company is blocking AI, leave. I don't say this lightly. If your employer is banning or actively resisting AI adoption, that company will not exist in its current form within a few years. It's going to get crushed by competitors using these tools natively. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is the most valuable person in the room. Right now, not eventually.
Don't assume it can't do something. Try it. I get people asking me constantly "can AI do this?" and my answer is always: I don't know, try it. Stick it in. See what happens. And remember, if it even kind of works now, it'll probably do it perfectly in six months.
As Matt says, this might be the most important year of your career. The window where being early gives you a massive advantage won't stay open long.
This is why I built AI with Kyle - to help people get on the right side of this!
