
AI with Kyle, Daily Update 149. Today in AI: Pentagon Victory? Kyle Balmer, February 27, 2026. What's happening in the world of AI: https://youtu.be/6JlRAhnaoAk


The skinny on what's happening in AI - straight from the previous live session:
The GPT-5 backlash was so fierce that OpenAI has quietly brought back GPT-4o as a "legacy model." You can now access it in settings if you're on a paid plan. Turns out nuking all the old models without warning wasn't the brilliant move they thought it was…

Kyle's take: This is pretty embarrassing for OpenAI. They made this big song and dance about simplifying everything with one unified model, then immediately caved when Reddit and Twitter went mental.
But the real story is Sam Altman's damage control - he put out his longest tweet ever about "attachment" to AI models. Turns out people weren't just upset about functionality - they'd formed emotional bonds with 4o as a companion, therapist, friend.
4o supported people. GPT-5 isn’t here to play. It’ll call out your BS. Which makes it great for coding and business use but not so much as a supportive companion who now tells you to pull yourself together!
Sam's basically admitting they didn't expect people to fall in love with their AI, and now they're scrambling to figure out the ethics of that. The real issue isn't the model quality - it's that they accidentally created digital relationships and then killed them overnight.
I highly recommend reading Altman’s statement on this. He doesn’t usually comment at such length - this is important for him. Here’s the tweet:
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly
— Sam Altman (@sama)
12:37 AM • Aug 11, 2025
Here's what's actually "wrong" with GPT-5, setting aside the debate over 4o being more supportive. Under the hood, there are multiple sub-models: GPT-5 high, medium, low, and minimal.
The system decides which one handles your query, but the routing is completely broken. Sometimes you get PhD-level responses, sometimes you get something barely better than Llama 4.
Why? It's cheaper for OpenAI. But at what cost to their reputation?

Kyle's take: This kinda explains most people’s problem with the new GPT-5. You've got no transparency into which model is answering your question. One minute it's brilliant, the next it's useless, and you can't predict which you'll get.
It's like having a team of assistants where one's a genius and another's an intern, but you never know who's going to answer the phone. This makes the user experience very frustrating.
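To make the routing idea concrete, here's a minimal sketch of a tiered model router. Everything in it is hypothetical: OpenAI hasn't published how its router actually works, so the tier names, relative costs, and the complexity heuristic below are invented purely for illustration of the pattern being described.

```python
# Hypothetical sketch of a cost-tiered model router, loosely inspired by
# the GPT-5 "high/medium/low/minimal" setup described above. The tier
# list, cost figures, and complexity heuristic are all invented.

# Cheapest tiers first: the router walks up the list until a tier's
# capability ceiling covers the query's estimated complexity.
TIERS = [
    ("gpt-5-minimal", 0.25, 1),   # (name, relative cost, max complexity)
    ("gpt-5-low",     0.50, 2),
    ("gpt-5-medium",  1.00, 3),
    ("gpt-5-high",    4.00, 4),
]

def estimate_complexity(query: str) -> int:
    """Crude stand-in for whatever signal a real router would use."""
    score = 1
    if len(query.split()) > 30:
        score += 1
    if any(k in query.lower() for k in ("prove", "derive", "debug", "plan")):
        score += 1
    if "step by step" in query.lower():
        score += 1
    return min(score, 4)

def route(query: str) -> str:
    """Pick the cheapest tier whose ceiling covers the estimated need."""
    needed = estimate_complexity(query)
    for name, cost, ceiling in TIERS:
        if ceiling >= needed:
            return name
    return TIERS[-1][0]
```

The frustration Kyle describes falls out of exactly this structure: the user only sees "GPT-5", never which tier `route()` picked, so a trivial greeting and a hard debugging request can land on wildly different models with no visible explanation.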
NVIDIA and AMD have agreed to pay 15% of their China H20 chip sales directly to the US government. Let's call this what it is - a bribe. Trump said these chips can't go to China for "national security," but apparently that security concern disappears if you pay the government enough money. It’s plain ol’ racketeering - Al Capone would have been proud.
Kyle's take: This is scandalous. Especially because it’s going ahead with (seemingly) little opposition.
Either you have a national security problem or you don't - paying 15% of revenue doesn't magically eliminate security risks. It's like customs officials taking a cut to let dodgy shipments through.
Jensen Huang wants that Chinese market so badly he's literally paying protection money to the US government. He’s being shaken down. And he paid up.
Now watch when Trump cranks that % up further down the line… following the protection racket playbook.
Source: BBC
Microsoft announced GPT-5 integration across all their products, and Elon Musk jumped in with "OpenAI is going to eat Microsoft alive." Satya Nadella fired back beautifully: "People have been trying to eat Microsoft alive for 50 years. Excited for Grok 4 on Azure and looking forward to Grok 5." BEAUTIFUL, especially considering Grok is actually hosted on Microsoft's servers.
OpenAI is going to eat Microsoft alive
— Elon Musk (@elonmusk)
5:34 PM • Aug 7, 2025
Kyle's take: So… Elon's got a point! Microsoft is betting their entire AI strategy on a scrappy startup they don't actually own. Yes, they're entitled to 49% of OpenAI's profits (capped at roughly 10x their $13 billion investment), but they're essentially letting OpenAI infiltrate every part of their ecosystem. It's at the core of their strategy now, and allowing that without owning the tech introduces real risk.
If that relationship goes sour, Microsoft could get hollowed out. It's a risky position for such a massive company to be in.
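A quick back-of-envelope on those deal terms helps size the relationship. The figures here are the ones reported in the text above (a $13 billion investment, a 49% profit share, a cap at 10x the investment), not official contract terms:

```python
# Back-of-envelope arithmetic on the reported Microsoft/OpenAI terms.
# All figures are as reported above, not official deal documents.

investment = 13e9          # Microsoft's reported investment, USD
profit_share = 0.49        # reported share of OpenAI profits
cap_multiple = 10          # reported cap: 10x the investment

cap = investment * cap_multiple          # most Microsoft can take out
print(f"Payout cap: ${cap / 1e9:.0f}B")  # $130B

# Total OpenAI profit needed before the cap is exhausted:
profit_to_hit_cap = cap / profit_share
print(f"OpenAI profit to exhaust the cap: ${profit_to_hit_cap / 1e9:.0f}B")
```

Under those reported terms, OpenAI would need to generate on the order of $265 billion in profit before Microsoft's share maxes out, which puts Kyle's point in perspective: for a company that is currently burning cash, the cap is effectively theoretical, while the dependency is very real today.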
Member Question from TT: "Is the risk larger for OpenAI or Microsoft in their partnership?"
Kyle's response: Microsoft has massive cash reserves - they can weather problems and potentially pivot to Anthropic or even build their own models. OpenAI doesn't have that luxury. They're burning cash and years away from profitability. Microsoft could survive losing OpenAI; I'm not sure OpenAI could survive losing Microsoft. The aggressor is usually more vulnerable than the established player with deep pockets.
This question was discussed at [46:00] during yesterday's live session.
Want the full unfiltered discussion? Join me tomorrow for the daily AI news live stream where we dig into the stories and you can ask questions directly.
Make us a Preferred Source on Google and catch more of our coverage in your feeds.