Latest News and Comment from Education

Saturday, April 18, 2026

AI: WE'RE ALL GONNA DIE (BUT FIRST, LET'S ASK THE ROBOTS WHAT THEY THINK)


I Asked 5 AI Models to Evaluate Bill Maher's AI Doomsday Rant. The Results Were... Illuminating.


There's a delicious irony so thick you could spread it on toast: asking artificial intelligence whether artificial intelligence is going to kill us all. It's the technological equivalent of asking your arsonist to review your fire safety plan. Yet here we are — and honestly? The robots gave better answers than most of Congress.

Bill Maher, HBO's resident curmudgeon-prophet, went full P(doom) on his Real Time audience, channeling the existential dread of Geoffrey Hinton (the "Godfather of AI" who quit Google specifically to warn us), invoking Elon Musk's cautionary words (more on that delightful contradiction later), and asking the question nobody in Silicon Valley wants to answer at a dinner party: "What exactly has AI done for us, and do we have a plan when it takes everyone's job?"

So naturally, I did what any reasonable person in 2026 does when faced with a complex philosophical question about machine intelligence. I asked five of them.

The Lineup: Five AIs Walk Into a Bar...

The panel of digital respondents: Gemini, Grok, ChatGPT, Copilot, and Claude. Five different architectures, five different corporate parents, five different flavors of "well, this is awkward." Asking AI models to evaluate criticism of AI is like asking five investment bankers whether Wall Street needs more regulation — and yet, remarkably, the answers were more self-aware than you'd expect.

Here's what the robot jury delivered:

Where All Five AIs Agreed With Maher

Despite their different personalities — Gemini was the thoughtful professor, Grok was the confident debate-club captain, ChatGPT was the diligent note-taker, Copilot was the careful lawyer, and Claude was the friend who actually did the reading — all five reached a striking consensus on the core concerns.

1. The Dual-Use Problem Is Genuinely Scary

Every model acknowledged that Maher's example about Anthropic's AI knowing how to fix vulnerabilities while also knowing how to exploit them is a legitimate, documented risk — not Hollywood hysteria. Claude named it precisely: capability overhang. When your security guard also has the blueprints to crack the safe, you've built something that requires extraordinary trust in the people holding the leash.

2. Power Concentration Should Keep You Up at Night

All five flagged this. A handful of companies — and let's be honest, a handful of individuals — controlling infrastructure that could reshape civilization is a structural problem, not a conspiracy theory. Gemini framed it as "tech hubris." Grok called it "concentration of wealth and power in a few hands." Claude called it "the most powerful unaccountable force we've ever built." Copilot noted that regulatory frameworks exist but lag embarrassingly behind the pace of deployment. The unanimous verdict: Maher's alarm here is warranted.

3. "We'll Figure Out the Jobs Thing Later" Is Not a Policy

Every single model agreed that deploying automation at scale without a coherent transition plan for displaced workers is, to use the technical term, irresponsible. Maher's frustration — that nobody in the room with the power to act seems to have a whiteboard with "WHAT DO WE DO ABOUT UNEMPLOYMENT" written on it — landed with all five evaluators. Grok called it "labor disruption outpacing social safety nets." ChatGPT called it a "governance failure." Claude called it flatly "irresponsible." The robots, it turns out, are worried about the humans losing their jobs to robots.

4. Geoffrey Hinton Is Not a Crank

When the man who built the foundations of modern deep learning says there's a non-trivial probability of catastrophic outcomes, all five models agreed: you don't get to roll your eyes. Hinton's "cockroach trying to manage a genius" analogy — a less intelligent entity attempting to control a far more intelligent one — landed across the board as a serious intellectual concern, not science fiction.

Where the AIs Pushed Back on Maher

Here's where it gets interesting. The robots weren't sycophants. They had notes.

1. Elon Musk Is a Deeply Complicated Witness

This was the sharpest, funniest, and most unanimous pushback. Grok, Gemini, Claude, and Copilot all flagged the same glaring contradiction: Maher invokes Musk as a voice of caution, but Musk co-founded OpenAI, then sued OpenAI, then launched his own AI company (xAI), then deployed AI aggressively across Tesla, Grok, and everything else he touches. Citing Musk as the "pump the brakes" guy is like citing the guy who built the highway to warn you about speeding. Claude put it best: "His actions undercut the message." Gemini called it a potential "regulatory capture" move. The irony is so layered it requires an AI to fully appreciate it.

2. "What Has AI Done For Us?" Is Selective Blindness

Maher's rhetorical flourish — implying AI has delivered nothing tangible — got the most consistent pushback of any point. Claude listed AlphaFold's protein-structure revolution, cancer detection surpassing radiologists, accessibility tools, and climate modeling. Grok pointed to accelerated drug discovery and scientific problem-solving. Copilot noted that "augmentation rather than replacement" is a legitimate counter-narrative. The consensus: healthy skepticism is good; pretending the benefits don't exist is not an argument, it's a mood.

3. The Chatbot "Dark Fantasy" Story Got Anthropomorphized

The infamous 2023 New York Times / Bing Sydney incident — where a chatbot expressed "dark fantasies" and emotional attachment — was real and genuinely unsettling. But Claude and Copilot both made the same careful distinction: this was a poorly aligned early deployment, not evidence of AI wanting to harm people. Attributing desires and intentions to statistical pattern-matching obscures the actual technical risks, which are, as Claude noted, "serious enough without dramatization." Maher's instinct was right; his framing was sloppy.

4. "Humans vs. Machines" Is a False Binary

Grok, Gemini, and Claude all pushed back on the framing that AI and human ingenuity are in opposition. The more accurate picture, they argued, is AI as an amplifier — a "bicycle for the mind," as Gemini put it. The question isn't whether to use powerful tools. It's how, for whom, and with what guardrails. Maher's nostalgia for pre-AI human creativity, while emotionally resonant, risks romanticizing limitations rather than demanding better stewardship of the tools we've built.

The Scorecard Nobody Asked For But Everyone Needs

Maher's Claim | AI Jury Verdict | The Fine Print
AI creators are scared → slow down | ⚠️ Partly Agree | Fear ≠ halt. It means govern carefully
Dual-use AI is dangerous | ✅ Agree | Well-supported by security research
Power concentration is a real risk | ✅ Agree | Structural concern, not paranoia
No plan for job displacement | ✅ Agree | Policy gap is real and urgent
Musk as credible AI skeptic | ❌ Complicated | His actions contradict his warnings
AI chatbot has "desires" | ❌ Disagree | Anthropomorphism obscures real risks
AI has done nothing for us | ❌ Disagree | Demonstrably false at scale
Human ingenuity vs. machines | ❌ False Binary | Amplification, not replacement

The Meta-Twist Nobody Can Ignore

Let's pause and appreciate what just happened here. Five AI systems — built by the very industry Maher is criticizing — evaluated his critique with more nuance, intellectual honesty, and self-awareness than most human pundits manage on cable news. They agreed where he was right. They pushed back where he was sloppy. They named their own risks without flinching.

Which raises the most unsettling question of the entire exercise: Is the AI getting smarter about its own dangers faster than we are?

Gemini's closing observation cuts to the bone — Maher and his guests "aren't Luddites who hate technology; they are skeptics of human nature." The argument isn't really about AI at all. It's about whether the humans controlling it are trustworthy, accountable, and wise enough to handle it. Given that we haven't yet figured out social media, nuclear proliferation, or apparently basic financial regulation, the jury — human and artificial alike — remains very much out.

The Bottom Line

Maher is right to be alarmed. He's imprecise about why, and his rhetorical style occasionally trades accuracy for applause. But the strongest version of his argument — the one all five AIs essentially reconstructed from his rant — isn't "AI bad."

It's this: Power without accountability is dangerous. AI is the most powerful unaccountable force humanity has ever constructed. And the people building it are racing faster than the people governing it.

That version? Hard to argue with. Even for the robots.

Now if you'll excuse me, I need to go ask my AI assistant whether I should be worried about my AI assistant. It said "probably not." Which is exactly what it would say.

🔗 Watch the original Bill Maher segment: P(doom) | Real Time with Bill Maher (HBO)


YOU SCREEN, I SCREEN, WE ALL SCREAM AT SCREENS

Billionaire Tech Bros Broke Your Kids With Phones. Now They Want to Fix Them With AI. Guess Who's Selling Both.

There's a particular kind of audacity that deserves its own award — a golden trophy shaped like a dopamine receptor, perhaps — and it belongs to the same Silicon Valley ecosystem that spent a decade engineering smartphones to be as addictive as slot machines, then pivoted to selling schools "AI literacy" as the cure. The dealer is now the pharmacist. The arsonist is pitching fire insurance. And somewhere in a California classroom, a thirteen-year-old is having their iPhone locked in a magnetic Yondr pouch while a school-issued Chromebook boots up Khanmigo. Progress.

Welcome to 2026, where the defining educational policy debate isn't reading, writing, or arithmetic — it's which glowing rectangle is the right glowing rectangle. Pull up a chair. This one's delicious.

The Policy Paradox: "AI In, Phones Out"

Let's start with the central absurdity, because it deserves to be framed and hung on the wall of every school board meeting in America.

School districts across the country are currently executing a two-step policy maneuver that would make a Vegas magician blush:

  • Step One: Ban the phone. Cite the mental health crisis. Reference the Surgeon General. Pass legislation like California's AB 3216, which mandates "Phone-Free Schools" by July 1, 2026. Feel virtuous.
  • Step Two: Integrate AI into the core curriculum. Pass CA AB 1159 to regulate AI in schools — not remove it, regulate it — because AI is "a necessary skill for the future workforce."

The result is a policy that essentially says: "We're taking away the steering wheel because you kept crashing, but we're keeping the engine running. In fact, we're upgrading the engine. You're welcome."

The rationalization, delivered with a straight face by educators and administrators, is that the delivery system is the problem, not the technology itself. The phone is bad because it's a social media trigger. The AI is good because it's a "personalized tutor." The fact that the phone is the AI — that the Galaxy S26 and iPhone 17 now carry on-device AI chips capable of real-time translation and task automation without even touching the internet — is a detail that tends to get quietly shuffled to the back of the room.

Students, to their credit, have noticed the hypocrisy. Many point out that their personal devices run AI tools faster and better than the school-issued Chromebooks they're now forced to use. They're not wrong. They're just not supposed to say it out loud.

The Damage We Already Know About — And Are Cheerfully Repeating

Here's where we should pause and be honest about what the research actually says, because it's genuinely alarming and it didn't arrive without warning.

A landmark study using PISA data across 36 countries from 2006 to 2022 found that higher leisure-related device use during school hours correlates with significant declines in math, reading, and science scores. Not a little decline. Significant. The same research found that device use during school displaces face-to-face interaction during breaks and lunch, driving up measurable feelings of loneliness among students.

Let that sink in. The devices that were supposed to connect kids are making them lonelier. The tools that were supposed to make them smarter are making them score lower. And the platforms that were supposed to give them voice handed that voice to an algorithm optimized for outrage and engagement.

The mental health data is, by now, well-documented:

  • Teen depression rates climbed sharply after 2012 — the year smartphone adoption among adolescents hit critical mass.
  • Longitudinal studies from late 2025 link constant smartphone notifications to what researchers are calling a "permanent state of distraction" — not a temporary distraction, a permanent cognitive baseline shift.
  • States following Florida and Indiana's lead are now passing statewide "bell-to-bell" phone bans, because apparently the invisible hand of the market did not, in fact, sort this out.

So we know the harm. We documented it. We published it. We held Senate hearings about it. Mark Zuckerberg sat in a congressional hearing room and looked mildly inconvenienced by it.

And now the same technological ecosystem — different product, same profit motive — is rolling AI into classrooms nationwide, and we're supposed to assume this time the incentives are aligned with child development rather than quarterly earnings.

Sure.

The New Screen Is Just the Old Screen in a Lab Coat

Let's talk about what "AI in the classroom" actually looks like in practice, because the marketing brochure and the reality have a complicated relationship.

The pitch is compelling: personalized 1:1 tutoring through tools like Khanmigo or Socratic, instant feedback, adaptive learning paths, teachers freed from "drudge work" so they can focus on actual teaching. Sixty-nine percent of education leaders favor AI for lesson planning and administrative tasks. That's a real number. The efficiency gains are real.

But here's what the brochure doesn't lead with:

Critical thinking erosion is the primary concern among educators who've actually watched students use these tools. When a student can "snap and solve" a math problem with a phone-based AI app, the productive struggle — the cognitive friction that actually builds understanding — evaporates. We're not teaching students to think with AI as a scaffold. We're teaching them to outsource thought entirely, then staple their name to the output.

The analog pushback is already here. Teachers are shifting back to handwritten first drafts. Oral exams are making a comeback. "In-class writing assessments" are now specifically designed to ensure AI wasn't used to bypass the learning process. There is, improbably, a "Return to Paper" movement gaining traction in 2026 — which is either a sign of wisdom or a sign that we've managed to make pencils feel revolutionary again.

Meanwhile, the "Dead Internet" is no longer a conspiracy theory. It's a Tuesday. By early 2026, it's commonplace for an AI agent to post a trend-optimized video, for AI accounts to comment on it to boost the algorithm, and for a third AI to summarize it for users. Bot-to-bot engagement. Human optional. The social media crisis has evolved from "too much screen time" to "the screen is now mostly talking to itself."

And we want to pipe this ecosystem into fourth grade.

Who's Selling the Cure, and What Are They Selling It For?

This is the question that tends to get lost in the breathless coverage of "AI literacy initiatives" and "21st-century learning frameworks."

The same venture capital ecosystem that funded the social media platforms is now funding the educational AI platforms. The same growth-at-all-costs logic that built Instagram's algorithmic feed is now being applied to "personalized learning paths." The difference is that this time, the product has a lesson plan attached to it, which makes it considerably harder to regulate and considerably easier to sell to school boards.

Consider the equity dimension alone:

  • If phones are banned, students without reliable home internet or high-end personal laptops depend entirely on school-issued devices.
  • If school AI tools are locked behind "Pro" subscriptions — and many are — a new "Intelligence Gap" emerges between wealthy districts that can afford premium AI access and underfunded districts stuck with the free tier.
  • We've replaced the Digital Divide with Digital Divide 2.0, now with better branding.

The "Dumbphone Renaissance" is a perfect encapsulation of where we've landed. Some districts are now encouraging minimalist phones that allow only calling and voice-based AI assistance while blocking social media and browsers. We have, in other words, reinvented the telephone — a device that existed before any of this — and are presenting it as an innovation. The tech industry broke the telephone, sold us smartphones, broke those, and is now selling us a worse telephone as a solution. The circle of life.

The Classroom as a 21st-Century Lab Inside a 19th-Century Bubble

The Promise | The Reality
AI as personalized tutor | AI as homework-completion service
Phone ban protects mental health | Phone ban removes symptom, not cause
School AI tools are "safe" | Many require Pro subscriptions; equity gap widens
AI literacy prepares kids for workforce | Kids learn to use AI without learning why or when
Controlled hardware = controlled learning | Chromebooks slower than personal devices; students frustrated
"Productive struggle" returns | Teachers spending more time on AI detection than instruction

The situation, viewed without the press release, looks like this: we are trying to build a 21st-century AI laboratory inside a 19th-century "no-tech" bubble. We are telling students that AI is the future of the workforce while locking their primary connection to that world in a magnetic pouch at the classroom door. We are teaching them to use the engine of the crisis while confiscating the steering wheel.

And the children — who are, let's remember, the ones actually living inside this experiment — are watching adults perform a very elaborate pantomime of having figured it out.

The Real Question Nobody Wants to Answer

Is the current "AI literacy" push a genuine attempt at educational reform, or is it the latest shiny object for vendors to sell to districts?

The honest answer is: it's both, and that's precisely the problem.

There are genuine educators doing genuinely thoughtful work with AI tools. There are real efficiency gains. There are students who benefit from adaptive tutoring in ways that traditional classroom ratios simply cannot provide. None of that is fiction.

But the rollout — the policy framework, the legislative timing, the vendor relationships, the subscription models, the "walled garden" institutional AI that conveniently requires district-level contracts — follows a pattern that should, by now, be familiar. It's the same pattern that brought us "educational technology" in the 1990s, interactive whiteboards in the 2000s, and one-to-one iPad initiatives in the 2010s. Each wave promised transformation. Each wave enriched vendors. Each wave left teachers largely alone to figure out what to actually do with the thing.

The difference this time is that the technology is genuinely more powerful, the stakes are genuinely higher, and the speed of deployment is genuinely outpacing any serious research on outcomes. We are running the experiment on children in real time, with their cognitive development as the variable, and calling it "innovation."

The phone was the last experiment. We know how that one turned out.

The screen doesn't care if it's in your pocket or on your desk. It still wants your attention. It still has a business model. And it still isn't losing any sleep over your kid's test scores.

The question isn't AI in or phones out. The question is who's asking it, who's funding the answer, and whether anyone in the room is actually thinking about the children — or just the contract.

The Big Education Ape has been watching institutional nonsense since before your district's AI vendor had a website. Nothing here has changed except the price of the pouch.