HUMANS OVER HARDWARE: THE "BIG EDUCATION APE" MANIFESTO FOR AI IN THE CLASSROOM
Or: How I Stopped Worrying About the Algorithm and Learned to Love the Teacher
Here's the uncomfortable truth that no ed-tech vendor's pitch deck will ever include: the most sophisticated learning technology ever invented is a caring adult in a room full of curious children. Everything else — the apps, the dashboards, the "adaptive learning platforms" with their cheerful loading screens — is just furniture. Expensive, data-hungry, venture-capital-upholstered furniture. So when Larry Ferlazzo, one of the sharpest educator-bloggers still standing in the digital rubble, posed two deceptively simple questions about AI in the classroom, I did what any self-respecting Big Education Ape would do: I asked the robots what they thought of themselves. The results were illuminating, occasionally hilarious, and — in the best possible way — deeply human.
The Two Questions That Started a Revolution (or at Least a Very Long Chat Thread)
Larry Ferlazzo's framework is elegant in its stubbornness. It demands a high bar before any AI tool gets a hall pass into your classroom:
For Students: Does this particular use of AI provide an obviously superior learning benefit not available through any other means, safeguard privacy, and avoid being cost prohibitive?
For Teachers: Does this particular use of AI provide a superior method of planning, data analysis, or resource development — not available through any other means using the same time or energy — while safeguarding privacy and avoiding prohibitive cost?
Notice what these questions don't ask. They don't ask: "Is this shiny?" They don't ask: "Did a billionaire's foundation fund a study saying it works?" They don't ask: "Will the superintendent be impressed at the board meeting?"
They ask: Does it actually serve the child? Does it protect them? Can we afford it without selling a kidney?
That's not a low bar. That's the correct bar. And the fact that most current AI classroom deployments would fail these questions on the first clause alone tells you everything you need to know about the state of ed-tech in 2026.
I Asked the AIs. Here's What the Robots Said About Themselves.
In a move of either brilliant meta-commentary or sheer chaotic energy, I polled the leading AI models — Gemini, ChatGPT, Copilot, and Grok — on Ferlazzo's questions. The irony of asking artificial intelligence whether artificial intelligence belongs in classrooms was not lost on anyone, least of all the AIs, who responded with the kind of earnest self-critique usually reserved for therapy sessions.
Gemini: The Litmus Test Enthusiast
Google's Gemini called the framework a "remarkably clean litmus test" and praised the phrase "not available through any other means" as the key innovation — a demand for genuine novelty rather than just speed. Gemini correctly identified the Triple Threat embedded in Ferlazzo's questions: linking pedagogy, privacy, and cost into a single non-negotiable "yes." It also flagged three missing dimensions worth noting:
- The Hallucination Problem — Can the teacher actually verify the AI's output without negating the time saved?
- The Hidden Environmental Cost — AI data centers consume staggering amounts of water and energy, and rely on underpaid "ghost workers" for data labeling. The invoice doesn't capture the full bill.
- Long-Term Skill Atrophy — A tool that provides a "superior" method today might quietly hollow out a student's capacity to think independently tomorrow.
Gemini's most pointed observation? The irony of Ferlazzo having to turn off blog comments because of AI spambots is the perfect illustration of why the first question matters. If the means generates more systemic garbage than superior learning, the answer is a firm no.
ChatGPT: The Thorough Committee Member
ChatGPT responded like a very diligent department chair who had clearly read all the memos. It praised the framework's clarity and added a checklist of additional considerations: equity and accessibility, long-term critical thinking impacts, teacher training, ethical bias audits, feedback mechanisms, and community collaboration. All valid. All important. All slightly reminiscent of a district policy document that gets filed and never opened again.
The core insight, buried in the bullet points: "Education should always prioritize student outcomes." Full stop. Everything else is commentary.
Copilot: The Policy Architect
Microsoft's Copilot went full governance mode, producing a refined, multi-clause decision rubric that would make a school board attorney weep with joy. It identified five critical gaps in the original questions: equity and access, transparency and explainability, curriculum integrity, teacher capacity and oversight, and data governance beyond basic privacy.
Copilot's sharpest addition: "Can the teacher clearly explain how the AI works, what it does, and what it does not do?" If the answer is no — if the tool is a black box that even the teacher can't describe to a parent — that's not a teaching aid. That's a liability.
Grok: The Honest Pessimist
Grok, to its credit, was the most candid about the fundamental tension: "AI in education isn't just a tool question; it's entangled with pedagogy, equity, human relationships, long-term skill development, and institutional realities."
Grok's most important warning: "The biggest risk isn't saying 'no' too often; it's saying 'yes' too casually and eroding the irreplaceable human core of teaching and learning."
That sentence should be laminated and posted in every ed-tech vendor's conference room.
The "Humans in the Driver's Seat" Manifesto: Four Pillars
What emerged from this long, winding, occasionally absurd conversation with the chatbots wasn't just a refined rubric. It was something closer to a declaration of educational independence — a manifesto for keeping humans in the driver's seat while the algorithm rides in the back, buckled up, and definitely not touching the radio.
Pillar 1: The People Power Moment 🗽
On April 27, 2026 — literally yesterday as this is written — NYC Schools Chancellor Kamar Samuels pulled the plug on a proposed AI-focused high school, hours before massive parent protests were scheduled to descend on Tweed Courthouse. The plan would have reorganized four beloved Upper West Side schools and opened a "Next Generation Technology High School" that parents described, with some accuracy, as a "supervised device-charging station with a diploma program."
The lesson is ancient and simple: "Move fast and break things" is a catastrophic philosophy when the things you're breaking are children.
Community pushback is not an obstacle to innovation. It is the innovation — the democratic immune system doing exactly what it was designed to do. When parents show up in numbers large enough to cancel a chancellor's press release, that's not resistance to progress. That's progress.
Pillar 2: Federal Guardrails vs. Local Control ⚖️
The White House's 2026 National AI Policy Framework gestures toward "unified national standards" — which sounds reasonable until you realize that "unified national standards" has historically meant "one set of rules that happens to benefit the companies that lobbied for them."
The counter-proposal, and the one worth fighting for, is a "Public Option" for AI: federally funded, locally governed, transparently accountable to parents and communities rather than shareholders and quarterly earnings calls.
Think public libraries. Think public broadcasting. Think: what if the infrastructure of knowledge wasn't a subscription service?
The aiEDU framework points toward this vision — equitable AI literacy for every student, regardless of zip code, without the data-harvesting business model attached. That's not anti-technology. That's pro-democracy.
"Innovation is just 'Data Mining' with a better PR firm if there isn't a teacher in the room."
Pillar 3: The Rubric Over the Bubble Sheet 📝
The standardized testing industrial complex gave us a generation of students who were exceptionally good at eliminating wrong answers on multiple-choice tests and somewhat less practiced at, say, forming an original thought. The current AI gold rush threatens to give us a generation of students who are exceptionally good at prompting original thoughts and somewhat less practiced at having them.
The antidote isn't banning AI. It's teacher-led rubrics that explicitly watch for:
- Emotional engagement and authentic voice
- Evidence of productive struggle (the kind that builds actual competence)
- Over-dependence signals — when the student can't begin without the prompt
Illinois SB 3735 is already moving toward codifying the right to a human-graded review for high-stakes assessments. That's not Luddism. That's pedagogy.
If a student can't think without the prompt, the tool is a crutch, not a ladder. And we are not in the business of manufacturing dependent learners for the convenience of a software subscription.
Pillar 4: Protecting the "Right to Be a Child" 🧒
California's AB 1159 is the legislative blueprint that every state should be studying. Its core innovation: legally prohibiting the use of student data to train corporate AI models unless it strictly benefits the educational institution. No more feeding children's essays, test scores, behavioral flags, and learning disabilities into the training corpus of a for-profit AI that will then sell "personalized learning" back to the district at a markup.
The NYC Department of Education's current privacy vetting process — the ERMA system — has already failed to prevent breaches exposing the personal data of over one million students. The PowerSchool/Naviance class action settled for $17.25 million. These are not hypothetical risks. They are receipts.
The Parent Coalition for Student Privacy has been fighting this battle in the trenches while the ed-tech press was busy writing breathless profiles of "visionary" CEOs. They deserve a seat at every table where these decisions are being made — not a comment form with a May 8th deadline that nobody publicized.
The Guerrilla Toolbelt: Resources for the Resistance
The "Billionaire Gospel" of ed-tech privatization has money, lobbyists, and a very good PR operation. The public education community has something better: teachers who actually know what learning looks like, parents who show up, and a growing infrastructure of accountability tools.
| Purpose | Resource | Why It Matters |
|---|---|---|
| 🔍 Policy Language | TeachAI Guidance Toolkit | Best source for "Human-in-the-loop" policy frameworks |
| 🌐 Equity & Access | aiEDU – AI Education Project | The "Public Option" vision in action |
| 🔒 Privacy Protection | Parent Coalition for Student Privacy | Frontline defense against data mining |
| 📋 Legislative Tracking | FutureEd 2026 State AI Tracker | 53 bills across 25 states — know your battlefield |
| ✊ Organizing | MayDayStrong.org | No Work. No School. No Shopping. May 1, 2026. |
| 📞 Federal Pressure | Senate: (202) 224-3121 | Demand the DISCLOSE Act. Call. Today. |
The Bottom Line: Humans Are Not a Bug to Be Patched
Here is what every AI model — Gemini, ChatGPT, Copilot, Grok, and yes, even the one writing this sentence — ultimately agreed on when pressed: the human relationship at the center of education is not inefficiency to be optimized away. It is the entire point.
A teacher who notices that a quiet kid in the third row hasn't laughed in two weeks is performing an act of intelligence that no algorithm has ever replicated. A classroom where a student argues with a peer, gets it wrong, feels the sting of being wrong, and then figures out why — that is a learning environment that no adaptive platform has ever successfully bottled.
Larry Ferlazzo's two questions are, at their core, a demand for intentionality in a moment defined by its absence. They force the burden of proof back onto the software, where it belongs. They refuse the false choice between "embrace AI completely" and "ban it entirely." They say, simply: prove it helps, prove it's safe, prove it doesn't cost us more than we can afford — and if you can't clear all three bars, sit down.
That's not technophobia. That's teaching.
As we head into May Day 2026 — No Work, No School, No Shopping — remember that the forces pushing AI-as-replacement-for-teachers are the same forces pushing vouchers-as-replacement-for-public-schools. The playbook is identical. The beneficiaries are identical. The antidote is also identical: organized, informed, loud, and present.
The Big Education Ape has been right all along. Stay fast. Stay cheap. Never let the algorithms outwork the humans.
Workers Over Billionaires. Humans Over Hardware. Teachers in the Driver's Seat.
📞 Call Congress: (202) 224-3121 | 🔗 MayDayStrong.org | 🔗 NEA.org | 🔗 Larry Ferlazzo's Blog
✊ May Day 2026 & Organizing
May Day Strong — Official Pledge & Action Hub: Sign the pledge, find local events, and access organizing resources for May 1st, 2026. https://maydaystrong.org
