Latest News and Comment from Education

Showing posts with label AI ARTIFICIAL INTELLIGENCE IN EDUCATION.

Wednesday, April 7, 2021

CURMUDGUCATION: Can You Fool An AI Emotion Reader

As we have seen numerous times, there are software packages out there that claim the ability to read our emotions. Folks are lined up around the block to use this stuff, and of course one of the supposed applications is reading student emotions, the better to "personalize" a lesson.

Does this sound as if the ed tech world is overpromising stuff that it can't actually deliver? Well, now you have a chance to find out. Some scientists have created a website where you can practice having your own face read by software.

The team involved says the point is to raise awareness. People are still stuck on all the huge problems with facial recognition, but meanwhile, we're being surrounded by software that doesn't just recognize your face (maybe) but also reads it (kind of). Here's the project lead, Dr. Alexa Hagerty, from the awesomely-named University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk:

But Hagerty said many people were not aware how common emotion recognition systems were, noting they were employed in situations ranging from job hiring, to customer insight work, airport CONTINUE READING: CURMUDGUCATION: Can You Fool An AI Emotion Reader

Monday, March 1, 2021

Using Big Data, Artificial Intelligence and Algorithms to Guide Education Choice - Network For Public Education

The writing team at Accountabaloney has kept a watchful eye on Florida’s ongoing shenanigans, and their newest post is alarming. Florida’s legislature is considering SB48, a bill that would turn all of Florida’s voucher programs into Education Savings Accounts. ESAs are like super-vouchers, a grant of taxpayer money from the state that parents can spend on whatever education expenses they choose–not just private school tuition, but anything education-related.

The money is handled by a non-profit organization. In Florida’s case that’s Step Up For Students, and one of the mysteries of this kind of transition is how such a group would manage thousands of families choosing from thousands of education-flavored vendors. The answer, as reported on the blog, is scary:

In a recent podcast, Doug Tuthill outlined how Step Up for Students has created an e-commerce platform that will collect data from its voucher recipients and use Artificial Intelligence and algorithms to guide them towards the “best educational options” for their children. Apparently, those “best educational options” will never be district managed public schools.

Algorithm-selected education. Massive data mining. All handled by non-transparent software. Turns out school choice is actually algorithm’s choice.
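
It's worth seeing how little machinery that takes. Here is a minimal sketch, in Python, of how a "guided choice" ranker can rule out a whole category of options by construction; every name, weight, and data structure below is invented for illustration, since Step Up For Students' actual platform code is not public. The point is structural: if a category is filtered before scoring, no amount of family data will ever surface it.

```python
# Hypothetical sketch of a "guided choice" ranker that excludes an option
# class by construction. All names and weights are invented for illustration;
# this is not Step Up For Students' actual platform code.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    category: str       # e.g., "private_school", "microschool", "district_public"
    match_score: float  # similarity to the family's collected data profile

def rank_options(options, category_boosts, excluded=frozenset({"district_public"})):
    """Rank vendors for a family, silently dropping excluded categories."""
    eligible = [o for o in options if o.category not in excluded]
    return sorted(
        eligible,
        key=lambda o: o.match_score * category_boosts.get(o.category, 1.0),
        reverse=True,
    )

options = [
    Option("Local District High", "district_public", match_score=0.95),
    Option("Acme Microschool", "microschool", match_score=0.60),
    Option("St. Example Academy", "private_school", match_score=0.55),
]

for o in rank_options(options, {"private_school": 1.2}):
    print(o.name)  # the best-matched option never prints: it was filtered before scoring
```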

Follow this link to the full story.

Monday, February 22, 2021

CURMUDGUCATION: Big Brother Knows What's In Your Heart

Well, this is creepy.

Before the pandemic, Ka Tim Chu, teacher and vice principal of Hong Kong's True Light College, looked at his students' faces to gauge how they were responding to classwork. Now, with most of his lessons online, technology is helping Chu to read the room. An AI-powered learning platform monitors his students' emotions as they study at home.

The software is called 4 Little Trees, and the CNN article only scratches the surface of how creepy it is. So let's work our way down through the levels of creepiness.

4 Little Trees is a product of Find Solution AI, a company founded in 2016, and it appears to be the heart and soul of the company--though their "about us" mission statement is "FSAI consistent vision is to solve the difficulties that the society has been encountered with technology." They might want to look at their placement of "with technology" in that sentence. Anyway, on to 4 Little Trees.

It uses the computer webcam to track the movement of muscles on the student's face to "assess emotions." With magical AI, which means it's a good time for everyone to remember that AI is some version of a pattern-seeking algorithm. AI doesn't grok emotions any more than it actually thinks--in this case it maps the points it spots on the student's face and compares them to a library of samples. And as with all AI libraries of samples, this one has issues--mainly racial ones. 4 Little Trees has been "trained" with a library of Chinese faces. The company's founder, Viola Lam, is aware "that more ethnically-mixed communities could be a bigger challenge for the software."
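
To make the "library of samples" point concrete, here is a toy sketch of that kind of pattern matching: reduce a face to a vector of landmark numbers, then label it with whatever labeled sample it sits closest to. This is purely illustrative (4 Little Trees' actual model is proprietary), but the structure--and the way a narrow sample library becomes a built-in bias--is the same.

```python
# Illustrative nearest-neighbor "emotion reading": compare a face's landmark
# vector to a labeled library of samples. A toy stand-in for the pattern
# matching described above, not 4 Little Trees' actual (proprietary) model.

import math

def distance(a, b):
    """Euclidean distance between two flattened landmark vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def read_emotion(landmarks, sample_library):
    """Return the label of the nearest sample--no understanding involved."""
    return min(sample_library, key=lambda item: distance(landmarks, item[1]))[0]

# Tiny fake library of (label, landmark vector) pairs. A real system holds
# thousands of faces--and inherits every bias in how those faces were collected.
library = [
    ("happiness", [0.9, 0.8, 0.1]),
    ("anger",     [0.1, 0.2, 0.9]),
    ("neutral",   [0.5, 0.5, 0.5]),
]

print(read_emotion([0.85, 0.75, 0.2], library))  # -> happiness
```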

But aren't emotions complicated? The sample image shows the software gets to choose from varying amounts of anger, disgust, fear, happiness, sadness, surprise and neutral. The company calls these CONTINUE READING: CURMUDGUCATION: Big Brother Knows What's In Your Heart

Friday, January 29, 2021

CURMUDGUCATION: Selling Roboscoring: How's That Going, Anyway?

The quest continues--how best to market the notion of having student essays scored by software instead of actual humans. It's a big, bold dream, a dream of a world in which test manufacturers don't have to hire pesky, expensive meat widgets and the fuzzy unscientific world of writing can be reduced to hard numbers--numbers that we know are objective and true because, hey, they came from a computer. The problem, as I have noted many times elsewhere, is that after all these years, the software doesn't actually work.

But the dream doesn't die. Here's a paper from Mark D. Shermis (University of Houston--Clear Lake) and Susan Lottridge (AIR) presented at a National Council on Measurement in Education meeting in Toronto, courtesy of AIR Assessment (one more company that deals in robo-scoring). The paper is two years old, but it's worth a look because it shows the reasoning (or lack thereof) used by the folks who just can't let go of the robograding dream.

"Communicating to the Public About Machine Scoring: What Works, What Doesn't" is all about managing the PR when you implement roboscoring. Let's take a look.

Warming Up

First, let's lay out the objections that people raise, categorized by a 2003 paper as humanistic, defensive and construct.

The humanistic objection stipulates that writing is a unique human skill and cannot be evaluated CONTINUE READING: CURMUDGUCATION: Selling Roboscoring: How's That Going, Anyway?

Thursday, December 24, 2020

CURMUDGUCATION: AI, Language, and the Uncanny Valley

We experience vertigo in the uncanny valley because we’ve spent hundreds of thousands of years fine-tuning our nervous systems to read and respond to the subtlest cues in real faces. We perceive when someone’s eyes squint into a smile, or how their face flushes from the cheeks to the forehead, and we also — at least subconsciously — perceive the absence of these organic barometers. Simulations make us feel like we’re engaged with the nonliving, and that’s creepy.

That's an excerpt from Douglas Rushkoff's book Team Human, talking about how the uncanny valley is our best defense. The uncanny valley is that special place where computer simulations, particularly of humans, come close-but-not-quite-close-enough and therefore trigger an ick reaction (like the almost-humans in Polar Express or creepy Princess Leia in Rogue One).

The quest for AI runs right through the uncanny valley, although sometimes the ick factor is less about uneasiness and more about cars that don't drive themselves where you want them to. The gap between what AI promises and what it can deliver is at least as large as an uncanny valley, though companies like Google are now trying to build a fluffy PR bridge over it (hence Google's directive that researchers "strike a positive tone" in their write-ups).

Since summer, journalists have been gushing over GPT-3, the newest level of AI-powered language simulation (the New York Times has now gushed twice in six months). It was the late seventies when I heard a professor explain that the search for decent language-synthesizing software and artificial intelligence were inextricably linked, and that seems to still be true.

It's important to understand what AI, or to call it by its current true name, machine learning, CONTINUE READING: CURMUDGUCATION: AI, Language, and the Uncanny Valley

Friday, October 30, 2020

CURMUDGUCATION: Psychic AI and Plagiarism Detection

Artificial Intelligence is used to sell a lot of baloney. It would be bad enough if it were used only to teach badly and provide poor assessments of student work, but AI is also being hawked as a means of rooting out plagiarism. For an example of this phenomenon at its worst, let's check in on a little webcast from Mark Boothe at Canvas Learning Management System. He's talking to Shouvik Paul at Copyleaks, a plagiarism checking company and partner of Canvas. I'm going to watch this so you don't have to--and you shouldn't. But you should remember the names just in case somebody at your place of work suggests actually using these products.

We start with a quick intro emphasizing Copyleaks' awesomeness. And then Boothe hands it over to Paul, the Chief Revenue Officer at Copyleaks, because when you want to talk about a product, you definitely want to talk to the revenue people at the company. Incidentally, sales and marketing has been Paul's entire career--no computer or education background anywhere in sight. But this is going to be a sales pitch for thirty-some minutes. Great.

First, Paul offers general background on Copyleaks. An AI company, building "very cool" stuff. That includes a product that does grading of essays on standardized tests. It takes humans hours, but their AI can grade those papers "within seconds," within 1% of a human grader's accuracy. Spoiler alert: no, it can't. They have offices around the world.

So they were working on ed tech, and "as we all know" everyone from universities through K-12 is using some kind of plagiarism detection (oh my lord--does that mean there are first grade teachers out there running student paragraphs through Turnitin?). Paul says they found that some of the technology out there was outdated, meaning that when you're out there in education dealing with students, "it's such a cat and mouse game--they're always looking for new ways to beat the system." So we're going to adopt a cynical premise about those awful students as a starting point. Great.

"Let's face it. What's the first thing a student's going to do? They're going to youtube, and they're going to type in something like 'how to cheat plagiarism check' Right?" And he is showing us on CONTINUE READING: CURMUDGUCATION: Psychic AI and Plagiarism Detection

Thursday, October 1, 2020

Audrey Watters: Hack Education: Selling the Future of Ed-Tech (& Shaping Our Imaginations) | National Education Policy Center

I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke briefly tonight in Anna Smith's class on critical approaches to education technology (before a really excellent discussion with her students). I should note that I talked through my copy of The Kids' Whole Future Catalog rather than, as this transcript suggests, using slides. Sorry, that means you don't get to see all the pictures...
Thank you very much for inviting me here today. (And thank you for offering a class on critical perspectives on education and technology!)
In the last few classes I've visited, I've talked a lot about surveillance technologies and ed-tech. I think it's one of the most important and most horrifying trends in ed-tech — one that extends beyond test-proctoring software, even though, since the pandemic and the move online, test-proctoring software has been the focus of a lot of discussions. Even though test-proctoring companies like to sell themselves as providing an exciting, new, and necessary technology, this software has a long history that's deeply intertwined with pedagogical practices and beliefs about students' dishonesty. In these class talks, I've wanted to sound the alarm about what I consider to be an invasive and extractive and harmful technology but I've also wanted to discuss the beliefs and practices — and the history of those beliefs and practices — that might prompt someone to compel their students to use this technology in the first place. If nothing else, I've wanted to encourage students to ask better questions about the promises that technology companies make. Not just "can the tech fulfill these promises?", but "why would we want them to?"
In my work, I write a lot about the "ed-tech imaginary" — that is, the ways in which our beliefs in ed-tech's promises and capabilities tend to be governed as much by fantasy as by science or CONTINUE READING: Hack Education: Selling the Future of Ed-Tech (& Shaping Our Imaginations) | National Education Policy Center

Tuesday, September 29, 2020

CURMUDGUCATION: AI: Still Not Ready for Prime Time

You may recall that Betsy DeVos used to say, often, that education should be like hailing an Uber (by which she presumably didn't intend to say "available to only a small portion of the population at large"). You may also recall that the awesomeness of Artificial Intelligence gets brought up regularly, sometimes in conjunction with how great an AI computer would be at educating children.

[Image caption: Yes, this much salt]
Well, here comes reminder #4,756,339 that this kind of talk should be taken with an acre of salt. This time it's an article in The Information by Amir Efrati, and it starts out like this:

After five years and an investment of around $2.5 billion, Uber’s effort to build a self-driving car has produced this: a car that can’t drive more than half a mile without encountering a problem.

We're talking 2.5 billion-with-a-B dollars spent with nothing usable to show for it. Unfortunate for something that has been deemed "key to its path to profitability" for Uber. Meanwhile, corporations gotta corporate--a "self-driving" Uber killed a pedestrian in Tempe, Arizona back in 2018, and the court has just ruled that while Uber itself is off the hook, the "safety driver" will be charged with negligent homicide. She made the not-very-bright assumption that the car could do what its backers said it could do.

Meanwhile, Microsoft has partnered with OpenAI, the folks whose GPT-3 language emulator program is giving everyone except actual English speakers chills of excitement. Not CONTINUE READING: CURMUDGUCATION: AI: Still Not Ready for Prime Time

Tuesday, September 15, 2020

When Algorithms Give Real Students Imaginary Grades (Meredith Broussard) | Larry Cuban on School Reform and Classroom Practice

Meredith Broussard (@merbroussard) is a data journalism professor at New York University and the author of “Artificial Unintelligence: How Computers Misunderstand the World.” She is working on a book about race and technology. This op-ed piece appeared in the New York Times on September 9, 2020.
Isabel Castañeda’s first words were in Spanish. She spends every summer with relatives in Mexico. She speaks Spanish with her family at home. When her school, Westminster High in Colorado, closed for the pandemic in March, her Spanish literature class had just finished analyzing an entire novel in translation, Albert Camus’s “The Plague.” She got a 5 out of 5 on her Advanced Placement Spanish exam last year, following two straight years of A+ grades in Spanish class.
And yet, she failed her International Baccalaureate Spanish exam this year.
When she got her final results, Ms. Castañeda was shocked. “Everybody believed that I was going to score very high,” she told me. “Then, the scores came back and I didn’t even score a passing grade. I scored well below passing.”
How did this happen? An algorithm assigned a grade to Ms. Castañeda and 160,000 other students. The International Baccalaureate — a global program that awards a prestigious diploma to students in addition to the one they receive from their high schools — canceled its usual in-person final exams because of the pandemic. Instead, it used an algorithm to “predict” students’ grades, based on CONTINUE READING: When Algorithms Give Real Students Imaginary Grades (Meredith Broussard) | Larry Cuban on School Reform and Classroom Practice
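
For a sense of how an algorithm can "predict" a grade that contradicts a student's entire record, here is a hypothetical sketch in Python: blend each student's teacher-predicted grade with her school's historical average. The formula and weights are invented for illustration (the IB has not published its model in this form), but the failure mode is the one Broussard describes: a strong student at a historically weak school gets pulled down by data that isn't about her.

```python
def predict_grade(teacher_prediction, school_history, weight=0.6):
    """Shrink a teacher's prediction toward the school's past mean grade.
    Hypothetical formula for illustration, not the IB's published model."""
    school_mean = sum(school_history) / len(school_history)
    blended = weight * teacher_prediction + (1 - weight) * school_mean
    return round(min(7.0, max(1.0, blended)))  # IB subject grades run 1-7

# A top student (teacher-predicted 7) at a school that historically averages 3.5:
print(predict_grade(teacher_prediction=7, school_history=[3, 4, 3, 4]))  # -> 6
```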

Wednesday, September 9, 2020

Code Acts in Education: The Social Life of Artificial Intelligence in Education | National Education Policy Center

Artificial Intelligence (AI) has become the subject of both hype and horror in education. During the 2020 Covid-19 pandemic, AI in education (AIed) attracted serious investor interest, market speculation, and enthusiastic technofuturist predictions. At the same time, algorithms and statistical models were implicated in several major controversies over predictive grading based on historical performance data, raising serious questions about privileging data-driven assessment over teacher judgment.
In the new special issue AI in education: Critical perspectives and alternative futures published in Learning, Media and Technology, Rebecca Eynon and I pulled together a collection of cutting edge social scientific analyses of AIed. The purpose was to add alternative analytical perspectives to studies of AIed benefits, and to challenge commercial assertions that AIed will solve complex educational problems while accruing profitable advantage for companies and investors.  
Like AI in general, AIed is social and political. It has its own long history and a complex present ‘social life’, and it is being developed in the pursuit of future visions of education. AIed has emerged in its current form from decades of prior research and development, from technological innovation, from funding practices, and from policy preoccupations with using educational data for various forms of performance measurement and prediction. Far from being merely a future vision, AIed is already actively intervening in education systems — in schools, universities, policy spaces and home learning settings — with effects that are only now coming into view. 
Yet the growth in critical studies of AI in other sectors (such as labour automation, healthcare CONTINUE READING: Code Acts in Education: The Social Life of Artificial Intelligence in Education | National Education Policy Center