Thursday, May 23, 2019

Education Research Report: Flaws in High-Profile “Gold Standard” Study Used to Market Teach for America

Flaws in High-Profile “Gold Standard” Study Used to Market Teach for America


Andrew Brantlinger is a former public school math teacher who is now an associate professor in the University of Maryland’s Department of Teaching and Learning, Policy and Leadership. Earlier in his academic career, he worked with data concerning the New York City Teaching Fellows alternative certification program. So Brantlinger was intrigued when, six years ago, the federal Institute of Education Sciences (IES) published a report titled The Effectiveness of Secondary Math Teachers from Teach For America and the Teaching Fellows Programs, which found that Teach For America corps members significantly outperformed other teachers at their high-poverty schools. This high-profile, IES-funded study, authored by researchers at Mathematica, a nonpartisan research organization, is prominently featured in TFA promotional material.
TFA selects high-achieving college graduates and places them in these high-poverty schools after several weeks of preparation. Although the TFA corps members start off uncertified, the placement is followed by ongoing, on-the-job support, and many do eventually gain standard certification.
Brantlinger was eventually able to obtain the data used in the IES/Mathematica study and, along with co-author and University of Maryland doctoral candidate Matthew Griffin, he was able to perform a secondary analysis of the study data. 
In a Review Worth Sharing published today by the National Education Policy Center, Brantlinger and Griffin explain that the original analysis was flawed in three primary ways:
  • First-year Teach for America teachers were under-represented in the study (while second-year corps members were over-represented). This matters because teachers typically make considerable professional growth in their initial years on the job.
  • Poorly qualified teachers were over-represented in the comparison group. For example, nationwide, 80 percent of 8th-grade math teachers at high-poverty schools are fully certified. Yet just 40 percent of the comparison group in the study were fully certified, while 58 percent of the TFA teachers were. Keep in mind that alternative-certification programs, by definition, generally place teachers in schools before they are certified—making the situation studied here difficult to generalize. This may limit the study’s applicability to other schools and also bias the results in TFA’s favor.
  • TFA teachers were likely trained to teach to the exams used as study outcomes, since such an approach is part of the program. The study did not account for this likely alignment between the outcome measure and the TFA focus.
Despite assertions to the contrary on TFA’s website and in its promotional materials, and by the authors of the Mathematica report, the effect size identified by the study was small—certainly small enough to be explained by these three flaws in data and methods.
The Mathematica study was designed as an experiment, with students randomly assigned to matched pairs of TFA and comparison teachers. Randomization studies are sometimes described as the “gold standard” for research because random assignment reduces the odds that the treatment and control groups differ in ways that could bias the results. However, as Brantlinger and Griffin’s analysis highlights, the on-the-ground reality of experimental studies does not CONTINUE READING: Education Research Report: Flaws in High-Profile “Gold Standard” Study Used to Market Teach for America