The Superintendent Factor

Posted on September 16, 2014

One of the more visible manifestations of what I have called “informal test-based accountability” — that is, how testing results play out in the media and public discourse — is the phenomenon of superintendents, particularly big city superintendents, making their reputations based on the results during their administrations.
In general, big city superintendents are expected to promise large increases in test scores, and their success or failure is judged in no small part on whether those promises are fulfilled. Several superintendents almost seem to have built entire careers on a few (misinterpreted) points in proficiency rates or NAEP scale scores. This phenomenon, in my view, is rather curious. For one thing, any district leader will tell you that many of their core duties, such as improving administrative efficiency, communicating with parents and the community, and strengthening the district’s finances, might have little or no impact on short-term testing gains. In addition, even those policies that do have such an impact often take many years to show up in aggregate results.
In short, judging superintendents based largely on the testing results during their tenures seems misguided. A recent report issued by the Brown Center at Brookings, written by Matt Chingos, Grover Whitehurst and Katharine Lindquist, adds a bit of empirical insight to this viewpoint.
The authors look at several outcomes, including superintendent tenure (usually quite short), and I would, as usual, encourage you to read the entire report. But one of the main analyses is what is called a variance decomposition. Variance decompositions, put simply, partition the variation in a given outcome (in this case, math test scores among fourth and fifth graders in North Carolina between 2001 and 2010) into portions that can be (cautiously) attributed to various factors. For example, one might examine how much variation is statistically “explained” by students, schools, districts and teachers.
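To make the mechanics concrete, here is a minimal sketch in Python of how a decomposition like this partitions score variation across nested levels. Everything in it — the effect sizes, the simulated data, and the simple means-based method — is made up for illustration; the report’s actual model is considerably more sophisticated.

```python
# Toy variance decomposition: simulate student scores nested in
# schools nested in districts, then estimate each level's share of
# the total variance. All numbers are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_districts, schools_per_district, students_per_school = 40, 10, 50

rows = []
for d in range(n_districts):
    district_effect = rng.normal(0, 1.0)      # assumed district-level SD
    for s in range(schools_per_district):
        school_effect = rng.normal(0, 2.0)    # assumed school-level SD
        for _ in range(students_per_school):
            # Student-level noise (incl. measurement error) is largest.
            rows.append((d, f"{d}-{s}",
                         district_effect + school_effect + rng.normal(0, 8.0)))

df = pd.DataFrame(rows, columns=["district", "school", "score"])

total = df["score"].var()
district_mean = df.groupby("district")["score"].transform("mean")
school_mean = df.groupby("school")["score"].transform("mean")

# Each level's share: variance of that level's means (net of the
# level above it) relative to the total variance.
shares = {
    "districts": district_mean.var(),
    "schools (within districts)": (school_mean - district_mean).var(),
    "students (incl. measurement error)": (df["score"] - school_mean).var(),
}
for level, v in shares.items():
    print(f"{level:36s} {100 * v / total:5.1f}% of variance")
```

In this toy setup the student level dominates by construction; the point is simply that comparing the variation in each level’s means against the total yields each level’s share, which is the logic behind the report’s estimates (whose richer model also includes a superintendent level, as discussed next).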
Chingos et al. include a “level” in their decomposition that is virtually never part of these exercises – the superintendent in office. A summary of the results is presented in the graph below, which is taken directly from the report.
As you can see, most of the variation in testing outcomes (52 percent) is found between students (this is unmeasured, unobserved variation, including measurement error). An additional 38.8 percent is “explained” by the