Monday, February 25, 2013

UPDATE: Rating Ed Schools by Graduates’ Value-Added + What does the New York City Charter School Study from CREDO really tell us? | School Finance 101

Revisiting the Foolish Endeavor of Rating Ed Schools by Graduates’ Value-Added

Knowing that I’ve been writing a fair amount about various methods for attributing student achievement to their teachers, several colleagues forwarded me the recently released standards of the Council for the Accreditation of Educator Preparation, or CAEP. Specifically, they pointed me toward Standard 4.1, Impact on Student Learning:
4.1. The provider documents, using value-added measures where available, other state-supported P-12 impact measures, and any other measures constructed by the provider, that program completers contribute to an expected level of P-12 student growth.
http://caepnet.org/commission/standards/standard4/
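
To make concrete what Standard 4.1 would require in practice, here is a minimal, purely illustrative sketch with synthetic data, made-up program names, and a bare-bones residual growth model (not CAEP’s or any state’s actual specification) of the chain it implies: attribute student growth to teachers, then roll the teacher estimates up to the preparation programs that trained them.

```python
# Purely illustrative sketch, all data synthetic: a toy "value-added" chain
# that attributes student growth to teachers and then aggregates teacher
# effects up to each teacher's preparation program.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_students = 5000

teachers = pd.DataFrame({
    "teacher_id": range(200),
    "prep_program": rng.choice(["Program A", "Program B", "Program C"], size=200),
})

students = pd.DataFrame({
    "teacher_id": rng.integers(0, 200, size=n_students),
    "prior_score": rng.normal(0, 1, size=n_students),
})
# Current scores here depend only on prior scores plus noise, so any
# "teacher effect" the model finds below is pure chance.
students["current_score"] = (0.7 * students["prior_score"]
                             + rng.normal(0, 1, size=n_students))

# Step 1: value-added residual = actual score minus score predicted from prior score.
beta = np.polyfit(students["prior_score"], students["current_score"], deg=1)
students["residual"] = students["current_score"] - np.polyval(beta, students["prior_score"])

# Step 2: average residual per teacher = crude teacher value-added estimate.
teacher_va = students.groupby("teacher_id")["residual"].mean().rename("teacher_va")

# Step 3: average teacher value-added per preparation program, i.e., the kind
# of program rating Standard 4.1 would hang on an ed school.
program_va = (teachers.join(teacher_va, on="teacher_id")
                      .groupby("prep_program")["teacher_va"]
                      .mean())
print(program_va)
```

Even in this toy version, the program-level numbers are nothing more than averages of noisy teacher-level averages, several steps removed from anything the ed school actually did.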
Now, it’s one thing when relatively under-informed pundits, think tankers, politicians and their policy advisers pitch a misguided use of statistical information for immediate policy adoption. It’s yet another when professional 


What does the New York City Charter School Study from CREDO really tell us?

With the usual fanfare, we were all blessed last week with yet another study seeking to inform us that charteryness in and of itself is preferable to traditional public schooling – especially in NYC! In yet another template-based pissing-match (charter vs. district) study, the Stanford Center for Research on Educational Outcomes provided us with aggregate comparisons of the estimated academic growth of two groups of students – one that attended NYC charter schools and one that attended NYC district schools. The students were “matched” on the basis of a relatively crude set of available data.
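
For readers who want to see the basic shape of such a design, here is a minimal sketch with entirely synthetic data and invented column names (it is not CREDO’s actual matching procedure, just the general idea): bin students into cells defined by a handful of coarse observables, then average the within-cell charter versus district growth gaps.

```python
# Illustrative sketch only, with synthetic data: a crude matched comparison
# in the spirit of (but not identical to) the CREDO-style design. Students
# are matched on a few coarse observables, then average growth is compared
# across the charter and district groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 20000

df = pd.DataFrame({
    "sector": rng.choice(["charter", "district"], size=n, p=[0.15, 0.85]),
    "grade": rng.integers(4, 9, size=n),
    "frl": rng.integers(0, 2, size=n),    # free/reduced-price lunch flag
    "ell": rng.integers(0, 2, size=n),    # English language learner flag
    "prior_score": rng.normal(0, 1, size=n),
})
df["growth"] = rng.normal(0, 1, size=n)   # synthetic growth measure

# The "crude" part: collapse prior achievement into broad bands and treat
# students as interchangeable within (grade, FRL, ELL, prior-band) cells.
df["prior_band"] = pd.cut(df["prior_score"],
                          bins=[-np.inf, -1, 0, 1, np.inf],
                          labels=["low", "mid-low", "mid-high", "high"])

cell_cols = ["grade", "frl", "ell", "prior_band"]
cells = (df.groupby(cell_cols + ["sector"], observed=True)["growth"]
           .mean()
           .unstack("sector"))

# Keep only cells where both sectors are represented, then average the
# within-cell charter-minus-district gaps: the headline-style number.
matched = cells.dropna()
print("Average matched growth gap (charter - district):",
      (matched["charter"] - matched["district"]).mean())
```

Everything about which variables go into the cells, how finely prior scores are banded, and how the cells are weighted is a researcher choice, which is exactly where the precision of the underlying data starts to matter.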
As I’ve explained previously in discussing the CREDO New Jersey report, the CREDO authors essentially make do with the available data. It’s what they’ve got. They are trying to do the most reasonable quick-and-dirty comparison, and the data available aren’t always as precise as we might wish. But this is also not to say that the supposedly Gold Standard “lottery-based” studies are all that. The point is that doing policy research in