New York State Of Mind
Last week, the results of New York’s new Common Core-aligned assessments were national news. For months, officials throughout the state, including in New York City, had been preparing the public for the release of these data.
Their basic message was that the standards, and thus the tests based upon them, are more difficult, and that they represent an attempt to truly gauge whether students are prepared for college and the labor market. The inevitable consequence of raising standards, officials have been explaining, is that fewer students will be “proficient” than in previous years (which was, of course, the case). This does not mean that students are performing worse, only that they are being held to higher expectations, and that the skills and knowledge being assessed require a new, more expansive curriculum. Interpretation of the new results relative to those from previous years must therefore be extremely cautious, and educators, parents and the public should not jump to conclusions about what they mean.
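To make that mechanical point concrete, here is a minimal sketch using entirely hypothetical scale scores and cut scores (none of these numbers come from New York’s actual assessments): raising the bar for “proficient” lowers the rate even when no student’s performance has changed at all.

```python
# Hypothetical scale scores for one group of students (illustrative only).
scores = [270, 285, 300, 310, 320, 330, 345, 355, 370, 390]

def proficiency_rate(scores, cut):
    """Share of students at or above an assumed proficiency cut score."""
    return sum(s >= cut for s in scores) / len(scores)

# Under an assumed old cut score of 300, most students clear the bar;
# raising the cut to 340 halves the rate, even though not a single
# student's score has changed.
print(proficiency_rate(scores, cut=300))  # 0.8
print(proficiency_rate(scores, cut=340))  # 0.4
```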
For the most part, the main points of this public information campaign are correct. It would, however, be wonderful if similar caution were evident in the roll-out of testing results in past (and, more importantly, future) years.
The fact is that New York officials, city officials in particular, routinely interpret year-to-year changes in proficiency rates in an inappropriate manner. This includes failing to account for the facts that proficiency rates are a distorted means of expressing test scores; that rates and scores often move in different directions; that changes in the sample of students who take the test, along with other forms of imprecision, are often the primary cause of changes in rates and scores; and that increases in measured performance cannot be attributed to specific policies.
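The first two items on that list can also be illustrated with invented numbers. In the sketch below (hypothetical scores, hypothetical cut score), the average score rises from one year to the next while the proficiency rate falls, because collapsing results into a pass/fail statistic is sensitive to how many students sit just above or just below the cut.

```python
# Hypothetical illustration (not actual New York data): how a proficiency
# rate and an average score can move in opposite directions when results
# are reduced to a single cut score.
PROFICIENCY_CUT = 300  # assumed cut score for "proficient"

# Last year: several students clustered just below or just above the cut.
last_year = [280, 290, 295, 298, 299, 305, 310, 350, 360, 370]
# This year: the lowest scorers improve, but two students who were barely
# above the cut slip slightly below it.
this_year = [292, 296, 299, 299, 299, 299, 299, 355, 365, 375]

def summarize(scores):
    """Return (mean score, share of students at or above the cut)."""
    rate = sum(s >= PROFICIENCY_CUT for s in scores) / len(scores)
    return sum(scores) / len(scores), rate

for label, scores in [("last year", last_year), ("this year", this_year)]:
    mean, rate = summarize(scores)
    print(f"{label}: mean score = {mean:.1f}, proficiency rate = {rate:.0%}")
# last year: mean score = 315.7, proficiency rate = 50%
# this year: mean score = 317.8, proficiency rate = 30%
```

In this made-up case, the average score goes up while the headline proficiency rate drops by twenty points, which is exactly why treating the rate as a faithful summary of “how students are doing” can mislead.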
Granted, these types of errors are not at all limited to New York; they occur in virtually every state. Moreover, the new standards are a particularly drastic change, and the test results are in need of unusually careful interpretation vis-à-vis those from previous years. It’s one thing to fail to acknowledge that shifts in the composition of the test-taking sample can influence results. It is a rather more serious misinterpretation to proclaim that there’s been a