Student Grades, Test Scores, and Rankings
by teacherken
Thu Nov 04, 2010 at 02:32:07 AM PDT
Some want to tie teacher evaluation to student performance on external tests. They may advocate a value-added methodology, which in theory should allow us to rank teachers by how much their students improve. There are methodological questions about whether we can truly isolate what teachers have actually contributed to student performance, but I found myself asking: if some propose to evaluate teachers by how much their students improve, why are we not evaluating students the same way? Why do we insist on artificial levels of performance, determined by percentage scores and weights, as if converting everything to a 100-point scale communicates something meaningful about a student: s/he performed at an A level, or got a 93 percent overall. Is that really meaningful? Who has done more, the student who begins at a very low level of performance and rises to what we would classify as a C, or the student who begins with a high A and stays there?
Here I think of a class from many moons ago: 27 students in a "Talented and Gifted" class, all 9th graders, 23 of whom finished with final grades of A. Consider several students from that class, whose names have been changed to protect their identities.
Early on, Natalie was getting 94s on my tests and written assignments when no one else was over 90. I pulled her aside and told her that if she did not improve what she was doing, she would be