{"id":5021,"date":"2010-09-01T11:38:34","date_gmt":"2010-09-01T11:38:34","guid":{"rendered":"http:\/\/scienceblogs.com\/principles\/2010\/09\/01\/teacher-evaluation-and-test-sc\/"},"modified":"2010-09-01T11:38:34","modified_gmt":"2010-09-01T11:38:34","slug":"teacher-evaluation-and-test-sc","status":"publish","type":"post","link":"http:\/\/chadorzel.com\/principles\/2010\/09\/01\/teacher-evaluation-and-test-sc\/","title":{"rendered":"Teacher Evaluation and Test Scores, aleph-nought in a series"},"content":{"rendered":"<p>There&#8217;s been a lot of energy expended blogging and writing about the <a href=\"http:\/\/www.latimes.com\/news\/local\/teachers-investigation\/\">LA Times&#8217;s investigation of teacher performance<\/a> in Los Angeles, using &#8220;Value Added Modeling,&#8221; which basically looks at how much a student&#8217;s scores improved during a year with a given teacher. <a href=\"http:\/\/www.slate.com\/id\/2265657\/?from=rss\">Slate rounds up a lot of reactions<\/a>, in a slightly snarky form, and <a href=\"http:\/\/motherjones.com\/kevin-drum\/2010\/08\/what-makes-great-teacher-great\">Kevin Drum has some reactions of his own<\/a>, along with links to <a href=\"http:\/\/www.quickanded.com\/2010\/08\/monday-morning-thoughts-on-l-a.html\">two<\/a> <a href=\"http:\/\/www.quickanded.com\/2010\/08\/great-teaching-blind-spot.html\">posts<\/a> from Kevin Carey, who blogs about this stuff regularly. Finally, Crooked Timber has a <a href=\"http:\/\/crookedtimber.org\/2010\/08\/30\/evaluating-teachers-using-test-scores\/\">post about a recent study showing that value-added models aren&#8217;t that great<\/a> (as CT is one of the few political blogs whose comments aren&#8217;t a complete sewer, it&#8217;s worth reading the ensuing discussion as well).<\/p>\n<p>Given all that, there&#8217;s not a whole lot left to say, but since I have strong opinions on the subject, I feel like I ought to say something. 
First and foremost, I really like Kevin Drum&#8217;s summary of the summary of the problem:<\/p>\n<blockquote>\n<p>But the problem with teachers is that assessing their performance isn&#8217;t just hard, it&#8217;s even harder  than any of those other professions. Product managers interact closely with a huge number of people who can all provide input about how good they are. CEOs have to produce sales and earnings. Magazine editors and bloggers need readers.<\/p>\n<p>But teachers, by definition, work alone in a classroom, and they&#8217;re usually observed only briefly and by one person. And their output &#8212; well-educated students &#8212; is almost impossible to measure. If I had to invent a profession where performance would be hard to measure with any accuracy or reliability, it would end up looking a lot like teaching.<\/p>\n<\/blockquote>\n<p>This is basically what I&#8217;ve said dozens of times before. Evaluating teachers is really difficult, and the report linked by Crooked Timber gives one really nice demonstration of just how bad even the value-added method (described by Kevin Carey as &#8220;the worst form of teacher evaluation but it&#8217;s better than everything else&#8221;) can be:<\/p>\n<blockquote>\n<p>A study designed to test this question used VAM methods to assign effects to teachers after controlling for other factors, but applied the model backwards to see if credible results were obtained. Surprisingly, it found that students&#8217; fifth grade teachers were good predictors of their fourth grade test scores. 
Inasmuch as a student&#8217;s later fifth grade teacher cannot possibly have influenced that student&#8217;s fourth grade performance, this curious result can only mean that VAM results are based on factors other than teachers&#8217; actual effectiveness.<\/p>\n<\/blockquote>\n<p>This is a major, major problem for any attempt to use this as an evaluation scheme.<\/p>\n<p>That said, I think discussion of and research into these questions is ultimately a good thing.<\/p>\n<p><!--more--><\/p>\n<p>That doesn&#8217;t mean I really approve of the LA Times&#8217;s grandstanding, which seems to be more about making a splash and boosting readership than any sincere desire to get to the bottom of this issue. But if that&#8217;s what it takes to get public officials to start collecting the data you would need to really study this problem, then it&#8217;s probably to the good.<\/p>\n<p>There are severe problems with even VAM evaluations, which are subject to very large fluctuations:<\/p>\n<blockquote>\n<p>One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%. Another found that teachers&#8217; effectiveness ratings in one year could only predict from 4% to 16% of the variation in such ratings in the following year. Thus, a teacher who appears to be very ineffective in one year might have a dramatically different result the following year. The same dramatic fluctuations were found for teachers ranked at the bottom in the first year of analysis.<\/p>\n<\/blockquote>\n<p>There might, however, be ways to tease something useful out of the data. Year-by-year fluctuations may be very large, but does a three-year rolling average, for example, give you more consistent results? 
Are there factors that haven&#8217;t been controlled for that might be taken into account in a new study?<\/p>\n<p>The research clearly seems to indicate that an annual evaluation based on test scores, even value-added test scores, is next to useless. And the strong correlations between test scores and socioeconomic factors mean that these should absolutely not be used for any kind of state-wide or national merit evaluations. But that doesn&#8217;t mean that there isn&#8217;t anything to be gained by studying the question, and collecting lots of data is a good place to start.<\/p>\n<p>I haven&#8217;t had time to go through the EPI report in detail (I had vain hopes of doing so, which is why this is two days later than all the other posts on the topic), but I did want to pull out one other tidbit that struck me as interesting:<\/p>\n<blockquote>\n<p>A second reason to be wary of evaluating teachers by their students&#8217; test scores is that so much of the promotion of such approaches is based on a faulty analogy&#8211;the notion that this is how the private sector evaluates professional employees. In truth, although payment for professional employees in the private sector is sometimes related to various aspects of their performance, the measurement of this performance almost never depends on narrow quantitative measures analogous to test scores in education.<br \/>\nRather, private-sector managers almost always evaluate their professional and lower-management employees based on qualitative reviews by supervisors; quantitative indicators are used sparingly and in tandem with other evidence. Management experts warn against significant use of quantitative measures for making salary or bonus decisions.<\/p>\n<\/blockquote>\n<p>There&#8217;s even a scholarly citation, to pp.93-96 of <a href=\"http:\/\/www.epi.org\/publications\/entry\/books_grading_education\/\">this book<\/a>. 
Throw that in with the fact that obvious incompetents somehow hang onto private-sector jobs far longer than many of the assertions made in favor of various teacher evaluation schemes would have you believe (insert your favorite bad customer service story here), and keep it in mind the next time the subject comes up.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#8217;s been a lot of energy expended blogging and writing about the LA Times&#8217;s investigation of teacher performance in Los Angeles, using &#8220;Value Added Modeling,&#8221; which basically looks at how much a student&#8217;s scores improved during a year with a given teacher. Slate rounds up a lot of reactions, in a slightly snarky form, and&hellip; <a class=\"more-link\" href=\"http:\/\/chadorzel.com\/principles\/2010\/09\/01\/teacher-evaluation-and-test-sc\/\">Continue reading <span class=\"screen-reader-text\">Teacher Evaluation and Test Scores, aleph-nought in a series<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8,49,13,28,81,82,75],"tags":[306,123,87,363,521],"class_list":["post-5021","post","type-post","status-publish","format-standard","hentry","category-academia","category-class_issues","category-education","category-politics","category-economics_1","category-socialscience","category-society","tag-economics-2","tag-education-2","tag-politics-2","tag-social-science","tag-teacher-evaluation","entry"],"_links":{"self":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/5021","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true
,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/comments?post=5021"}],"version-history":[{"count":0,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/5021\/revisions"}],"wp:attachment":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/media?parent=5021"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/categories?post=5021"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/tags?post=5021"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}