A history prof from Catholic University named Jerry Z. Muller is flogging a new book titled The Tyranny of Metrics, most recently via an interview at Inside Higher Ed (there’s also a version at the Chronicle of Higher Education, but it’s paywalled, so screw them). This is being hailed in many parts of my social-media universe as being a righteous takedown of the mania for “assessment” in all things.
And, you know, I have some sympathy for this position. A lot of the push for greater “assessment” of everything in academia is pretty useless, just resulting in a lot of silly knees-bent running about. All too often, the people calling for more “assessment” of things don’t have any coherent idea of what to do with the quantitative data they generate; they just want to be seen to be generating quantitative data, because that’s a Thing that accrediting bodies have decided is important for colleges and universities to be seen to be doing. As someone from a quantitative science, that kind of pointless number-accumulation drives me nuts (and I’ve said so frequently enough that colleagues sigh heavily when the subject comes up in a meeting I’m at, even before I raise my hand to speak…).
That said, I’m not quite ready to sign on with Muller and his anti-metrics campaign, largely because of comments like this (from the IHE interview):
The key components of metric fixation are the belief that it is possible and desirable to replace judgment, acquired by personal experience and talent, with numerical indicators of comparative performance based upon standardized data (metrics)
At a very superficial level, that sounds great, and is very flattering to the faculty ego. We have spent years acquiring subject-matter expertise, after all, and are surely qualified to judge students and others.
On another level, though, this creeps me out. The problem is that an exclusive reliance on “judgment acquired by personal experience and talent,” unchecked by reference to standardized data, is a wonderful way for decision-making processes to become hopelessly corrupted by individual biases. Many of the most pernicious features of academia have gotten there through the operation of biased personal judgment, stretching back decades, and a lot of efforts to improve academic culture are at their core efforts to unwind those years of biased judgments.
The inclusion of “talent” in that especially gives me pause, because it’s uncomfortably close to the idea that some people are special and succeed for that reason. This is one of the most harmful ideas in all of academia, the source of endless social-media angst, and I’m not at all comfortable with the idea of enshrining it as one of the criteria for who gets to exercise judgment.
This is not to say that any and all quantitative metrics are automatically superior to the judgment of a professional– many of the easy numerical measures we have are, as Muller correctly notes, basically garbage. At the same time, though, we know that they’re garbage precisely because they’re quantitative. People objecting to the use of student course evaluations as a metric for faculty performance rely heavily on the argument that they’re biased against faculty from underrepresented groups. We know that because the bias is something that can be quantified, and has been quantified in numerous studies.
As crazy as it is to claim that quantitative metrics are inherently objective because they’re quantitative, it’s even crazier to say that relying on individual judgment is the fix for that. And yet, that happens an awful lot when this subject comes up. On a few occasions, I’ve heard this pointed out, and the counter-argument basically amounts to “It’s OK because unlike the quantitative systems we object to, we have the right biases…” I don’t find this to be a great testament to the quality of judgment being employed.
So, on the topic of metrics and assessment and how they’re debated in academia, I find myself deeply conflicted. I’m happy to agree with the claim that many of the “assessment” exercises we currently do are useless, and many of the metrics used in higher education are bad. All too often, though, the anti-metric argument slides directly from “We’re doing a bad job of quantifying this particular thing” to “We should stop trying to quantify anything,” and that’s not something I can sign on to.