Inside Higher Ed today offers a column by Daniel Chambliss of Hamilton College, taking issue with the Spellings commission report on higher education and its analogies comparing education to manufacturing:
By the conclusion of Secretary of Education Margaret Spellings’ recently-convened Test of Leadership Summit on Higher Education, I finally understood why her proposals are so … well, so ill-conceived. They rest on a faulty metaphor: the belief that education is essentially like manufacturing. High school students are “your raw material,” as Rhode Island Gov. Donald Carcieri told us. We need “more productive delivery models,” economies of scale, even something called “process redesign strategies.” Underlying everything is the belief that business does things right, higher education does things wrong, and a crisis is almost upon us, best symbolized by that coming tsunami of Chinese and Indian scientists we hear so much about. Time for higher ed to shape up and adopt the wisdom of business.
As regular readers know, this is very much in line with my own opinions, so you might expect that I’d be thrilled with the op-ed. Unfortunately, Chambliss goes a little too far in the other direction.
He chooses three ideas from the Spellings report to object to in detail:
- “If it isn’t measured, it isn’t happening.”
- Motivation is simple.
- Clearly stated goals at the outset are a prerequisite for success.
He ridicules each in turn, arguing that these ideas may well apply to steel girders, but students are not steel girders. Which is all well and good, except that the propositions represented by those phrases are, for the most part, fairly unobjectionable.
Take, for example, his comment on the first point, which he summarizes as “Without formal assessment… nobody learns anything”:
But for human beings, it’s obviously wrong: unmeasured good things happen all the time. Left alone, a 5-year old will explore, discover, and learn. So will a 20-year-old. They get up in the morning and do things, for at least a good part of the day, whether anyone watches and measures them or not. Many people read even if they aren’t forced to. The professor does nothing; the student learns anyway.
That’s a lovely thought, and I’m sure it’s true to some degree, but it completely misses the point. The idea, as I understand it, is not that we need to give lots of tests in order to force students to learn; it’s that we need to give tests so we can know what they’ve learned.
And, you know, I don’t have a problem with that. I’m a scientist, so I like data, and collecting data is generally a good thing. The mere fact that learning can happen outside a formal classroom context is no reason not to try to collect data about what’s going on.
Now, it’s also true that data collection can be overdone, particularly when you get into “high-stakes testing” and the like. Whatever means you choose to assess student progress should be minimally burdensome for the people doing the ground-level work, and you shouldn’t fool yourself into thinking that you’re measuring finer distinctions than your tools allow. There are solid arguments to be made against the Spellings commission and other educational reports on those grounds. That’s not what I get from this op-ed, though, and I’m not comfortable with what it does say.
There are similar problems with the objections raised to the other two points. The argument is strongest on the motivation issue, where he correctly notes that pushing simple incentives too hard can lead to perverse results. But again, the fact that dramatic actions can have unexpected consequences doesn’t mean that more moderate actions can’t work as planned. The final point is just bizarre, though: the need for clear goals seems to me completely obvious and reasonable. You can disagree about what the goals should be, or what tolerance you’ll allow for implementations that fall just short of those goals, but you need to have goals, and the clearer the better.
So, once again, I find myself in the squishy middle. I find most advocates of “assessment”-based reform in education to be excessively technocratic, but Chambliss comes off as the prototypical fuzzy-headed humanist, trying to deny the very existence of any objective reality, and I’m not comfortable with that, either. (The great irony here is that his byline lists him as “director of the Project for Assessment of Liberal Arts Education.”)
On some level, both sides have the same problem with the analogy: groups like the Spellings commission push the comparison between education and business farther than I think is productive, but Chambliss goes farther still in insisting that they want to treat students like steel girders. Neither view is particularly accurate, and the analogy has become a real barrier to communication.