I spent an inordinate amount of time yesterday reading an economics paper, specifically the one about academic salaries and reputations mentioned on the Freakonomics blog. There’s a pdf available from that post, if you’d like to read it for yourself.
The basic idea is that they looked at the publication records of several hundred full professors of economics, and publicly available salary data for many of the same faculty, and tried to correlate those with the “reputation” of the professors in question. They used a couple of indirect means to assign each faculty member a “reputation,” mostly based on the academic reputation of their home institution, as determined from various ranking schemes.
They found that reputation was positively correlated with the total number of citations to a person’s papers, but negatively correlated with the total number of papers. That is, faculty whose papers got lots of citations were generally more highly regarded than those with fewer citations, but faculty who published a greater number of papers were less highly regarded than faculty who published fewer. However, they found that salary was positively correlated with the number of papers: the more papers a given person published, the more money they made.
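For concreteness, the reported pattern boils down to three pairwise correlations, which you could check on any dataset of this shape in a few lines. This is a minimal sketch assuming a table with one row per professor; the file name and column names are my invention, not the paper’s:

```python
import pandas as pd

# Hypothetical data: one row per full professor.
# "professors.csv" and all column names are invented for illustration.
df = pd.read_csv("professors.csv")

print(df["reputation"].corr(df["citations"]))  # positive, per the paper
print(df["reputation"].corr(df["papers"]))     # negative, per the paper
print(df["salary"].corr(df["papers"]))         # positive, per the paper
```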
That pattern seems odd at first glance, but isn’t terribly surprising if you think about it. Salaries are generally set by campus-wide administrators or committees, who need to judge a wide range of disciplines, so they naturally fall back on counting publications. Reputation is generally determined within one’s own field, by people who are in a better position to assess the relative quality of publications, and who give more weight to a smaller number of high-impact papers.
So that wasn’t all that odd. What was weird to me was the way they dealt with the whole thing.
They spent a few pages toward the end of the paper discussing What It All Means, and trying to find ways to explain their results. And every single one of those explanations, as far as I could tell while reading quickly, assumed a causal connection between an increased number of papers and a decreased reputation. The authors went through some genuinely impressive contortions trying to explain how publishing one extra paper would lower your standing in the academic community.
But all they’ve really shown is correlation, not causation. It seems much simpler, to me, to explain the whole thing by assuming that reputation is determined by some other factor, not solely by the publication record (probably something like the publication record early in one’s career, well before the full professor level). Those who establish a strong reputation early in their careers don’t feel pressured to publish lots of papers to boost their standing, while those who find themselves with a relatively weak reputation feel more pressure to publish, in hopes of professional advancement.
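That story is easy to check with a toy simulation: make early-career reputation an unobserved confounder that raises later reputation and lowers publication count, and the paper’s correlations fall out with no causal penalty for publishing at all. This is a sketch with made-up coefficients, not a model of the actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Unobserved confounder: reputation established early in the career.
early_rep = rng.normal(0.0, 1.0, n)

# Later reputation tracks early reputation; publishing per se
# has no effect on it in this toy model.
reputation = early_rep + rng.normal(0.0, 0.5, n)

# Faculty with weaker early reputations publish more, hoping to catch up.
papers = 20.0 - 5.0 * early_rep + rng.normal(0.0, 3.0, n)

# Salary committees just count publications.
salary = 60_000 + 2_000 * papers + rng.normal(0.0, 5_000.0, n)

print(np.corrcoef(papers, reputation)[0, 1])  # comes out negative
print(np.corrcoef(papers, salary)[0, 1])      # comes out positive
```

Publishing an extra paper does nothing to anyone’s reputation in this setup, yet the negative correlation shows up anyway.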
This explanation would also be consistent with my anecdotal experience, thanks to my odd situation in graduate school. I did my graduate work at NIST, a national laboratory, in one of the very best groups in the field. Because it was a national lab, the permanent staff didn’t face the pressure of tenure or the like (and the bureaucracy involved in publishing anything was annoying), so they only published papers that were really important: if a result wouldn’t stand a good chance of making Physical Review Letters, it probably wasn’t worth writing up. There were a bunch of things done in the group that could’ve been spun off into smaller publications, but it wasn’t worth the hassle to anybody there. Lots of other groups at places with less institutional prestige do generate those kinds of articles, though.
The thing is, the authors of the economics study don’t seem to consider any explanation of this type. Instead, they proceed from the assumption that any relationship they see is a causal one, which leads to all sorts of odd reasoning. Now, granted, that’s the model they set up at the beginning of the paper, with a bunch of basically pointless equations (if you’re just writing “R(x) = f(x),” without ever saying what f(x) actually is, you’re wasting everybody’s time), but given how silly most of their attempts at a causal explanation are, it seems like the assumption ought to be revisited.
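For contrast, committing to an actual functional form would mean writing down something the data could confirm or refute. A purely illustrative example, in my notation rather than anything from the paper:

R_i = β₀ + β₁ ln(1 + C_i) + β₂ P_i + ε_i

where C_i is professor i’s total citations, P_i is the total paper count, and the signs and sizes of the β coefficients are exactly the things a reader would want pinned down.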
Now, admittedly, I did read this quickly, so it’s possible that I skipped over the part where they did the responsible thing and noted that correlation is not necessarily causation. But if they did, I didn’t see it.
This is the sort of thing that makes it difficult for me to take economics all that seriously. The discipline has a reputation for loving simple models and clinging to them well past the point where it’s clear that they don’t work, and this paper appears to fit right in with that.
I didn’t read the body of the paper, but I saw this statement in the abstract:
So yes, they are claiming that the correlation is a causation.
To be taken more seriously, economists need to use puppets. Works for me.
I think the fundamental error is assuming a mention on the Freakonomics blog is a signal of good-quality economic research.
And economists worry about causation/correlation all the time.
Management is about process, not product; hence Gresham’s law for currency or people: whether the water is salt or fresh, shit floats. Competent economists would fare better in compensation for their expertise in recouping the value of labor, if their expertise meant anything in the real world.
Re: puzzled at #3
> economists worry about causation/correlation all the time.
I’m not sure exactly what you mean by “all the time”, but scroll up on this page for a counterexample.
So did they do any controlling for the relative merit of the various journals? I skimmed the article and didn’t see any mention of it. It seems logical to assume that scientists (or other academics, for that matter) choose to invest much more time and effort in a paper that might go into an extremely prestigious journal, when the same time could be used to crank out two or three papers on less important topics for less notable venues. I’d think economists of all people would understand that time is a limited resource, to be invested wisely like anything else.
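The check itself would be easy enough; here is a sketch, again with an invented file and invented column names (“journal_score” standing in for whatever venue-prestige measure you prefer):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data; file and column names are invented for illustration.
df = pd.read_csv("professors.csv")

# Reputation regressed on paper count alone:
X1 = sm.add_constant(df[["papers"]])
print(sm.OLS(df["reputation"], X1).fit().params)

# Same regression with average venue prestige as a control; if the
# negative coefficient on papers shrinks toward zero, journal quality
# was doing the work.
X2 = sm.add_constant(df[["papers", "journal_score"]])
print(sm.OLS(df["reputation"], X2).fit().params)
```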
Lots of economists worry about correlation and causation all the time, but I do think it’s one thing that has gotten a bit lost in the field of “behavioral economics.”