Long Author Lists and Books Not Written

Back when I was in grad school, and paper copies of journals were delivered to the lab by a happy mailman riding a brontosaurus, I used to play a little game when the new copy of Physical Review Letters arrived: I would flip through the papers in the high energy and nuclear physics sections, and see if I could find one where the author list included at least one surname for every letter of the alphabet. There wasn’t one every week, but it wasn’t that hard (particularly with large numbers of physicists from China, where family names beginning with “X” are more common).

Every so often, somebody from outside physics stumbles on one of these, and the rest of the story follows with depressing predictability, as in this Times Higher Education article:

When Gavin Fairbairn, professor of ethics and language at Leeds Metropolitan University, came across an article titled “The Sloan Digital Sky Survey: Technical summary” in The Astronomical Journal, in many ways it seemed “pretty ordinary”.

It was 5,230 words long, including the text of its 39 footnotes, and had 45 references.

Yet it was also an article with “more authors than any other publication I have ever come across in any of the areas in which I have worked”, Professor Fairbairn said.

A total of 144 authors were listed – equating to a mean contribution of 36.3 words each.

Professor Fairbairn added: “No doubt all those named contributed to the research. However, I find it difficult to understand how 144 individuals, however close their working relationship, could be involved in writing it.”

Honestly, my first reaction is that if 144 authors is the longest list he’s ever run across, he can’t have been trying all that hard. The one real exception to PRL’s “everything must fit in four pages” rule is for collaborations whose author lists are long enough to require an entire journal page of their own. And I’ve seen cases where the collaboration was so large that they just put “ACRONYM Collaboration” as the author, and expected people to go elsewhere to find the names written out.

This is nothing new– modern science is a highly collaborative enterprise, and there are well-established policies governing how these papers are written and who gets to be an author. This has been going on for decades, and nobody in the relevant fields has a problem with it.

Of course, it’s always new and troubling to scholars in irrelevant fields, for the noblest of reasons. The quote above is followed immediately by:

“I find it even more difficult to imagine how any assessment at all could be made of their contribution when it comes to awarding academic brownie points.”

Ah, yes. It all comes down to CV padding, and the burning ethical question of whether one person is getting more credit than they deserve for their scholarly activity.

Look, if we want to talk about the counting of papers for academic merit and promotion, by all means, we can do that. But let’s also talk about books, while we’re at it. Specifically, the fact that scientists don’t write them.

OK, in a strict sense, scientists do write books– many scientists in academia write textbooks, and some of us write popular-audience books with a talking dog. But when an academic in the humanities says “book” in a professional context, they mean a scholarly monograph of the sort published by an academic press. And scientists don’t write those books, generally speaking, especially not early in their careers.

Scholarly monographs of this sort are the gold standard of scholarship in the humanities, though. If you want to get tenure at a good school, you need to have at least one book in press by the time you come up for review. Journal articles are nice, but a book is essential.

As a result, many academic merit systems are set up to reward scholarly books at a disproportionate level, at least from the point of view of a scientist. The only sure way to reach the highest tier of our previous merit system was to have a book published. It was kinda-sorta possible to get there with a cluster of journal articles in a single year, but that wasn’t a sure thing.

And it’s not a question of level of effort, either– the work required to get a couple of scientific papers into top journals is basically equivalent to the work required to get a book contract, as far as I can tell. The real work of my Ph.D. thesis was in the 30-odd pages of journal articles I published. The hundred-plus pages of the thesis itself were icing on the cake.

You’ll have a hard time convincing a lot of academics outside science of that, though. Two papers in Science or Nature can represent a couple of years’ worth of hard labor in the lab, but they’ll come to maybe ten pages of text and figures. That just doesn’t look as impressive as a hundred-page book from Directional State University Press.

So, yeah, scientists who work in large collaborations generate large numbers of papers with large numbers of co-authors, which makes it difficult to assess their productivity by mere paper-counting. They also have essentially no opportunity to generate books, which knocks out a whole category of academic productivity that is available to people like Prof. Fairbairn.

So, while I agree with the final comment attributed to him in the article (“Where competition for internal promotions cuts across disciplines, he said, there were dangers in using publication records as a major criterion.”), I don’t think it’s as one-sided an issue as the article implies. Measuring academic productivity is hard, and only really makes sense relative to the standards of a discipline (or sub-discipline– I don’t have any opportunity to write papers as part of a hundred-strong collaboration, either, despite being a physicist). Applying the standards of one discipline to work in another just doesn’t work, in any direction.

9 comments

  1. For academics in the humanities, writing is pretty much what they do. The people you’re quoting seem to assume that an equivalent number of researcher-hours go into every publication, and everyone must contribute to the actual writing. But for those monster physics papers, as you know, there’s a lot of researcher-hours going into planning, running, and analyzing all of the experiments. I can’t say for sure, but I’d bet that most of the authors on that long list were honestly spending a pretty significant amount of time on the research. The writing is almost an afterthought.

  2. How can anyone write an article like that without talking to some of the authors of many-author papers to find out how the process works, what authorship means, how credit is apportioned…?

  3. Despite agreeing with everything you write, I do think there is a real problem for large collaborations in distinguishing and promoting their best and brightest, especially the early-career ones, for whom this is often crucial. There are some mechanisms for doing so, but all told, young and upcoming theorists (for example) have many more ways to distinguish themselves.

  4. “However, I find it difficult to understand how 144 individuals, however close their working relationship, could be involved in writing it.”

    And, speaking as someone until recently in the racket, thank G-d they weren’t. There’s nothing like a complete shift in tone from section to section to drive an editor bats.

  5. Moshe: I assume you refer to, among other things, the question of author position. Presumably you promote Brilliant Young Hotshot by having him be lead author of the paper, which will forever be known as B. Y. Hotshot et al.; the most common means of identifying the authority figure is to list him as either the second author or the last (last individual, if the ACRONYM Collaboration is listed as an author). Of course this frequently results in high-energy experimental particle physics faculty with only one or two first-author publications. Yes, it becomes a problem, more so than in other areas (including high-energy theoretical particle physics) where author lists are much shorter. In my subfield we almost never see three-digit author lists, but low-two-digits is common, and some instrument papers go into the high two digits.

    In 1993 a paper by E. Topol and 975 others won the Ig Nobel Prize for Literature. For the record, that was a biomedical research article, so the phenomenon is not limited to physics. I don’t know if that record has been superseded (I don’t have a personal PRL subscription).

  6. No, Eric Lund, in theoretical high-energy physics, as in experimental high-energy physics and mathematics, author lists are always in alphabetical order.

  7. onymous@6, if what you say is true, then how are there any young faculty in experimental high-energy particle physics whose last name does not start with A? As Otto points out, somebody has to be the main author of the paper, and if that person is not the first author, she is getting seriously ripped off. The concept of a faculty candidate, let alone an assistant professor trying for tenure, who has no first-author publications is mind-boggling. I can understand listing the cast of thousands part of the author list in alphabetical order, but in most areas of physics (including my subfield) the first author is presumed to be the main author, full stop. Any other algorithm leaves no reliable means for distinguishing Brilliant Young Hotshot from Mediocre Postdoc, which is the problem Moshe complained about. Invited talks don’t tell me whether I am dealing with B. Y. Hotshot or Eloquent Speaker (who may be either B. Y. Hotshot or M. Postdoc); I must assume (for the obvious reason that most teams want their spokespeople to be coherent) that, ceteris paribus, people with good spoken English skills are more likely than those who speak English only with difficulty to get the ACRONYM Collaboration’s invited speaker slots and places at the press conference panel.

  8. CVs in high-energy experimental physics will typically have several sections of publications: papers on which I was lead author, papers to which I made a significant contribution, papers for which I was on the review committee, and everything else. Smart candidates won’t even list the “everything else”; they’ll give a count, maybe one or two highlights, and a SPIRES query to find the rest. It is a small enough community that successful fakery is extremely unlikely. Letters of recommendation will also usually point out the significant contributions.

    I would like to see a requirement that the LHC experiment spokespersons recite the entire author list of their experiment at least once per annum (perhaps at the Christmas party). My name, incidentally, would not be on that list, as I’m currently classified as a computing professional rather than a physicist, due to some internal politics.

  9. and if that person is not the first author, she is getting seriously ripped off

    No. You’re extrapolating the social conventions of your subfield to others where they don’t apply. There are plenty of fields where the convention is alphabetical order, and they get along just fine. The only important thing is that if they’re up for promotion or a job before a committee including people from other fields, those people need to be made aware of the convention. Letters of recommendation and (in small enough communities) word of mouth make it clear what one’s contributions are.
