Science of the Times

A couple of good science stories in today’s New York Times:

First, an article on the Laser Interferometer Gravitational-Wave Observatory (LIGO). The current news hook, weirdly, appears to be a recent calculation of the expected magnitude of the signal resulting from the collision and merger of two black holes. Why this merits a long article, I’m not sure– I was under the impression that they already had a decent idea of the expected signal sizes– but it’s a decent article.

The other story will probably get more play, as it’s about the deathless topic of problems with peer review. As others have noted, peer review isn’t the problem. It’s working about as well as can be expected– all you can really do when refereeing a paper is catch really transparently idiotic problems with the writing and presentation of the data. If you could easily reproduce the calculations in a theory paper, it wouldn’t be worth publishing, and as for the duplication of experimental results, forget it, at least before publication.

In the longer term, of course, results do need to be reproducible, and that’s how the frauds we know of have been caught. That’s the really important part of the system of modern science– not the pre-publication review, but the post-publication replication of the results by other groups. Pre-publication review is important, but it’s never going to be foolproof, and we shouldn’t be too worried about its occasional failures.

7 comments

  1. The recent calculations are numerical relativity results for the shape of the signal, not just its strength – the actual waveform we expect to measure.

    Up to now, we had a good idea of the shape as the black holes approached, from a post-Newtonian approximation, and we had a good idea of the late-time behaviour: a perturbed black hole ringing down. But both of those analytical approximations break down in the middle region, where the two holes plunge together. Numerical simulations modeling the whole process have only succeeded in the past nine months or so (ten coupled nonlinear partial differential equations, with many details of gauge/coordinate choice to worry about). It’s awesomely cool.

    Knowing the details of the expected signal is pretty important because the signal-to-noise ratio is so low that the only way to pull out a detection is to cross-correlate the data with a model waveform. It takes some pretty careful data analysis.
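    The idea the commenter describes – sliding a model waveform along noisy data and looking for a correlation peak – can be sketched in a few lines of Python. This is a toy illustration only, not the actual LIGO pipeline; the chirp parameters, sample rate, and noise level are all made up for the demo.

    ```python
    import numpy as np

    # Toy matched-filter demo: bury a known chirp-like template in noise
    # loud enough that it can't be seen by eye, then cross-correlate the
    # data with the template to recover where the signal arrived.
    rng = np.random.default_rng(42)
    fs = 1024                               # samples per second (arbitrary)
    tt = np.arange(0, 0.5, 1 / fs)
    # a frequency-increasing "chirp", tapered by a Hann window
    template = np.sin(2 * np.pi * (40 * tt + 200 * tt**2)) * np.hanning(tt.size)

    data = rng.normal(0.0, 1.0, 4096)       # noise at the signal's own amplitude
    inject_at = 1500
    data[inject_at:inject_at + template.size] += template

    # slide the template along the data; the correlation peaks near the
    # injection point even though no single sample stands out from the noise
    corr = np.correlate(data, template, mode="valid")
    est = int(np.argmax(corr))              # estimated arrival index
    ```

    The correlation sums up the tiny per-sample agreement between data and template over the whole waveform, which is why knowing the detailed signal shape – and not just its overall strength – matters so much.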

  2. There is one key difference between the article in the Times and our analysis at BioCurious on peer review — this article was written by an M.D. and focused on the problems associated with, for the most part, medical publications like the New England Journal of Medicine. In medicine, I think it actually isn’t feasible to reproduce a lot of the studies people have done, so there it is definitely a bigger problem than in physics or chemistry, where, if you really need to prove yourself, you send another lab your samples for them to do the same measurements on. Getting a set of patients to test new drugs is a slightly larger endeavour (and has larger ramifications for society at large) than measuring the conductivity of organic superconductors. That being said, my stance hasn’t changed: peer review is not to blame.

  3. It’s working about as well as can be expected– all you can really do when refereeing a paper is catch really transparently idiotic problems with the writing and presentation of the data.

    The NYT article has so many different issues jumbled together that it’s hard to know how to begin to respond to it. (And I’m amazed that after the response by the Vioxx researchers to the NEJM, we’re still talking about researchers omitting data and not about a journal ambushing its authors with spurious charges and not giving them a chance to respond.)

    But what I find fascinating about cases like the Hwang stem cell paper, and similar cases of fraud like the one in Francis Collins’ lab, is that their flaws really were “transparently idiotic”, at least in hindsight. It’s amazing how difficult it is to pick out such ludicrously obvious fraud when you’re not actively looking for it.

  4. Actually, the article that talks about the numerical relativity gets something wrong. LIGO can detect high-frequency gravitational waves, but the kind of black hole mergers that the Goddard group modeled will emit low-frequency gravitational waves, which fall outside the LIGO band. Those black hole mergers will be a target for the Laser Interferometer Space Antenna (LISA), not LIGO.

  5. Oh, and LIGO is not shaped like a big V. It’s shaped like an L. It has to be, in order to detect gravitational waves.

    [end nitpicking]

  6. LIGO is sensitive to approximately solar-mass black hole mergers – they’re actually one of the strongest sources in the LIGO frequencies, though they sweep through pretty quickly on their way to merger. The numerical stuff, I think, can be scaled to arbitrary mass – it’s generally all done in units of some mass scale, with G=c=1.
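    The scaling the commenter mentions is easy to make concrete. In G=c=1 units, one solar mass corresponds to a time GM☉/c³ ≈ 4.9 microseconds, so a single dimensionless simulated waveform can be rescaled to any total mass. A small sketch (the function name and sample values here are illustrative, not from any actual simulation code):

    ```python
    import numpy as np

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8         # speed of light, m/s
    M_sun = 1.989e30    # solar mass, kg

    def mass_to_seconds(m_solar):
        """Convert a mass (in solar masses) to the G=c=1 time unit in seconds."""
        return G * (m_solar * M_sun) / c**3

    # the same dimensionless waveform, sampled at times t/M = 0, 1, 2, ...,
    # stretches or compresses depending on the total mass it is scaled to:
    t_over_M = np.arange(0.0, 100.0, 1.0)
    t_stellar = t_over_M * mass_to_seconds(10.0)   # ~10 M_sun: LIGO band
    t_smbh = t_over_M * mass_to_seconds(1e6)       # ~10^6 M_sun: LISA band
    ```

    This is why the same numerical-relativity waveforms serve both the stellar-mass mergers LIGO targets and the supermassive mergers LISA will target: only the overall time (and amplitude) scale changes.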

  7. Thanks for the clarifications on the LIGO calculations. I couldn’t quite figure out what the deal was from the Times article.

Comments are closed.