Experimentalists Aren’t Idiots: The Neutrino Saga Continues

In a lot of ways, the OPERA fast-neutrino business has been less a story about science than a story about the perils of the new media landscape. We went through another stage of this a day or two ago, with all sorts of people Twittering, resharing, and repeating in other ways a story that the whole thing has been explained as a relativistic effect due to the motion of GPS satellites. So, relativity itself has overthrown an attack on relativity. Huzzah, Einstein! Right?

Well, maybe. I’m not quite ready to call the story closed, though, for several reasons. First and foremost is the fact that the primary source I’ve seen cited touting this as the final word on fast neutrinos is a Physics Arxiv Blog, which is emphatically not a reliable source. Despite the name, it is not in any way affiliated with the arxiv itself, and while it is hosted by MIT’s house magazine, the author is not an MIT physicist, but a science writer.

Those issues are confusing to a lot of non-physicists, making the post seem like a much more definitive statement than it actually is. Professional physicists and regular readers of the blog know, though, that it consistently shows a sort of New Scientist credulity toward whatever exotic claim a new paper happens to make. I don’t trust the Arxiv Blog to correctly evaluate the technical merits of a preprint.

Secondly, none of the stories I’ve seen about this (and I admit, I haven’t read all of them) contain any actual journalism. By which I mean that none of them include comments from anyone associated with OPERA on whether they think the claim has any merit. Which means that even taking the Arxiv Blog out of the equation, what we have here is a single-author preprint from somebody in the Department of Artificial Intelligence at a Dutch university, claiming to have found an issue that was missed by a large collaboration of physicists. Which is possible, but not what I’d call a rock-solid case. Until I hear something from people associated with the original measurement acknowledging that this is a possible solution, I’m going to remain skeptical.

Which brings us to the final issue, a sociology-of-science sort of problem: for this to be the explanation would require a whole bunch of people to be idiots. As Tom notes, the error described here seems like the sort of thing that would need to be accounted for in GPS in the first place– the timing error involved is around 32 ns, which corresponds to a position error of around 10 m, and GPS routinely does better than that. Tom’s an atomic clock physicist with some familiarity with this stuff, so I’m inclined to give his opinions a little more weight than those of random people from outside physics.
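For concreteness, the conversion here is just distance = c × (timing error); a quick sanity check, with nothing OPERA-specific in it:

```python
c = 299_792_458.0   # speed of light, m/s
dt = 32e-9          # the timing error at issue, 32 ns
print(c * dt)       # ~9.6 m, i.e. the "around 10 m" position error above
```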

I think the author of the preprint is also confused about the way the experiment worked. That is, he seems to be assuming that all of the timing information comes from the GPS clocks, which are moving relative to the experiment, when in fact the actual event timing comes from clocks on the ground that are at rest with respect to the source and detector. GPS is used to synchronize the ground-based clocks, and to measure the distance between source and detector, but the clocks doing the actual time measurement are in the same reference frame as the experiment itself. But I could be misreading his argument.

(There’s still an issue here, of course, as Matt McIrvin reminded me on Google+, because it’s impossible to perfectly synchronize clocks in a rotating frame, but that doesn’t seem to be what this attempted explanation is about.)

Of course, assuming that the OPERA scientists are a pack of idiots isn’t limited to journalists and AI researchers– lots of theoretical physicists seem to think the same thing, which is kind of depressing. One of the usual suspects in this area actually suggested that the explanation might be that they had failed to account for the probabilistic decay of the muons into neutrinos in the beam, instead putting in a single definite decay time for all of them. This is, frankly, kind of insulting to experimentalists in general. It’d be a little like me saying “Well, I bet your calculation of the vacuum energy is off by 120 orders of magnitude because you forgot to carry the 2 when you added a couple of numbers.” Most of the other suggestions I’ve seen made have been better than that, but still far too quick to assume the worst of the OPERA collaboration.

So, look, the OPERA result is almost certainly going to turn out to be an error of some sort. It might even be some subtle relativistic correction that was left out. But please, try to start your search for an explanation with the assumption that the experimenters aren’t complete idiots. Whatever’s wrong is probably going to be kind of subtle– especially since MINOS saw a similar anomaly, though with worse statistics, back in 2007– and not something that will seem blindingly obvious once it’s discovered.

52 comments

  1. I have only skimmed the paper, and I am not a relativity expert, but my first thought when I heard of this was that surely satellite motion must be corrected for in GPS. The GPS system has a lot of relativity experts involved, and that is the sort of thing they would have thought of, right?

    A few weeks ago I saw a paper arguing that their use of maximum likelihood fitting could give errors if they don’t account for certain effects. My first thought was that if they had done a Monte Carlo simulation (and don’t particle experimentalists always do Monte Carlo simulations?) they would have found and corrected for this effect.

    Truthfully, some of the dialogue around this sounds like creationists saying “I bet those biologists don’t even know what causes mutations, they just use them as a fudge!” and other stuff like that.

  2. “New Scientist credulity”

    1.) ZING!

    2.) Is this truly credulity, or is it favoring entertainment value over scientific value?

  3. I am admittedly not an expert in the field, and I quickly came up with a dozen possible explanations which I all dismissed with “but they would have checked that”. None of them passed the “experimentalists aren’t idiots” test.

    A friend of mine who is an expert in the field (his doctorate was on neutrinos, and he’s currently an experimental physicist on a neutrino-based project) suspects that the error is somewhat stupid. He suspects that a confusion of sign conventions occurred, and that in the analysis code something is being added instead of subtracted (or vice versa), yielding an expected delay 60 ns off of the proper expected delay. But without access to the analysis code to examine, it’s hard to verify where specifically the error is, if that’s it.

    As a programmer who has had to track down bugs like that, I can believe it plausible.
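    As a toy illustration of how a single sign flip produces exactly this kind of offset (the 30 ns correction below is an invented number, not a real term from the OPERA analysis):

    ```python
    # Hypothetical correction term that should be subtracted from the
    # measured time of flight; the value is invented for illustration.
    correction = 30e-9                  # 30 ns

    true_tof = 2.4e-3                   # ~730 km at c, in seconds
    measured = true_tof + correction    # raw reading includes the delay

    right = measured - correction       # correct sign: recovers true_tof
    wrong = measured + correction       # sign flipped in the analysis code

    print((wrong - right) / 1e-9)       # 60.0 -- twice the term, in ns
    ```

    A flipped sign shows up as twice the size of the mishandled term, which is why a 30 ns delay could masquerade as a 60 ns anomaly.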

  4. I think a single-author paper from an artificial intelligence department at a Dutch university does not represent the attitude of most theoretical particle physicists. I would like to believe that the ArXiv blog does not represent the attitude of most science journalists, but I am less convinced. I truly don’t understand this phenomenon of “ambulance chasing” – quickly sending out sensational stories that are almost certainly wrong, and will be proven wrong on a short time scale. Most likely the writers know that full well; you can almost feel the urgency of avoiding any journalistic standards in the rush to get the story out before someone has concrete evidence it is wrong.

  5. There are some major problems with Dr. van Elburg’s argument.

    First and most important: relativistic corrections are already taken into account by the GPS system.

    Second, even if they weren’t, over the time of flight, which lasts only 2.4 milliseconds, the relativistic correction would be something of the order of 1.275 x 10^-12 seconds – much smaller than the margin of error of the OPERA results, and much smaller than the 64 nanoseconds predicted by van Elburg.
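    (A rough numeric check of that order of magnitude, as a sketch: it assumes the standard ~3.9 km/s GPS orbital speed, so the exact figure comes out somewhat different from the one quoted above, but the conclusion is the same.)

    ```python
    c = 299_792_458.0    # m/s
    v = 3.9e3            # GPS satellite orbital speed, roughly 3.9 km/s
    tof = 2.4e-3         # neutrino time of flight CERN -> Gran Sasso, s

    # Fractional time-dilation rate for a clock moving at v (for v << c):
    rate = v**2 / (2 * c**2)    # ~8.5e-11
    print(rate * tof)           # ~2e-13 s, versus the ~6e-8 s OPERA anomaly
    ```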

    And finally, he, like many others, forgets that time dilatation was introduced as a way to explain the constancy of the speed of light and to make the theory fit observations. One can’t do the reverse and use the time dilatation concept to make the observation fit the theory. It defeats, even contradicts, the purpose.

    I really enjoyed the clarity and objectivity of this article. It’s the best of the many I have read so far about the OPERA controversy.

    That said, I believe that the OPERA results will be confirmed. Of course, I’m more than a little partial, since superluminal neutrinos fit perfectly with predictions I made a long time ago.

    Good job,

    Daniel L. Burnstein
    http://www.quantumgeometrydynamics.com

  6. I think a single-author paper from an artificial intelligence department at a Dutch university does not represent the attitude of most theoretical particle physicists.

    No, but there were plenty of suggestions on blogs and in the comments to blog posts that were almost always things requiring the experimenters to be idiots– failing to account for the curvature of the Earth, and that kind of thing.

    I would like to believe that the ArXiv blog does not represent the attitude of most science journalists, but I am less convinced. I truly don’t understand this phenomenon of “ambulance chasing” – quickly sending out sensational stories that are almost certainly wrong, and will be proven wrong on a short time scale. Most likely the writers know that full well; you can almost feel the urgency of avoiding any journalistic standards in the rush to get the story out before someone has concrete evidence it is wrong.

    I don’t think that’s a problem restricted to science journalism. I think it’s a problem with modern journalism, period. It’s one of the ways that a 24-hour news culture makes everything worse– they’re constantly looking for a splashy story, and don’t care too much if it evaporates later on. Political journalism is leading the race to the bottom, but everybody else gets dragged along.

  7. I know a few people trying to figure out the OPERA result, and specifically the issue of the use of GPS and clock synchronization, and the precise shape of the earth apparently does come into it (not just the fact that it is curved, but its precise shape). These are subtle and interesting issues, which is probably the main motivation here, because the result itself is most likely wrong and we might never know exactly how. I guess the general lesson is that standards of rigor on social networks are not all that high, which should not come as a great shock, I don’t think.

    As for the journalism, I would think that science journalism is a niche subject, aimed at a specific audience, and therefore standards could conceivably be a bit higher than, say, covering teenage pop stars or the political story of the day. Probably just wishful thinking on my part.

  8. Amen Chad!!! What is wonderful is that the discussion is happening globally, in real time, for us “spectators” and enthusiasts of what physicists do. We are getting a “psychological” view of science that is rather interesting, and the instant “problem solved” by some journalists gives a rather interesting insight into journalism as well.

    Now if this turns out to be true, that a researcher in artificial intelligence in the Netherlands, who is neither an experimentalist nor a theoretical physicist, discovered the error, what does that say about the experimentalists, and even more about the global community of theorists, who didn’t catch it? Apparently, as it stands right now, 160 experimentalists at CERN and a global community of theoretical physicists didn’t catch what a researcher in artificial intelligence caught. Why that would be true would be a much, much larger and more interesting story than the SLN’s themselves. Do I think he’s right? No, not at all. And if he is right, well, then we have a fundamental psychological breakdown somewhere in the world of physics that is rather disturbing and interesting at the same time.

  9. Amen Chad!!! As a spectator to this, if the explanation (by an AI researcher not directly involved in daily physics research) holds true, that would be even more interesting than the issue of SLN’s themselves. Apparently, as the story unfolds, 160 experimentalists struck out badly, and a global community of theoretical physics professors working daily in the field were completely one-upped by someone outside their direct daily field of expertise.

    As the story unfolds, it takes on a kind of mythology of “if only they had studied Maxwell better, such a simple error would have been discovered.” Do I agree that this is possible? Yes. Is it likely? Absolutely not. I trust the experimentalists not to have made such an error. But then again, groupthink is a rather interesting topic as well.

  10. Nice article. Does anybody know if they can repeat the experiment with low-energy neutrinos like the ones emitted by the 1987 supernova? We know how fast those traveled relative to the light emitted by the supernova. If OPERA repeats their experiment with those kinds of neutrinos, and finds superluminal velocities, this would prove their experimental design has a flaw.

  11. When all is said and done, my bet is on a surveying error of some sort or another, rather than a neutrino physics error. I was guessing that it may be related to the difference between the lumpiness of the actual geoid and the reference ellipsoid that GPS uses, but maybe it is relativity-related instead.

  12. I read over the rebuttal paper, and it appears to be incompatible with what I read in the primary paper (I am not a physicist, btw), so I agree with this article. Namely, the original states that “common-view” was used to sync the earth clocks used to measure the neutrino events. The rebuttal states clearly, “the clocks in the OPERA experiment are orbiting the earth in GPS satellites”.

    A comment above, I think, calculated that the satellites move slowly enough and are near enough to us that the errors would be much smaller than what was calculated. This would agree with the likely 1 ns maximum suggested error for the common-view method.

    Not having looked much further, I think the delay times for the instrumentation might be off – or, more specifically, there are hidden delays not accounted for – OR the syncing was done with a different set of delays than the actual experiments.

    If this sort of experiment has not been done to this resolution before, such high-precision errors would not have shown up. When the group got the upgrade to the newer GPS interface in 2008, it’s possible some assumptions from the era of the earlier instrumentation design were not revisited.

    One clue is that I notice the original paper mentions FPGA delay values. That seems rather specific. Are they ignoring other digital logic delays – for example, some processing-board logic or a non-real-time software system that manipulates those values? Maybe during syncing a different algorithm, and hence a different time to produce results, is in play than during the experiment. There might even be variable values, or some part of the system may not have warmed up to operating temperature and speed (or there are cache issues, etc.).

    If the instrument delay during the experiment in CERN is shorter than used to sync there (the neutrinos “false start” the race relative to expectations at LNGS) or if the instrument delay during experiment measurement in LNGS is longer than during timing sync (we start the LNGS clock too slow relative to the expectations at CERN), we would get the bias in the direction of shorter computed travel time (and hence faster implied speed).

    Notice that most of these flaws are probably not with the theory of measurement or the physics. Whoever put together the hardware, to the extent it is software-based and may have been changed after installation, may not have updated their side of the timing model properly (or the sw/hw was never properly analyzed, at least not under the implied tighter post-2008 requirements). Were COTS components used and/or changed recently?
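    To make the delay-mismatch scenario above concrete, here is a toy model (every delay value below is invented): each timestamp is corrected using a calibrated instrumental delay, but the raw signal actually suffered the run-time delay, so any mismatch leaks straight into the computed time of flight.

    ```python
    true_tof = 2.4e-3    # s, ~730 km at c

    # Hypothetical calibrated vs. run-time delays on each side:
    d_src_cal, d_src_run = 100e-9, 140e-9   # source (CERN) side
    d_det_cal, d_det_run = 100e-9, 80e-9    # detector (LNGS) side

    computed_tof = (true_tof
                    + (d_det_run - d_det_cal)    # detector-side mismatch
                    - (d_src_run - d_src_cal))   # source-side mismatch

    print((true_tof - computed_tof) / 1e-9)  # 60.0 ns apparent early arrival
    ```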

  13. “a sociology-of-science sort of problem”
    I applaud you for being one of the few who understand that reactions to the neutrino measurements are an important social issue. With sadness I see how the usual suspects jump up to hype every rumor in support of their view, while criticism pointing out that all this damages the credibility of science in general is silenced. Public distrust of science is growing, which is dangerous in a modern techno-world where decisions are arrived at democratically, and this neutrino biz is a good way to understand that science and the science media are mainly responsible:
    http://www.science20.com/alpha_meme/refusal_neutrino_results_supports_global_warming_denial_predicted-83583

  14. The OPERA experiment is not the first one refuting special relativity. In 1887 the Michelson-Morley experiment UNEQUIVOCALLY refuted the assumption that the speed of light is independent of the speed of the light source (Einstein’s 1905 light postulate) and confirmed the antithesis, the equation c’=c+v given by Newton’s emission theory of light and showing how the speed of light varies with v, the speed of the source relative to the observer:

    http://www.pitt.edu/~jdnorton/papers/companion.doc
    John Norton: “These efforts were long misled by an exaggeration of the importance of one experiment, the Michelson-Morley experiment, even though Einstein later had trouble recalling if he even knew of the experiment prior to his 1905 paper. This one experiment, in isolation, has little force. Its null result happened to be fully compatible with Newton’s own emission theory of light. Located in the context of late 19th century electrodynamics when ether-based, wave theories of light predominated, however, it presented a serious problem that exercised the greatest theoretician of the day.”

    http://philsci-archive.pitt.edu/1743/2/Norton.pdf
    John Norton: “In addition to his work as editor of the Einstein papers in finding source material, Stachel assembled the many small clues that reveal Einstein’s serious consideration of an emission theory of light; and he gave us the crucial insight that Einstein regarded the Michelson-Morley experiment as evidence for the principle of relativity, whereas later writers almost universally use it as support for the light postulate of special relativity. Even today, this point needs emphasis. The Michelson-Morley experiment is fully compatible with an emission theory of light that CONTRADICTS THE LIGHT POSTULATE.”

    http://www.amazon.com/Relativity-Its-Roots-Banesh-Hoffmann/dp/0486406768
    “Relativity and Its Roots” By Banesh Hoffmann: “Moreover, if light consists of particles, as Einstein had suggested in his paper submitted just thirteen weeks before this one, the second principle seems absurd: A stone thrown from a speeding train can do far more damage than one thrown from a train at rest; the speed of the particle is not independent of the motion of the object emitting it. And if we take light to consist of particles and assume that these particles obey Newton’s laws, they will conform to Newtonian relativity and thus automatically account for the null result of the Michelson-Morley experiment without recourse to contracting lengths, local time, or Lorentz transformations. Yet, as we have seen, Einstein resisted the temptation to account for the null result in terms of particles of light and simple, familiar Newtonian ideas, and introduced as his second postulate something that was more or less obvious when thought of in terms of waves in an ether.”

    Pentcho Valev pvalev@yahoo.com

  15. Although I said above that I don’t think experimentalists are idiots, I still think that, when all is said and done, it will turn out that somebody’s data acquisition or analysis code has some silly error in it, or that one little component isn’t as well-calibrated as they thought it was. I used to be an experimentalist, and nowadays as a theorist I debug a certain amount of code, so I know how these things often work out. I suspect that the people at CERN have thought through most/all of the issues being raised by outsiders, but no outsider has the knowledge to ask “Hey, is that one cable loose?” or “Did you remember to pass the correct element of this array?” or whatever.

    Most issues of that sort can be caught either in calibrations or by passing simulated data to the analysis code, but if the effect is subtle it might give clean results in the sanity checks and then give a tiny discrepancy (say, one part in 10^5…) with the real data because of some coincidence with the noise.

    And no amount of outside suggestions will help them fix that sort of thing.
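    For readers who haven’t seen one, the sort of closure test mentioned above looks schematically like this (a sketch, not anything resembling OPERA’s actual analysis code):

    ```python
    import random

    TRUE_TOF = 2.4e-3    # s, the time of flight built into the fake data
    JITTER = 10e-9       # per-event timing noise, s (made up)

    def simulate_event():
        t_emit = random.uniform(0.0, 1.0)
        t_detect = t_emit + TRUE_TOF + random.gauss(0.0, JITTER)
        return t_emit, t_detect

    def analyze(events):
        # Stand-in for the real analysis: average the per-event TOFs.
        return sum(td - te for te, td in events) / len(events)

    events = [simulate_event() for _ in range(100_000)]
    offset = analyze(events) - TRUE_TOF
    print(f"recovered offset: {offset / 1e-9:.2f} ns")  # ~0 if code is sane
    ```

    The catch, as noted above, is that a subtle bug can pass a test like this cleanly and still bite on real data.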

  16. It would have been much more interesting if the OPERA group had turned this into an open source project, making raw data, code, schematics — everything — available to the public.

  17. Past experience with anomalies in particle physics that aren’t real suggests we might never know exactly what the experimentalists did wrong. Experimentalists aren’t idiots, but once things get complicated enough, very subtle mistakes can definitely fly under the radar.

    The depressing thing about this, though, is that it appears that a large fraction of phenomenologists are idiots: Inspire lists 88 papers now citing the OPERA paper (88 papers! in just a few weeks!), mostly on hep-ph, and most of them showing basic ignorance of their own field. This kind of bandwagon-following is the sign of an unhealthy field, I think. Thank god for Cohen and Glashow, at least.

  18. It would have been much more interesting if the OPERA group had turned this into an open source project, making raw data, code, schematics — everything — available to the public.

    This does seem like a perfect example of a problem that could benefit from the open approach. Particularly for the “bug in the code” failure mode.

    The depressing thing about this, though, is that it appears that a large fraction of phenomenologists are idiots: Inspire lists 88 papers now citing the OPERA paper (88 papers! in just a few weeks!), mostly on hep-ph, and most of them showing basic ignorance of their own field.

    Depending on the direction of the ignorance (I don’t follow high energy that closely), this might just be a matter of people dusting off pre-existing toy models containing tachyons. I could easily imagine that a lot of people have made models that accidentally produced faster-than-light velocities for something, which were put aside for that reason but are easily recreated and used to bang out a quick preprint.

  19. Concerning the paper referenced at the beginning of this blog entry, I have a couple of comments…

    1. There is a factual error in paragraph 2 of this scienceblogs post, which I will correct here. First the offending paragraph from the scienceblogs post:

    “Well, maybe. I’m not quite ready to call the story closed, though, for several reasons. First and foremost is the fact that the primary source I’ve seen cited touting this as the final word on fast neutrinos is a Physics Arxiv Blog
    http://www.technologyreview.com/blog/arxiv/27260/,
    which is emphatically not a reliable source. Despite the name, it is not in any way affiliated with the arxiv itself, and while it is hosted by MIT’s house magazine, the author is not an MIT physicist, but a science writer.”

    The above paragraph is partially correct. The Physics Arxiv Blog is, in fact, an MIT review site, and not the original arxiv. Had the blogger read to the bottom of the article cited by him, he would have found a link to the original article that IS found in the official Cornell arxiv.org site. Here is the original link: http://arxiv.org/abs/1110.2685

    The second statement, about the arxiv article’s writer, is also partially correct. Ronald van Elburg, the writer of the arxiv paper, is not an MIT physicist or a physics professor. Nor does he appear to be a “science writer.” (Van Elburg’s personal website: http://home.kpn.nl/vanelburg30/ ). He has a Ph.D. in physics from the Institute for Theoretical Physics at the Univ. of Amsterdam, where he wrote a dissertation involving the quantum Hall effect. He is currently a post-doc in the Sensory Cognition Group <http://www.ai.rug.nl/alice/programmes/lsc-intro/acg/index.html> at the Department of Artificial Intelligence of the Rijksuniversiteit Groningen. A look at the publications link on his website reveals only professional research papers, with nothing that could be called the work of a “science writer.” Incidentally, a glance at the same publications link does mention that, in response to various objections offered by the GPS community, Van Elburg has revised the original arxiv paper. The revision is not yet available on the arxiv, but is referenced on Van Elburg’s publications page. I might add that Van Elburg’s CV on his site has not been updated since 2007.

    2. Whether or not Van Elburg’s analysis of non-euclidean effects in GPS is correct, there still remains the question: can unaccounted-for spacetime effects explain the supposed FTL neutrinos? As it happens, I have first-hand information about this from one of the original designers of the GPS system, MIT professor Edwin Taylor, whose field is experimental GR and who has co-authored famous books on relativity with John Wheeler (the Big Black Book and others). According to the talk I heard at AAPT/APS by Taylor, the GPS system is built to account for both special and general relativistic departures from the euclidean/newtonian kinematic structure. A good discussion of this appears here… http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html , and it is also covered in Taylor and Wheeler’s book Exploring Black Holes: An Introduction to General Relativity (Addison Wesley, 2000).

    Thus, the designers of GPS already accounted for the relativistic kinematic effects of satellite motion. We are left to wonder: is the Van Elburg paper talking about an additional effect not considered by the designers, or, alternatively, is Van Elburg unaware that relativistic kinematics and dynamics are already programmed into GPS? This question does not seem to be addressed anywhere.

    In any case, my money is on Glashow (who, as we know, won the Nobel for electroweak unification), who used an argument based upon electroweak unification (I hope no one needs to be reminded that QED is fully relativistic!) to argue that the FTL claim is patently impossible. He calculated the first-order Feynman diagram for the most likely process (nu –> nu, e+, e-), and concluded that the FTL neutrino claim was falsified on that basis. The word Glashow used is “refuted.” Here’s the link to Cohen and Glashow’s paper: http://arxiv.org/abs/1109.6562

  20. Haven’t been tracking this for the last couple of weeks. What happened to the neutrino decay discussion by Sheldon Glashow?

  21. I could easily imagine that a lot of people have made models that accidentally produced faster-than-light velocities for something

    No, it doesn’t work that way. People accidentally get tachyonic scalars all the time — this means they’re perturbing around a maximum of a potential rather than the minimum, so the theory has an instability and they should recompute around the right vacuum. But you can’t just “accidentally” get a faster-than-light fermion in quantum field theory. Fermion masses can have complex phases, but this has nothing to do with their speed.

    That’s why almost all the papers on hep-ph have focused on Lorentz-violating theories, but ignored all the experimental bounds on Lorentz violation, which constrain it to be 10^{-15} or smaller in many cases. These papers are just flat-out wrong, and the authors didn’t do their homework. You can’t interpret an experiment in isolation; you have to look at all the other experiments that have been done in the last century.

    @bobh in 19: It’s completely valid and also kills pretty much any imaginable faster-than-light interpretation of the data. (Also, it’s not just Glashow — also Cohen.)

  22. 88 papers now citing the OPERA paper

    I’m not a high-energy person either, and I don’t doubt that many of these papers would never have seen the light of day outside of the arXiv. (I hear that there are some minimal kook filters installed on the arXiv, but the existence of parody sites suggests that these kook filters cannot be much more than minimal.) But how many of them are really phenomenologists trying to explain the results, and how many of them are critics? I assume that Cohen and Glashow is one of those 88 papers; it is standard practice to cite a result you are specifically critiquing.

  23. Chad Orzel,
    Hi, I posted a rather lengthy comment on this blog this morning, and notice that other comments have been posted, but mine has not. I am just beginning to get into this blogosphere place, having taught AP Physics in high school for 30 years. Any information on what happened?

    Thanks,
    Roy Lovell
    Atlanta, GA

  24. I was suggesting the open-source model as a way to make the scientific process more transparent, and also as a way to provide some discipline to the discussion. Open-source development is a structured environment. It seems like the usual model of academic/research physics is not quite up to addressing this controversy, at least not quickly. I am not sure what the OPERA group intended to achieve by saying, in effect, “we’ve looked at everything we could think of, and there is probably some mistake, so please help us find it”, when the answer could be buried deep in some detail that was not described in their paper. Of course, if some other group now reproduces their result, there will be a lot more scrutiny to see whether the experiment was really an independent test.

  25. I think it’s a mistake to interpret suggestions as insults. But I don’t think you’re stupid for making it. 😉

    That’s not to say that such an opinion is not apparent in the tone of some suggestions, but I skimmed the van Elburg preprint a few days ago, and I didn’t come away with the impression that he thought poorly of the OPERA team.

  26. Experimentalists are not generally idiots, but even non-idiots make stupid errors, and the results look more consistent with a timing error than with new physics. Thus, this article has at least a veneer of plausibility, though without more information it’s very hard to tell what the error actually is.

  27. Also, in response to Roy’s comment (which I didn’t read as I was freeing it from the spam filter): you have misunderstood the sentence you quote. The person who is not a physicist but a science writer is the blogger who runs the Physics Arxiv Blog, not van Elburg, the author of the much-cited preprint.

  28. I’m a pulsar astronomer, and I actually wrote a blog post about the time synchronization issue.

    One key point that nobody seems to acknowledge is that these folks are not just using GPS; they’re using GPS-corrected atomic clocks. What’s more, they had the synchronization of their time standards tested by two separate metrology groups, who found it was fine. I’m pretty sure it’s not the time standards; we use similar gear to timestamp our pulsar observations (which, amusingly, provide some of the most stringent tests of relativity to date).

    The experimenters are not idiots, and synchronizing time standards between distant labs is a solved problem. There are plenty of other places to make a subtle mistake; people are seizing on GPS because it’s something they can understand. In software-development circles we call this “bike-shedding”.

  29. If you look at how the data was handled in Brunetti’s thesis

    http://operaweb.lngs.infn.it:2080/Opera/ptb/theses/theses/Brunetti-Giulia_phdthesis.pdf

    (33 MB file)

    There is unexplained “filtering” applied to the proton beam waveform to remove 30-60 ns oscillations that are called “noise”, but the source of the “noise” is never really explained (see figures 8.4 and 8.5).

    There is no justification in the thesis or elsewhere (that I could find) for “filtering” the data to remove the “noise”. If your data has lots of 30-60 ns noise on it, you can’t just “filter” it and get sub-10 ns accuracy.

    Maybe I am not understanding what was done, but I don’t understand how “filtering” is an acceptable way of generating 10 ns accurate data.

  30. daedalus2u, this might be one of those very rare times when more than 4 people read a dissertation…

  31. The OPERA experiment is not the first one refuting Einstein’s 1905 light postulate. In 1887 the Michelson-Morley experiment UNEQUIVOCALLY refuted the assumption that the speed of light is independent of the speed of the light source and confirmed the equation c’=c+v given by Newton’s emission theory of light and showing how the speed of light varies with v, the speed of the source relative to the observer:

    http://philsci-archive.pitt.edu/1743/2/Norton.pdf
    John Norton: “In addition to his work as editor of the Einstein papers in finding source material, Stachel assembled the many small clues that reveal Einstein’s serious consideration of an emission theory of light; and he gave us the crucial insight that Einstein regarded the Michelson-Morley experiment as evidence for the principle of relativity, whereas later writers almost universally use it as support for the light postulate of special relativity. Even today, this point needs emphasis. The Michelson-Morley experiment is fully compatible with an emission theory of light that CONTRADICTS THE LIGHT POSTULATE.”

    It can be shown (by applying the equivalence principle) that the Pound-Rebka experiment also confirms the emission theory and is incompatible with the light postulate.

  32. First, the guy is a theoretical physicist. Second, GPS is engineered for position estimation. It makes a best effort at time calibration, but is based on a burst of relative (not absolute) timings yielding the position estimate for your navigator (which then even needs Kalman filtering on top of this to be usable at all). So it is an absolute riddle to me how physicists in a multi-million dollar experiment would trust an engineers’ toy which was not designed to be a calibrated clock in the first place. Third: no sign has been given to date that the Lorentz contraction was properly corrected for. Fourth: and why isn’t there, e.g., a halfway detector for midpoint calibration at a presumably synchronized reference time Tr? ‘Certainty’ cannot be bought, especially not from a government-based metrology institute as the calibrator (METAS). Too much trust kills, says the engineer.

  33. What they need to do is add more hardware at CERN so they can send neutrino beams to the various other neutrino detectors around the world.

    IceCube would be a good first target. It is big enough that the statistics might be a bit better.

  34. Chad, I had exactly the same reaction as you, and mentioned this in a comment on Phil Plait’s blog.

    The kinematic argument by Glashow is a serious one. A new preprint mentioned by Dorigo here seems to confirm that the energy distribution of the neutrinos arriving in Gran Sasso doesn’t show any obvious sign of bremsstrahlung effects. That implies that the ICARUS folks see neutrino energies compatible w/ special relativity expectations. So, either OPERA is wrong (most likely option by far, but it would still be nice to know how and why), or ICARUS is looking at different neutrinos (I haven’t looked in detail to figure this out), or Glashow’s bremsstrahlung process is forbidden somehow (i.e. kinematics and the weak interactions do weird things for superluminal neutrinos).

  35. daedalus2u: What they need to do is add more hardware at CERN so they can send neutrino beams to the various other neutrino detectors around the world.

    That sounds like a good idea: repeat the experiment by aiming at a substantially more distant target facility and compare the results. If the result is yet again ~60 ns, then it points to errors in the apparatus somewhere. If it increases in relation to the distance….. (insert spooky music here).
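    The rough numbers behind that test (a sketch, not OPERA’s analysis): a fixed ~60 ns lead over 730 km implies a fractional speed excess, and if the effect is real, the lead should grow in proportion to the baseline.

    ```python
    c = 299_792_458.0
    baseline = 730e3     # CERN -> Gran Sasso, m
    lead = 60e-9         # reported early arrival, s

    tof_light = baseline / c
    excess = lead / tof_light    # (v - c)/c, roughly
    print(excess)                # ~2.5e-5

    # Expected lead at a hypothetical baseline ten times longer:
    print(excess * 10 * baseline / c / 1e-9)   # ~600 ns
    ```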

  36. What they need to do is add more hardware at CERN so they can send neutrino beams to the various other neutrino detectors around the world.

    IceCube would be a good first target. It is big enough that the statistics might be a bit better.

    I suspect that the collimation of the beam isn’t good enough for this to be practical. Over the 730 km to Gran Sasso it expands to several kilometers in width. Bump that distance up by an order of magnitude (give or take– I’m not sure what the straight-line distance from CERN to Antarctica would be, but it’ll be on the order of the radius of the Earth, which is 6400 km), and you’ve got a very diffuse beam. At which point you need a long integration time to build up substantial signal, even with a detector the size of IceCube. Remember, it took them three years (give or take) to get the data they have– depending on the expansion rate of the beam and the geometry of everything, you might be looking at an experiment lasting longer than the half-life of a graduate student.
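    To put rough numbers on the collimation problem (all assumptions: a few-kilometer beam radius at Gran Sasso and a ~10,000 km chord to a hypothetical far detector; these are illustrative guesses, not CNGS beam specs):

    ```python
    d1, d2 = 730e3, 1.0e7    # baselines, m: Gran Sasso vs. a far detector
    spot1 = 3e3              # assumed beam radius at Gran Sasso, m

    spot2 = spot1 * (d2 / d1)     # beam radius grows linearly with distance
    flux_ratio = (d1 / d2) ** 2   # flux per unit area falls as 1/distance^2

    print(spot2 / 1e3)       # ~41 km beam radius at the far site
    print(1 / flux_ratio)    # ~190x the integration time for the same events
    ```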

  37. The Gran Sasso detector was only ~6.7 meters x 6.7 meters in cross section with about 20 tons of lead per m2. The diameter of the Earth is ~12,700 km, so the increased dispersion would be 12,700/730 = 17.

    A factor of 17 means they need ~120 meters x 120 meters in detector cross section to get equivalent detection. A depth of 20 tons of water/m2 would be ~20 meters of ice. The IceCube array is ~1 km deep and in cross section, so my guess is they would have maybe 3,000 times the detection efficiency (if I have calculated it right), 8x8x50 = 3000.

    The distance is also a lot longer, so the time delay measurement would be a lot easier.

  38. A factor of 17 means they need ~120 meters x 120 meters in detector cross section to get equivalent detection. A depth of 20 tons of water/m2 would be ~20 meters of ice. The IceCube array is ~1 km deep and in cross section, so my guess is they would have maybe 3,000 times the detection efficiency (if I have calculated it right), 8x8x50 = 3000.

    IceCube is indeed 1km deep and 1km on a side, in terms of detector volume. However, the neutrino energies probed by OPERA (tens of GeV) are well below the effective detection threshold of IceCube (TeV or greater). The central, more densely-instrumented region of IceCube (DeepCore) can in principle detect neutrinos down to tens of GeV, but the effective collecting volume is still not equivalent to even its smaller geometric volume, given that the detector is much more sparsely instrumented (several meters between individual “pixels”) than something like OPERA.
    Another problem with using IceCube to test the OPERA result: how is CERN going to point its neutrino beam down to aim at the South Pole? It can aim for LNGS because it is roughly a horizontal path (and thus close enough to the plane of the accelerator ring), but the CERN-IceCube path is a much greater chord through the earth.
    What OPERA really needs is a much closer detector like what MINOS uses in its beam from Fermilab. The Near/Far Detector pairing gets rid of many of the systematics that plague an experiment with only a single (Far) detector like OPERA.

  39. “the actual event timing comes from clocks on the ground that are at rest with respect to the source and detector”

    Only they aren’t.

    Each clock is rotating around its respective source/detector, and the source and detector are rotating around each other. They are not in an inertial coordinate system, let alone the same inertial coordinate system. One would hope they considered all of this (see @30) but it isn’t clear from the article that they did. Remember, the point of the original preprint was to get input from groups who might consider something they did not.

  40. The Gran Sasso detector was only ~6.7 meters x 6.7 meters in cross section with about 20 tons of lead per m2. The diameter of the Earth is ~12,700 km, so the increased dispersion would be 12,700/730 = 17.

    Actually, what matters for the flux is the area of the beam, which would go as the square of the radius. So it’s a factor of 289. Then there’s the energy and resolution issues KWK mentions.

    Each clock is rotating around its respective source/detector, and the source and detector are rotating around each other. They are not in an inertial coordinate system, let alone the same inertial coordinate system. One would hope they considered all of this (see @30) but it isn’t clear from the article that they did.

    True, but that’s not the analysis in the van Elburg preprint. The shift he’s talking about appears to be based on the notion that the timing is coming from the GPS satellites, which it is not. The timing of the source events is determined by an atomic clock on the ground at the source, and the timing of the detector events is determined by an atomic clock on the ground at the detector. These are synchronized using common-view GPS, which is a well-established method.

    It may be that there is some synchronization issue due to the non-inertial nature of the rotating Earth, but that’s a different issue than the Lorentz contraction argument that’s gotten all the press. I think that shift may also go in the wrong direction, but I could be wrong about that.
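    Since common-view GPS keeps coming up, here is a minimal sketch of why it works (illustrative numbers only; real common-view time transfer also has to model propagation delays carefully): both stations record the offset of their local clock against the same satellite at the same moment, and differencing the two records cancels the satellite’s clock error.

    ```python
    sat_error = 37e-9       # unknown error in the satellite clock, s
    offset_A = 120e-9       # ground clock A vs. true time (unknown to A)
    offset_B = 60e-9        # ground clock B vs. true time (unknown to B)

    # Each station measures (local clock - satellite time) simultaneously:
    meas_A = offset_A - sat_error
    meas_B = offset_B - sat_error

    # The difference gives A relative to B; the satellite term drops out:
    print((meas_A - meas_B) / 1e-9)   # 60.0 ns, independent of sat_error
    ```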

  41. Why don’t they just do the control experiment I suggested in Comment 11? This would settle the issue. (Maybe there are technical reasons why they can’t do it…)

  42. Why don’t they just do the control experiment I suggested in Comment 11? This would settle the issue. (Maybe there are technical reasons why they can’t do it…)

    The SN1987A neutrinos were some orders of magnitude lower in energy than the OPERA beam, and it’s not a trivial matter to reduce the energy of the beam, I think. The energy is determined by the energy of the incoming protons, and the proton source they use is engineered to provide a specific energy.

    And, of course, even if they did create a lower-energy beam, it took them three years (give or take) to get the data they have now, so it would take quite a while to get enough data for a good test.

  43. @Anne at comment 30, the pulsar astronomer:

    Surely, the experimenters are not idiots.

    But do you think that they are looking the whole time at their devices?

    Of course not! They have more important things to focus on.

    – and what if they have not seen a small (but important) red light on the GPS?

    http://www.decodesystems.com/help-wanted/ttxlak-1.jpg

    In the report:

    “However, after some period of time the unit shows a red status LED and reports that the GPS is unlocked.”

    When the device is unlocked and still APPEARS to show the correct time, how would this be interpreted?

  44. Chad. Nice article. With regard to your comment “There’s still an issue here, of course, as Matt McIrvin reminded me on Google+, because it’s impossible to perfectly synchronize clocks in a rotating frame, but that doesn’t seem to be what this attempted explanation is about.”: there was an earlier paper by Contaldi (http://arxiv.org/abs/1109.6160) that first poses the question about clocks in rotating frames. It is a nice paper, but unfortunately Contaldi’s assumption about how OPERA synched its clocks was not correct. Too bad.

    There are now about 100 papers on the theory of superluminal neutrinos, but only a handful on potential experimental errors/corrections applicable to the measurement. I hope that ratio changes soon.

  45. You don’t need to be an idiot to make a “stupid” mistake. The error that lost the Mars Climate Orbiter was a doozy, but I doubt it was made by an idiot. A fool, perhaps, but not an idiot.

    It is precisely because we think a mistake would be stupid that we don’t consider that we might have made it, but we all make those mistakes. We all do things as embarrassing as forgetting to turn on a piece of equipment. That sort of thing usually slaps you in the face right away, but if the mistake results in something subtle, we tend to look for some subtle error. We look for relativistic effects or arcana in the workings of GPS, but it could be as simple as an out of spec coaxial cable which was not calibrated, and was assumed to perform exactly like an identical cable.

  46. My initial reaction to this story was that it was probably a timing error on the creation event, since CERN doesn’t need to know the absolute global atomic time of an event for its purposes, but just needs good relative precision for control purposes. They throw away a lot of data, so how is the creation time of the neutrinos being determined? Unless the physicists doing the neutrino experiment are doing an independent detection of the event with a relatively simple detector, I would not trust the timing of the event given to them by the very complicated machine that is the LHC. This is basically the halfway detector method mentioned by DIY, but it would just measure some of the copious other particles produced.

  47. OK, I finally looked at the thesis, and it appears they are doing their own independent measurement of the creation event. I did not understand that they are using independent beam lines and are not actually using the LHC. Makes sense, as you can’t really aim that. So there goes that theory, though they still have to check the timing on that, which they probably have done many times. A halfway detector for the neutrinos would make sense, though given the expense that is not going to happen. Curiouser and curiouser.

  48. William Fairholm,

    Do you realize that a half-way detector would require that the detector be dug somewhere 10 km deep?

Comments are closed.