There have been a lot of pixels spilled over this faster-than-light neutrino business, so it might not seem like something I should take time away from pressing work to write up. It is the story of the moment, though, and too much of the commentary I’ve seen has been of the form “I am a {theorist, journalist} so hearing about experimental details gives me the vapors” (a snarky paraphrase, obviously). This suggests that there’s still room for a canine-level write-up going into a bit more depth about what they did and where it might be wrong.
So, what did those jokers at CERN pull this time? Isn’t it bad enough that they want to feed us all into a black hole, now they’re messing with the speed of light? First of all, this wasn’t a CERN experiment in the same way that the LHC is. The experiment reporting on this uses a particle accelerator at CERN, but it’s actually an Italian collaboration who did the experiment and data analysis, with the principal detector based at the Gran Sasso underground laboratory in the Apennines. The name of the collaboration is OPERA, which is one of those ghastly pseudo-acronyms where they use the second letter of one of the words in order to force it to spell something.
OK, fine, what have the Italians done? Well, the goal of their experiment is to look for an “oscillation” of neutrinos. The neutrinos are created at CERN in one of their three varieties, and on the way to Gran Sasso, they can change character and end up being detected as a different type. (They start as muon neutrinos and end up as tau neutrinos, or at least that’s the plan. It’s not terribly important for this experiment.)
As part of the preliminary analysis for their main experiment, they looked at about three years’ worth of data, and noticed something odd: the neutrinos in their experiment seem to be moving slightly faster than the speed of light. The difference is pretty big in absolute terms– about 7500 m/s, or nearly 17,000 mph– but it’s only about 1/40,000th of the speed of light. Still, the difference they see is many times larger than their uncertainty, and they can’t figure out why, so they’re making their results public.
Wow. How do they measure that, anyway? Conceptually, what they did is the most basic kind of velocity measurement, the sort of thing we talk about in the first few weeks of introductory physics. They measured the distance between CERN and Gran Sasso, and divided that by the time between when the neutrinos are created and when they’re detected to get the speed at which the neutrinos covered that distance.
The implementation, of course, is a little more complicated than that…
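The headline arithmetic, though, really is just that intro-physics division. As a sanity check, here it is worked out in a few lines of Python, using the baseline quoted later in the comments and a ~60 ns early arrival of the kind the paper reports:

```python
# Back-of-the-envelope version of OPERA's velocity measurement.
c = 299_792_458.0        # speed of light, m/s
d = 730_534.61           # CERN-to-Gran Sasso baseline, m (quoted in the comments below)
dt_early = 60.7e-9       # neutrinos arrive roughly 60 ns earlier than light would, s

t_light = d / c              # light travel time: about 2.44 ms
t_nu = t_light - dt_early    # inferred neutrino travel time
v_nu = d / t_nu              # inferred neutrino speed

print((v_nu - c) / c)    # ~2.5e-5, i.e. about 1/40,000 of c
print(v_nu - c)          # ~7500 m/s faster than light
```

Reassuringly, the two print statements reproduce the "about 7500 m/s" and "1/40,000th of the speed of light" figures quoted above.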
But if they know when a neutrino was created and when it was detected, how could they screw this up? That’s the first complication. Neutrinos are ridiculously difficult to detect, so they have to throw a whole lot of them at their detector to see anything. In the entire 3-year dataset they analyzed, they only have around 16,000 neutrinos total.
This means that they don’t actually have the ability to say when a specific neutrino was created, because there are huge numbers of neutrinos created for every one that they detect.
How can they possibly claim anything about the speed, then? What they do is to compare the distribution of times when they detect neutrinos to the distribution of times when neutrinos were created in the source. The way this works is that they blast a moderately high-energy proton beam into a stack of graphite, which creates some exotic particles that eventually decay into other things, spitting out neutrinos along the way. The other decay products get blocked by the several hundred kilometers of rock between CERN and Gran Sasso, but the neutrinos interact so weakly with ordinary matter that they go right through, and a handful of them get detected.
They can’t tell much of anything about exactly when the neutrinos are created, but they know that the neutrinos had to come from one of the protons in their original beam. And they have very good diagnostics on the proton beam. So, they argue that the distribution of arrival times of the neutrinos ought to have the same shape as the time distribution of the protons hitting the target, so they compare those distributions as a whole, which looks sort of like this:
(That’s Fig. 11 from the freely available arXiv preprint.)
The black points (with error bars) are the total number of neutrinos detected in a given 150 ns window around a particular arrival time (on the horizontal axis). The red lines show the shape of the proton beam distribution in time. At the top, there’s no correction for the time of flight, and you can see that the two distributions are shifted relative to one another (the two graphs are for different subsets of their data). At the bottom, they’ve put in a correction for the time of flight (basically, moving all the times for the red curve left by a microsecond), and you see that the red line fits the observed distribution of neutrinos pretty nicely.
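OPERA’s real analysis is a maximum-likelihood fit of the measured proton waveforms to the neutrino arrival times, which is more sophisticated than anything I can do in a few lines. But here’s a deliberately crude toy version of the same idea, with an invented flat-topped pulse shape and made-up numbers, just to show how a shift can be extracted from sparse arrival times:

```python
# Toy version of the shape comparison (illustrative only - OPERA's actual
# fit is a maximum-likelihood analysis using the measured proton waveforms).
# We make a fake "proton" time profile, draw ~16,000 "neutrino" arrival
# times from a copy shifted 60 ns early, then scan for the best shift.
import random
random.seed(1)

def profile(t):
    """A crude flat-topped 10.5-microsecond pulse (units: ns; made up)."""
    return 1.0 if 0 <= t <= 10500 else 0.0

true_shift = -60.0   # ns; arrivals come 60 ns "early"

# Draw arrival times from the shifted profile by rejection sampling:
events = []
while len(events) < 16000:
    t = random.uniform(-1000, 11500)
    if random.random() < profile(t - true_shift):
        events.append(t)

# Scan candidate shifts; pick the one whose shifted profile "contains"
# the most events (a crude stand-in for a likelihood fit):
best = max(range(-200, 201), key=lambda s: sum(profile(t - s) for t in events))
print(best)   # a value near -60 ns
```

Even this cartoon recovers the shift to within a few nanoseconds; the real fit has to contend with the actual lumpy proton waveform, which is why they show the overlaid distributions in Fig. 11.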
So, that’s the thing they call the time of flight? Exactly. If you look at that time, and compare it to what you expect for particles moving at the speed of light, you find that it’s about 60 ns too short, suggesting the neutrinos were moving faster than the speed of light.
OK, so how did they get the time wrong? Well, that’s the problem. They think they can account for all the uncertainties in the timing process, and it’s not enough to cover the observed shortfall. In fact, the shift is about six times bigger than they think their measurement uncertainty is.
How do you get that kind of timing accuracy, anyway? I mean, I can’t buy a watch that’s good to better than a second. How do you keep track of things to the nanosecond? Nanosecond timing isn’t all that difficult by itself– you can get an atomic clock easily enough. What’s tricky is synchronizing the timing at two different places. That’s a fiendishly difficult problem, and it’s more or less that sort of thing that led to the whole theory of relativity.
And you have a book about that coming soon, we know. How do they do the synchronization? They take advantage of GPS for that. The Global Positioning System is a network of satellites containing atomic clocks, each beaming out a signal saying what time it is according to that clock. Measuring the difference in times between several satellites lets you figure out how far you are from each of them, and since the orbits of the satellites are well known, that tells you where you are on the surface of the Earth.
They use a “common view” GPS system to coordinate their timing. They have an atomic clock at each end of the experiment, and as the name suggests, they use the time broadcast by GPS satellites that are visible to both ends at the same time as a reference to make sure their clocks are synched up. The difference between the two sites is around two nanoseconds, much smaller than the arrival time shift they see.
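To get a feel for why common-view mode beats each site just trusting its own GPS fix, here’s a toy sketch. The numbers are invented for illustration (this is not OPERA’s actual calibration data); the point is just that the satellite’s own clock error drops out when you difference the two sites’ measurements:

```python
# Toy sketch of "common view" GPS time transfer (invented numbers).
# Each site records (local clock - time broadcast by a satellite both
# sites can see), after correcting for signal travel time to each site.
sat_clock_error = 37.2e-9   # satellite clock is off by 37.2 ns (unknown to the ground)
cern_offset     = 12.0e-9   # CERN clock ahead of "true" time by 12 ns
lngs_offset     = 9.7e-9    # Gran Sasso clock ahead by 9.7 ns

cern_reading = cern_offset - sat_clock_error
lngs_reading = lngs_offset - sat_clock_error

# The satellite's error cancels in the difference, leaving just the
# offset between the two ground clocks:
clock_sync_offset = cern_reading - lngs_reading
print(clock_sync_offset)   # ~2.3 ns
```

That cancellation is the whole trick: neither site needs to know the satellite’s clock error, only that both are looking at the same satellite at the same time.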
So, is that the only problem with the timing? Not quite. There’s also some ambiguity about where the neutrinos are created, which might cause a problem. The exotic particles created by the proton beam pass down a 1-km tunnel, and the decay that produces the neutrinos happens somewhere in there, but they can’t tell exactly where.
Wait a minute– their source is a mysterious 1km tunnel, but that doesn’t throw off their timing accuracy? The claim is that the particles that decay to make the neutrinos are also moving at very nearly the speed of light, and so pass down the tunnel at almost the same speed as the neutrinos. This makes the measurement less sensitive than you might think to exactly where the neutrinos are created– what matters is not the total uncertainty in the position, but the difference in travel time for a neutrino created partway up the pipe versus one created all the way at the end, and the speed of the parent particles is so close to the speed of the neutrinos that there’s very little difference in travel time. They’ve checked this with simulations, and say it’s a tiny effect– a fraction of a nanosecond.
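You can put rough numbers on this yourself. The sketch below assumes a ~17 GeV parent pion, which is a plausible but purely illustrative choice on my part, not a figure from the paper:

```python
# Rough check (my own numbers, not OPERA's simulation): how much does it
# matter *where* in the 1 km decay tunnel the neutrino is born? A neutrino
# born at the far end was "carried" down the tunnel by its ultra-relativistic
# parent, so the two extremes differ only by the parent's tiny speed deficit.
import math

c = 299_792_458.0
L = 1000.0                 # decay tunnel length, m
E_pion = 17e9              # assumed parent pion energy, eV (illustrative)
m_pion = 139.57e6          # pion rest energy, eV

gamma = E_pion / m_pion            # Lorentz factor, ~120
beta = math.sqrt(1 - 1/gamma**2)   # parent speed in units of c

# Extra time if the parent covers the whole tunnel instead of the neutrino:
delta_t = L / (beta * c) - L / c
print(delta_t)             # ~1e-10 s, i.e. about a tenth of a nanosecond
```

A tenth of a nanosecond against a 60 ns anomaly, which is why they can shrug off a 1 km ambiguity in the creation point.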
OK, that’s two places the timing might be wrong but they say it isn’t. How about the distance? Could they have screwed that up? That’s the other obvious source of error, but it’s hard to see how. Again, they have GPS to use for this, and while the accuracy of the position obtained by GPS for a moving receiver, like in your phone, is only several meters, if you’re trying to measure the distance between two fixed points, and monitor it over a long time, you can get really good accuracy. They claim to have the distance down to 20cm, which is a bit less than a nanosecond at the speed of light.
Twenty centimeters? Really? Really. They even provide a graph showing their measurements over the three-year run, which pick up a slow change due to continental drift, and a dramatic jump due to an earthquake in 2009:
(That’s Fig. 7 from the paper, and the greyish background is part of the figure. Do not adjust your monitor.)
You know what? Living in the future is pretty awesome. Yes, yes it is.
OK, so, what, did the distance shrink in the winter or something? It doesn’t look that way. They looked for changes in the arrival time for different times of day, different times of year, and so on, and didn’t find any significant difference. They’ve also checked the distance by other methods– sending light down fiber optics, etc.– and their answers agree. As far as they can tell, they know the distance to twenty centimeters out of 730 km, full stop.
So, what else could be wrong? Well, that’s the problem. They’ve checked all the obvious things, and they all seem to hang together. Which is why they’re putting this result out there, knowing full well that it disagrees with just about everything else. They’re hoping that some clever person will spot a mistake, or, failing that, that another experiment will do the same test (there’s one in Japan and one in the US), and see if they get the same result.
Doesn’t this conflict with the supernova stuff Macho Ethan was on about? Probably, but maybe not. The neutrinos involved there had very low energies, compared to the ones used here. It could be that really high-energy neutrinos travel at a different speed than really low-energy ones. They don’t see any such dependence on energy in their results, but they can’t check that much of an energy range, so it might be that there’s something going on there.
Could it really be that neutrinos are moving faster than light? Maybe. Theoretical physicists, particularly theoretical particle physicists, are nearly infinitely flexible. If another experiment finds the same result, I have no doubt that theorists will find a way to accommodate it.
It’d be deeply, deeply weird, though, not least because the existence of superluminal particles that interact with ordinary matter (as neutrinos do, albeit weakly) opens the door to violations of causality– effects happening before the things that caused them, and that sort of thing. This wouldn’t be a big loophole– the speed difference is tiny, and neutrinos interact extremely weakly– but it’s the kind of philosophical problem that would really bother a lot of people.
So, if you had money to bet on it, bet that this result is wrong. But these guys aren’t complete chumps, and if something is wrong with their experiment, it’s something pretty subtle, because they’ve checked all the obvious problem areas carefully.
The OPERA Collaboration: T. Adam, N. Agafonova, A. Aleksandrov, O. Altinok, P. Alvarez Sanchez, S. Aoki, A. Ariga, T. Ariga, D. Autiero, A. Badertscher, A. Ben Dhahbi, A. Bertolin, C. Bozza, T. Brugiére, F. Brunet, G. Brunetti, S. Buontempo, F. Cavanna, A. Cazes, L. Chaussard, M. Chernyavskiy, V. Chiarella, A. Chukanov, G. Colosimo, M. Crespi, N. D’Ambrosios, Y. Déclais, P. del Amo Sanchez, G. De Lellis, M. De Serio, F. Di Capua, F. Cavanna, A. Di Crescenzo, D. Di Ferdinando, N. Di Marco, S. Dmitrievsky, M. Dracos, D. Duchesneau, S. Dusini, J. Ebert, I. Eftimiopolous, O. Egorov, A. Ereditato, L. S. Esposito, J. Favier, T. Ferber, R. A. Fini, T. Fukuda, A. Garfagnini, G. Giacomelli, C. Girerd, M. Giorgini, M. Giovannozzi, J. Goldberga, C. Göllnitz, L. Goncharova, Y. Gornushkin, G. Grella, F. Griantia, E. Gschewentner, C. Guerin, A. M. Guler, C. Gustavino, K. Hamada, T. Hara, M. Hierholzer, A. Hollnagel, M. Ieva, H. Ishida, K. Ishiguro, K. Jakovcic, C. Jollet, M. Jones, F. Juget, M. Kamiscioglu, J. Kawada, S. H. Kim, M. Kimura, N. Kitagawa, B. Klicek, J. Knuesel, K. Kodama, M. Komatsu, U. Kose, I. Kreslo, C. Lazzaro, J. Lenkeit, A. Ljubicic, A. Longhin, A. Malgin, G. Mandrioli, J. Marteau, T. Matsuo, N. Mauri, A. Mazzoni, E. Medinaceli, F. Meisel, A. Meregaglia, P. Migliozzi, S. Mikado, D. Missiaen, K. Morishima, U. Moser, M. T. Muciaccia, N. Naganawa, T. Naka, M. Nakamura, T. Nakano, Y. Nakatsuka, D. Naumov, V. Nikitina, S. Ogawa, N. Okateva, A. Olchevsky, O. Palamara, A. Paoloni, B. D. Park, I. G. Park, A. Pastore, L. Patrizii, E. Pennacchio, H. Pessard, C. Pistillo, N. Polukhina, M. Pozzato, K. Pretzl, F. Pupilli, R. Rescigno, T. Roganova, H. Rokujo, G. Rosa, I. Rostovtseva, A. Rubbia, A. Russo, O. Sato, Y. Sato, A. Schembri, J. Schuler, L. Scotto Lavina, J. Serrano, A. Sheshukov, H. Shibuya, G. Shoziyoev, S. Simone, M. Sioli, C. Sirignano, G. Sirri, J. S. Song, M. Spinetti, N. Starkov, M. Stellacci, M. Stipcevic, T. Strauss, P. Strolin, S. Takahashi, M. Tenti, F. Terranova, I. Tezuka, V. Tioukov, P. Tolun, T. Tran, S. Tufanli, P. Vilain, M. Vladimirov, L. Votano, J.-L. Vuilleumier, G. Wilquet, B. Wonsak, J. Wurtz, C. S. Yoon, J. Yoshida, Y. Zaitsev, S. Zemskova, & A. Zghiche (2011). Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. arXiv:1109.4897v1
Hi
Er
“they know the distance to twenty centimeters out of 7500 km, full stop.”
Isn’t it 730 Km?
Otherwise great article.
Uh
730534.61 meters. Full stop. 😉
from http://arxiv.org/abs/1109.4897
I meant to schedule this for tomorrow, giving me time to proofread a little more. Unfortunately, I scheduled it for today instead, on my way out the door to walk Emmy, and it published before I got back, typos and all.
So you’re saying that if one recorded the GPS coordinates of 2 points over time then you could use the average distance between them to achieve a very accurate result? Just using the normal GPS we all have in our phones?
I have posted an interview with two of the CERN study authors.
http://www.lukesci.com/2011/09/24/interview-with-cern-neutrino-study-authors/
Most of matter here on earth is empty space, but a very small amount is the nuclei of atoms. Could the timing anomaly be reflective of a rapid propagation of the neutrinos through the nuclei of atoms?
How well does GPS work underground? I gather that CERN is underground and most neutrino detectors are built underground as well. I can’t get my GPS to work squat when I’m in a forest with just tree canopies overhead, but I’m just using the unit in my phone.
Otherwise, I’ll assume that they placed the GPS unit outside their installation and carefully measured the distance from each corresponding GPS units to the emitter and the detector respectively. (I didn’t see a description of this in the paper, but I assume it is there, or perhaps, they do have much better GPS units than mine.)
Does the GPS distance calculation give the distance across the curved surface of the earth, or of the straight line that the neutrinos traveled?
What about gravitational waves hitting earth, introducing extra spacetime curvature? The effect would probably be way too small.
Maybe the speed of light has changed?
Maybe the ultimate speed limit is not the speed of light?
Oh, but they were taking data for 3 years, right? So it definitely can’t be due to gravitational waves.
Hmmm.
Kaleberg raises a good point. GPS signals have an error budget, analogous to an uncertainty principle. They could optimize for depth or for surface coordinate accuracy when building the system and chose to optimize for surface coordinates since that is much more useful in general. The accuracy of surface elevations is so degraded that you can’t use the elevation readings for oilfield geophysics work, at least with handheld units/cellphone, etc. So there may be unknown uncertainty there?
I still have money on experimental error somewhere.
Also, that graph showing stability of position readings tells us nothing useful. It tells us that the location of one receiver is slowly changing, but we care about the displacement between positions, not the constancy of the endpoints…
The reason I say I believe there’s an error somewhere is simply that they are using GPS positions to claim accuracy in time and distance.
However, if this result is not in error, then they will have to derive a mathematical description for how to describe/update the ephemerides for the GPS satellites that does not fundamentally rely on relativity and also provides this result. The current system fundamentally depends on relativity, no?
What would be the influence of the Earth’s gravity on the experiment? I gather the neutrinos are moving through a changing gravitational field. Wondering how that might affect things.
Yes, it’s probably a measurement error.
But let’s speculate it’s not. Given the supernova data, it would seem to me the result doesn’t say that neutrinos move faster than the speed of light but that some of the decaying particles that produce the neutrinos move faster than the speed of light. You could save some ns then before they even decay.
Could the presence of matter, the substance of the Earth, affect the speed of the neutrinos?
Just trying to suggest a possible explanation of why the supernova neutrinos arrived after the photons – assuming the OPERA results are valid.
Essentially: could the presence of matter affect either one or more of the compact spatial dimensions (assuming a version of M-Theory is correct) or how the neutrinos interacted?
Another proofreading note:
… with the principal detector based at the Gan Sasso underground laboratory in the Alps.
The Apennines. The beam shoots through the Alps. Or below them. (Geologically that is like mixing the Appalachians and the Rockies.)
Otherwise, my guess is that the problem is in timing at the CERN end of things.
So far I’ve seen the experiment described as ‘European’, ‘international’, ‘French’, ‘Italian’, ‘Swiss’, and ‘Franco-Italian’.
Since they are measuring the times using averages of an entire collection is it possible that you can get some new physics without tossing out relativity by having the neutrinos that are detected being preferentially created earlier in the burst?
Kaleberg: How well does GPS work underground? I gather that CERN is underground and most neutrino detectors are built underground as well. I can’t get my GPS to work squat when I’m in a forest with just tree canopies overhead, but I’m just using the unit in my phone.
They used a more sophisticated system than a standard phone unit, and had the advantage of time to wait– the data collection took three years, and they didn’t have to worry about the position changing dramatically on short time scales. But yes, they used reference points aboveground.
Also, that graph showing stability of position readings tells us nothing useful. It tells us that the location of one receiver is slowly changing, but we care about the displacement between positions, not the constancy of the endpoints…
Well, if they track both endpoints, then it’s a simple matter to track the displacement. Which, I believe, is what they do. I’m not sure why they show that graph for only one endpoint, but it probably just looks more impressive in that format. There are some oddities about the way the paper is put together (the wildly different styles of the figures, for example) that I think come from the fact that this isn’t a real submitted-for-publication paper.
Mausy: What would be the influence of the Earth’s gravity on the experiment? I gather the neutrinos are moving through a changing gravitational field. Wondering how that might affect things.
Minimally. What would matter for general relativity would be the difference in gravitational potential between the two ends of the experiment. A 1 km change in altitude between the two endpoints would change the rate at which their clocks tick by about one part in 10^13, nowhere near the one part in 10^5 of the result they see.
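For the record, that one-part-in-10^13 figure is just the weak-field time-dilation estimate gΔh/c², with an assumed 1 km altitude difference:

```python
# Order-of-magnitude check of the gravitational time-dilation estimate,
# for a hypothetical 1 km altitude difference between the two clocks.
c = 299_792_458.0
g = 9.81          # surface gravitational acceleration, m/s^2
dh = 1000.0       # assumed altitude difference, m

frac_rate_shift = g * dh / c**2
print(frac_rate_shift)   # ~1.1e-13, versus the ~2.5e-5 effect observed
```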
Bee: But let’s speculate it’s not. Given the supernova data, it would seem to me the result doesn’t say that neutrinos move faster than the speed of light but that some of the decaying particles that produce the neutrinos move faster than the speed of light. You could save some ns then before they even decay.
They’d need to be moving a lot faster than light, though, because of the argument they make about the relativistic speed of the parent particles and the time uncertainty. If the difference is coming from the parent particles moving faster through the 1km decay pipe, you’d need them to be moving at something like 1.02c. I suspect that’s ruled out by the accelerator experiments that discovered the parent particle types in the first place.
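That 1.02c figure is a quick estimate you can reproduce: ask how fast something would have to cross the 1 km pipe to save the full 60 ns relative to light.

```python
# How fast would the parent particles have to go to fake a 60 ns early
# arrival entirely inside the 1 km decay tunnel? (Rough estimate.)
c = 299_792_458.0
L = 1000.0            # decay tunnel length, m
dt_saved = 60e-9      # the shortfall to explain, s

t_light = L / c                        # ~3336 ns to cross the tunnel at c
v_needed = L / (t_light - dt_saved)    # speed needed to cross 60 ns faster
print(v_needed / c)                    # ~1.02, i.e. about 1.02c
```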
The reason I say I believe there’s an error somewhere is simply that they are using GPS positions to claim accuracy in time and distance.
I haven’t looked at the distance question yet, but they take the time synchronization pretty seriously. Briefly, they handle the time synchronization by using GPS-stabilized cesium atomic clocks, and they had two separate national metrology organizations check their clock synchronization and confirm that it’s good to about 2 nanoseconds. It’s not the clocks.
One effect I haven’t seen anyone mention yet is that they need to take into account the earth’s rotation during the time the neutrinos are in flight (or, from another point of view, the curvature of the neutrino path by the Coriolis force). But the magnitude of the effect is less than a few nanoseconds, and so probably not relevant.
Since they are measuring the times using averages of an entire collection is it possible that you can get some new physics without tossing out relativity by having the neutrinos that are detected being preferentially created earlier in the burst?
It’d need to be a pretty subtle effect, as the distributions look pretty similar on both the front and back ends, but that’s the kind of thing that needs to be looked at, yes.
One effect I haven’t seen anyone mention yet is that they need to take into account the earth’s rotation during the time the neutrinos are in flight (or, from another point of view, the curvature of the neutrino path by the Coriolis force). But the magnitude of the effect is less than a few nanoseconds, and so probably not relevant.
The full time of flight for 730 km is only a few milliseconds, during which time the Earth rotates by maybe a meter (at the equator, less at European latitudes). So, yeah, I wouldn’t expect that to come into play.
If it is a real phenomenon and not a subtle measurement error, I still doubt that it’d be a result of actual superluminal neutrinos. I imagine that particle physics can accommodate it. Theoretical particle physicists are ingenious.
See comments at the relevant “Starts With a Bang” thread, many have proposed ways the measurements could be in error.
BTW, if some neutrinos are nevertheless FTL and others not (question: what energy range and what type would be superluminal?), we can call them newtrinos as an unexpected, effectively new subcategory of particle. Or maybe, whichever neutrinos are superluminal can be called tachinos (I think I have coined that, based on Internet search.) (PS: we have a problem defining their energy of course. I wonder who is thinking about that.)
Note also: I had some discussions at a picnic yesterday loaded with capable types like a Jefferson Lab (VA) nuclear physicist with whom I’ve done projects. He noted that maybe photons themselves don’t even travel at the physical limit c, because of a tiny mass (or, as I thought, interaction with background “syrupyness” of space like caused by dark energy and quantum effects, that might actually slow down photons more than neutrinos.) No, it doesn’t violate the semantics of “speed of light” because we would treat “c” as a physical parameter of limit based on the absence of other factors, as what would determine actual time dilation, causal factors, etc. even if light is inhibited from going that fast.
So, it’s pulses of neutrinos that are produced and pulses that are detected? But we already know that pulses of light can travel faster than c… You only have to do a search for “fast light” on Google Scholar to see that optics folks can also produce pulses and then detect the peak of the pulse at a time less than L/c later on a detector located a distance L away.
In optics it has been established that this does not violate relativity. The effect is caused by dispersion… Different frequency components of the pulse see different indices of refraction. It can only happen if the pulse propagates through a medium, not in free space. And it can’t be used to send information faster than the speed of light, essentially because real-world pulses are never infinite in duration; they always have a sharp turn-on and turn-off somewhere, meaning they will have a broader frequency spectrum than the bandwidth of a realistic dispersion of this type. Also, long pulses have narrower bandwidths than short pulses, and long pulses limit your bit rate… There are theoretical papers out there that carefully distinguish between pulse velocity and information velocity, and it is the latter that relativity says must be slower than c… And careful calculation shows that even in these “fast light” systems, it always is.
So is there any chance that the earth somehow has a dispersive index of refraction for neutrinos?
Like Anne @20, I also wondered about the Earth’s rotation, but for a different reason. However, I’ve been too busy reading blogs and grading and enjoying Fall to read the paper carefully enough to see if they implicitly assumed an inertial coordinate system (special relativity) where the Sagnac effect raises its ugly head. I think the N-S rather than E-W arrangement reduces the effect, but when known systematic errors are allegedly as small as statistical errors in an experiment with just thousands of events, everything matters.
BTW, that reminds me of my biggest question concerning experimental physics: if you know your possible measurement mistakes well enough to quote an uncertainty for them, are they really systematic errors?
I always thought if you accelerate a particle with mass to the velocity of light, the particle would become infinitely massive and therefore this was said to be impossible. Time is said to slow down for these accelerating particles; I wonder what effect this has on the experiment?
Nikola Tesla the “father of free energy” as also the discoverer of the neutrino reported in 1932 that neutrinos are small particles, each carrying so small a charge and they travel with great velocity, exceeding that of light.
Experimental tests of Bell inequality have shown that microscopic causality must be violated, so there must be faster than light travel. According to Albert Einstein’s theory of relativity, nothing with nonzero rest mass can go faster than light. But zero rest mass particles can go faster than light. Neutrinos have a small nonzero rest mass. Faster than light interactions are a necessity and they provide the nonlocal structure of the universe. In any physical theory, it is assumed that there is some kind of nonlocal structure that violates causality. If neutrinos are traveling faster than light, then neutrinos must be on the other side of the light barrier going backwards in time, where the future can interact with the past.
There are lots of theories and research regarding this matter, including Cherenkov radiation, the Standard Model Extension, Heim theory, the Novikov self-consistency principle, the Casimir effect, the Hartman effect, Casimir vacuum & quantum tunnelling, tachyons, etc.
– Nalliah Thayabharan
They also had to deal with time delays due to the finite time signals need to propagate through electronic equipment. So, you detect something and you want to give that a time tag using a clock, but it takes time for that time tag to arrive from the clock. This is relevant because the time delays at the CERN and LNGS setup are not the same.
In the article they describe how they measured this, but perhaps there can be some effects here that were missed.
Mary, the principles behind that are different than for this neutrino experiment. Also, it seems the pulses really were “detected” in a shorter interval than t = s/c. But yes, note that any “event” has a finite interval. Still, let’s do appreciate, these people are not idiots and have been thinking about such caveats, OTOH I saw some impressive proposals of error in comments at SWAB.
Re: Mary at #24:
Very unlikely for a couple of reasons.
1) To get appreciable changes in group velocities you need a significant change in the index of refraction or its derivative. Because neutrinos interact so weakly with matter, it seems implausible to change n by an appreciable amount, despite the high matter densities. Perhaps such an effect could be seen with an extremely monochromatic neutrino beam on resonance with some nuclear excitation.
But even if that were the case, I don’t think it would explain the data, because
2) Unlike the faster-than-light light pulses (which have a group velocity > c, but don’t transmit information faster than c), the creation of the neutrinos does have a “start time”/”sharp rising edge” (the pulse of particles that produces it), so this is an information propagation issue.
“One effect I haven’t seen anyone mention yet is that they need to take into account the earth’s rotation during the time the neutrinos are in flight.”
This is something I wondered about, but even more than this — what about the movement of the earth through space? Or the movement of the solar system itself? I don’t know nearly enough about particle physics to know if these subatomic tiny-mass particles are affected by inertia or not.
If c is an absolute ‘speed limit’, what happens when two objects (or particles or whatever) are moving towards each other, really fast but not near the speed of light, but more than half of the speed of light, so that their combined relative velocity to each other is greater than the speed of light? Like if you’re travelling at 30km/hr and a car in the other direction is travelling at 50km/hr, you’re approaching each other at 80km/hr. What would the effect be if the total approach speed is greater than c?
heather@31 what about the movement of the earth through space? Or the movement of the solar system itself?
One of, if not *the*, key points of special relativity is that the speed of light (in vacuum) is c, regardless of your motion. That’s what all the funky time dilation and length contraction which relativity is associated with is about – messing with the fabric of space-time so that the speed of light, any light (technically any (rest) mass-less particle in vacuum), is c. I’m not enough of a GR wonk to know if that’s substantially changed by the edge cases of General Relativity, but nothing I’ve heard indicates it would be. So if Einstein is right, there’s no combination of movement that would cause a particle to appear to travel faster than the speed of light when it “really” wasn’t.
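The rule that answers the 30 km/h + 50 km/h question above is relativistic velocity addition, w = (u + v)/(1 + uv/c²): relative speeds never simply add, and the combined speed never reaches c. A quick illustration:

```python
# Relativistic velocity addition: the closing speed of two objects
# approaching each other at u and v never exceeds c.
c = 299_792_458.0

def combine(u, v):
    """Relative speed of two objects approaching at u and v (m/s)."""
    return (u + v) / (1 + u * v / c**2)

# At everyday speeds the correction is negligible (effectively 80 km/h),
# but two objects each moving at 0.6c close at only ~0.88c:
print(combine(0.6 * c, 0.6 * c) / c)   # ~0.882, still below c
```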
ken jones@26 I always thought if you accelerate a particle with mass to the velocity of light, the particle would become infinitely massive and therefore this was said to be impossible.
That brings up a whole ‘nother kettle of fish. If these results are correct, we can’t be certain that the neutrino has mass. To the best of my knowledge, we’ve never directly measured the rest mass of the neutrino. The best we’ve done is seen that it can flavor switch, and inferred that it must have a mass, as massless particles, by necessity, travel at the speed of light and thus don’t “see” time. (As you approach the speed of light your internal clock slows down, so at the speed of light it stops.)
If some neutrinos aren’t traveling at the speed of light, they’ll have time in which to do their flavor switching, even if they’re (rest) massless. I don’t know, though, if the flavor switching correlates with energy like this superluminal travel seems to.
That said, all bets are on this being a (subtle) measurement error, rather than new physics. (But if you ask me, the place to look for new physics is with the behavior of neutrinos – they’re odd little ducks.)
Having worked with industrial installations where the construction plans and as-built measurements often vary by many meters, that was my natural suspicion. The rule is that the only people ever concerned with the original plans once the facility is built should be the lawyers.
That error estimate and the use of the term “benchmark” implies they’ve done the necessary surveying and measurement. Obviously, they’ve done the appropriate as-built measurements at CERN as well.
Anne, @20:
They track the satellites and periodically update the clocks on the satellites using relativistically derived corrections (see the description at http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html). The needed corrections are quoted as adding up to over 38 microseconds per day out of sync with the ground, due to general relativity (i.e., orbiting higher up in the earth’s gravity well). The distance error quoted is about 10 km per day without the corrections.
This is why I said they would need to derive some sort of interpretation framework that does not rely on relativity yet still keeps the basic time and location data they are using valid. Without relativity, all they have is bits in a file on a computer, not times and locations on the ground.
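As a sanity check that the numbers quoted above hang together, here’s a quick back-of-the-envelope calculation (Python; the 38 microsecond and ~10 km/day figures are the ones from the linked page):

```python
# An uncorrected GPS clock error of ~38 microseconds per day, multiplied by c,
# is a ranging error of ~11 km per day -- the same order as the ~10 km/day
# figure quoted above. Position error is just (timing error) x (speed of light).
C = 299_792_458.0          # m/s
clock_drift = 38e-6        # seconds of accumulated GR+SR clock drift per day

print(clock_drift * C / 1000)  # ~11.4 km of ranging error per day
```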
They indeed do a traverse survey from outside to inside, as described in their OPERA public note on geodesy. I agree with Chad there’s no way you miss the distance from external monuments to the detector by 20m.
In principle you could measure everything to very high accuracy in the survey coordinate system, but then misalign that system to the GPS coordinate system. It would take about a 2 degree misalignment (about the vector pointing towards CERN) which I also think is basically not a possible misalignment size for a careful survey.
Chad,
Can you explain that graph a bit more? You say it’s a correction for the TOF, but the TOF is much more than 1048 ns. Also, the paper says something about the TOF for the neutrinos being 987 ns, which is how they get the 60 ns difference, but I don’t see how they get that measurement from the graph. Is there a level of disagreement between the proton signal and the neutrinos in the bottom graph that we can’t see?
I believe that this is because they assumed some travel time as a base, which did not include all of the experimental effects, and then calculated a correction from that. That’s why it’s such a big number.
(This might be an artifact of the “blind analysis” that they did. That is, they did an analysis of their data using a deliberately wrong value of the overall time, with the actual value used hidden by a computer. This avoids having the people doing the analysis try to slant things so as to favor a preferred answer. Only after they’ve nailed down all the systematics on the wrong value do they “open the box” and put in the correct number.)
I don’t recall exactly what figure was used as the base, and I can’t look it up right now because I need to teach in 20 minutes.
@26 “Time is said to slow down for these accelerating particles, I wonder what this effect is on the experiment?”
Yes, but isn’t that the time on the neutrino’s ‘on-board clock’? Not the observer’s clock in the lab.
One point I think is really worth pointing out is that the actual shift between best fit waveform and waveform with all corrections is about 6% (60/1048) of the shift shown in Fig. 11. That’s about 1/5 the width of the dots for the data (as measured really roughly by zooming in on the figure).
What worries me is that I saw no discussion of the uncertainty in the waveform they fit the data to in the arXiv paper. Maybe any such error is negligible, but I’d like to have that mentioned. It seems like they assume that the sum of the BCT proton waveforms, properly normalized, is the same waveform that they should get in their neutrino detector. There are several steps where I could see some error come into this:
1) How well does the proton extraction structure match the BCT proton waveform?
2) How well does the proton extraction structure match the neutrino beam at Gran Sasso? (any change in form due to decay process, dispersion on the way to detector, etc?)
3) How well does the neutrino detector resolve the neutrino beam waveform?
4) What is the statistical error due to the summing procedure for the BCT waveforms?
I especially worry about this since the waveform is approximately a square wave, so the quality of fit is likely dominated by the leading and trailing edges (a point brought up in the CERN talk Q&A).
Although it may be working Einstein’s theory of relativity somewhat backwards as a check on the CERN results: if one were to calculate the energy required for a neutrino to travel at the speed of light by plugging a value for the mass of a neutrino into E = mc^2, and account for an excess amount from the CERN results, that would satisfy the energy requirement for a neutrino’s velocity to exceed the speed of light. Mathematically this is possible, but it rests on shaky measurements of the neutrino mass, here taken as 9.11 x 10^(-36) kg.
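For what it’s worth, here’s what that quoted mass gives if you simply plug it into E = mc^2 (a rough sketch; the 9.11 x 10^-36 kg figure is the commenter’s own, and current experimental limits put neutrino masses well below the value it implies):

```python
# Rest energy implied by the quoted neutrino mass, via E = m*c**2,
# converted from joules to electron-volts for comparison with the
# sub-eV mass limits usually quoted for neutrinos.
C = 299_792_458.0       # speed of light, m/s
EV = 1.602_176_634e-19  # joules per electron-volt

m = 9.11e-36            # kg -- the mass quoted in the comment above
rest_energy_eV = m * C**2 / EV
print(rest_energy_eV)   # ~5.1 eV
```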
Wasn’t there an experiment some time in the 1990s or thereabouts that tried to measure the square of the neutrino rest mass and got a negative value? That would mean an imaginary rest mass, which would imply that neutrinos are tachyons. I once asked about it and was told that it was a case of experimental error. OK, but was the cause of the error identified?
Mary @24, wild guess of a pure math guy, if the result is true, maybe “each” neutrino is a pulse, in the sense that neutrino velocity is a pulse velocity (they have structure).
Tachyons are hypothetical particles, mathematically “imaginary” in character, that may move faster than photons (light particles) in the universe, and they have yet to be discovered.
*** Mr. Rupak Bhattacharya-, of residence 7/51 Purbapalli, Sodepur, Dist 24 Parganas(north), Kol-110,West Bengal, India *Professor Pranab kumar Bhattacharya- , Now Professor and Head of Department of Pathology, and of WBUHS, Calcutta School of Tropical Medicine, C.R avenue; Kolkata-73, West Bengal, India*Miss Upasana Bhattacharya-, only daughter of Prof. P. K Bhattacharya ***Mr.Ritwik Bhattacharya , *** Miss Rupsa Bhattacharya ***Mr Soumyak Bhattacharya of residence7/51 Purbapalli, Sodepur, Dist 24 parganas(north) ,Kolkata-110,WestBengal, India , **** Mrs. Dalia Mukherjee , Swamiji Road, South Habra, 24 Parganas(north) West Bengal, India**** Miss Oindrila Mukherjee-Student ,**** Mr. Debasis Mukherjee of Residence Swamiji Road, South Habra, 24 Parganas(north), West Bengal, India*****; Dr. Hriday Ranjan Das Dept of Nephrology, IPGME&R, 244a AJC Bose Road *****Mr. Surajit Sarkar, , Dept of Pathology, IPGME&R, Kolkata-20.
What were the most elementary particles in the universe? According to Prof. M. Gell-Mann, the earliest particles were quarks and anti-quarks. The gospel of the Big Bang was then supposed to have been inflation, from zero volume at zero time and zero space, of a corpuscle containing a cosmic soup of these quark and anti-quark particles, in which energy was equivalent to mass, radiation and flash. Particles and antiparticles were in constant annihilation and went into radiation and flash. What the authors mean is that at trillions upon trillions of degrees (about 10^15 K) particles and antiparticles were constantly annihilating and being created again, although the total energy of elementary particles and radiation was just interchangeable. In the primordial fireball, or cosmic soup, the combined radiation and matter of the soup was constant. However, in QCD another particle has been proposed as the earliest in the universe. It was Madsen and Mark Taylor who gave the concept of these particles in the primordial universe. The name of their particle is the “neutrino”. Neutrinos were also non-zero-mass particles according to those authors and many others, though in the standard teaching they are massless. There are broadly three species of neutrinos: 1) electron neutrinos, 2) muon neutrinos, 3) tau neutrinos, and a 4th variety we call missing neutrinos. During the first half of the twentieth century, physicists became convinced that all stars, including our Sun, shine by converting hydrogen into helium deep in the interior. According to this theory, 4 hydrogen nuclei, called protons (p), are changed within the solar interior into a 4He nucleus, 2 anti-electrons (e+, positively charged electrons), and 2 elusive and mysterious ghostly particles called neutrinos. This process of nuclear conversion is believed responsible for sunshine and therefore for all life on Earth. The conversion process, which involves many different nuclear reactions, can be written schematically as: 4p → 4He + 2e+ + 2νe —-[1], as Bhattacharya Rupak wrote it once in 1995.
I.e., two neutrinos are produced each time the fusion reaction (1) occurs within a star. Since 4 protons are heavier than a helium nucleus plus two positive electrons and two neutrinos, reaction (1) releases a lot of energy in the Sun, which ultimately reaches Earth as sunlight. The reaction occurs very frequently. Neutrinos escape easily from the Sun, so their energy does not appear as solar heat or sunlight on Earth. Sometimes neutrinos are produced with relatively low energies and the Sun gets a lot of heat; sometimes they are produced with higher energies and the Sun gets less energy. Neutrinos have zero electric charge and interact very rarely with matter; according to the high-level textbook version of the standard model of particle physics, they are massless. About 1000 billion neutrinos from the Sun pass through your thumbnail every second, but you do not feel them because they interact so rarely and so weakly with matter. Neutrinos are practically indestructible; almost nothing happens to them. For every hundred billion solar neutrinos passing through the Earth every second, only about one interacts, if at all, with the stuff the Earth is made of. Because they interact so rarely, neutrinos can escape easily from the solar interior, where they are created, and bring us on Earth direct information about the solar fusion reactions. There are three known types of neutrinos, as already told. Nuclear fusion in the Sun produces only neutrinos associated with electrons, the so-called electron neutrinos. The two other types, muon neutrinos and tau neutrinos, are produced, for example, in laboratory accelerators or in exploding stars, together with heavier versions of the electron, the muon and tau particles. But there are some missing neutrinos too. All accepted models in cosmology and in particle physics, however, take neutrinos to be massless or nearly so. But the idea that neutrinos might have mass is about 40 years old.
The successful unification of the weak and electromagnetic force fields implied that there should be as many kinds of neutrinos as there are different kinds of electron-like particles. There is still no confirmed evidence that neutrinos have a non-zero mass (Bhattacharjee Rupak and Bhattacharya Pranab Kumar). The heaviest neutrinos in the GeV temperature range span some electron volts. But the scientists found that this woolly mammoth allegedly also carries a mass of 17,000 electron volts (17 keV), from radioactive beta decay, the process in which an unstable nucleus in a radioactive isotope emits both an electron and a neutrino. Rupak and I recorded the energy of the decay electrons by sending them into a crystal, where they knock out other electrons, creating a current that provides a measure of the energy; a big bump at 17 keV regularly appeared, taken from the energy of a few electrons. The obvious reading was 17 keV neutrinos, with 1% of the emitted neutrinos belonging to the heavy kind. Neutrinos can pass through the entire Earth, almost at or near the speed of light, without leaving a trace, and they are immune to many of the forces that bind matter, including electromagnetic forces. But obviously faster than the speed of light? They have almost never been observed outside the controlled environment of the big accelerator laboratories of the USA and of CERN in Europe. Neutrinos are even more common in the universe than photons (light particles), probably because the Big Bang left a sea of very low energy neutrinos that permeates every corner of this cosmos. On 30 March 2006 the US laboratory Fermilab reported the first result from a neutrino experiment called MINOS (Main Injector Neutrino Oscillation Search), in the Soudan mine at a depth of 776 meters in Minnesota, 732 km away.
The MINOS experiment showed that there is a shortfall in the number of muon neutrinos if they are detected a long distance from their point of production; these may be called “missing neutrinos”, the 4th category we mentioned earlier. Solar neutrinos have multiple personality disorder. They are created as electron neutrinos in the Sun, but on the way to Earth they change their type. For neutrinos, the origin of the personality disorder is a quantum mechanical process called “neutrino oscillation”. Lower-energy solar neutrinos switch from electron neutrino to another type as they travel through vacuum from Sun to Earth. The process can go back and forth between different types. The number of personality changes, or oscillations, depends on the neutrino energy. At higher neutrino energies, the oscillation is enhanced by interactions with electrons in the Sun or in the Earth. Stas Mikheyev, Alexei Smirnov, and Lincoln Wolfenstein first proposed that interactions with electrons in the Sun could exacerbate the personality disorder of neutrinos, i.e., the presence of matter could cause the neutrinos to oscillate more vigorously between different types. But the standard model of particle physics assumes neutrinos are massless, which we authors could never follow. In order for neutrino oscillations to occur, some kinds of neutrinos must have mass; some may not. Neutrinos are elementary particles, the neutral counterparts of the charged leptons, namely electrons, muons and τ leptons, all of which take part in weak interactions. Determination of neutrino properties remains notoriously difficult from the experimental point of view and poses challenges at the deepest level of particle physics research. At this moment, there is no information on even the values of their individual masses. We authors, however, proposed the values m1 < 3 eV, m2 < 190 keV, m3 < 18.2 MeV as the masses of the different neutrino species.
It is worth noting that direct detection of ντ was reported for the first time only in 2006, from Fermilab in the USA. The presence of neutrino oscillation in the March 2006 Fermilab experiment, the Direct Observation of NU TAU E872 [DONUT] experiment, and the GALLEX and SAGE experiments implies the existence of distinct and non-vanishing masses for the neutrino flavors, and of missing neutrinos. So most neutrinos must have a non-zero mass. For electron neutrinos the mass is 10^-6 eV. A mass in excess of 1 eV would be significant, since neutrinos would then contribute more mass than stars (stars like the Sun) to the mass density of the universe. The universe would be closed if the neutrino mass were between 25 and 100 eV. So 1) “electron neutrinos” had a mass of 20 eV, 2) “muon neutrinos” had a mass of 0.5 MeV and 3) “tau neutrinos” had a mass of 250 MeV. Electron neutrinos constituted about a third of the total number of neutrinos. Most of the neutrinos produced in the interior of the Sun, all of which are electron neutrinos when produced, are changed into muon and tau neutrinos by the time they reach Earth. In QCD, studies suggest that the primordial universe was dominated by neutrinos of non-zero mass rather than by quarks with their color. A natural scale then emerged, determined by the maximum distance neutrinos could stream freely as the universe expanded, before the neutrinos slowed down on account of their mass, below the scale of superclusters, i.e. galaxy formation. In this neutrino theory no pre-existing fluctuation survived, and the first structures then collapsed and formed galaxies.
Faster-than-light particles, tachyons: are these the missing neutrinos? The OPERA neutrino experiment in 2011, at the underground Gran Sasso Laboratory, measured the velocity of neutrinos from the CERN proton-generated neutrino beam over a baseline of about 730 km, with approximately 16000 neutrino interaction events detected by OPERA, at higher accuracy than previous studies conducted with accelerator neutrinos, though the distance was too short; the aim was to perform the first detection of neutrino oscillations in direct appearance mode in the νμ→ντ channel. G. R. Kalbfleisch in 1979 [Phys. Rev. Lett. 43, 1361 (1979)] showed the muon neutrino speed |v-c|/c < 4×10^-5 at high energy (~30 GeV), while the MINOS experiment reported (v-c)/c = 5.1 ± 2.9 ×10^-5, however at lower neutrino energy (10 MeV). The OPERA experiment was done with only muon neutrinos generated from hadrons [400 GeV/c protons from the CERN Super Proton Synchrotron (SPS)]; the CNGS beam used here was the purest νμ beam, at an energy of 17 GeV, optimized for the νμ→ντ neutrino oscillation study. What about the antineutrino contamination in those tubes? What were the exact place and exact time that the neutrinos were produced, if from mesons, in figure 5 of OPERA? How much uncertainty remains in the neutrino δt, and in computing δt for photons? The relative difference of the muon neutrino velocity with respect to the speed of light in OPERA is (v-c)/c = δt/(TOF_c - δt) = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) ×10^-5. The results of the study indicate, for CNGS muon neutrinos with an average energy of 17 GeV, an early neutrino arrival time with respect to the one computed by assuming the speed of light in vacuum: δt = (60.7 ± 6.9 (stat.) ± 7.4 (sys.)) ns. The question remains whether any particle moves faster than the speed of photon particles. We authors consider it possible through another particle, the “tachyon”, claimed to have been detected in 1974 by Roger Clay and Philip Crouch of Adelaide University in Australia. What were tachyon particles?
Of course, the superstring theories that evolved from spinning string theories incorporated supersymmetry and had no tachyonic ground states. Tachyons are still a mathematical quirk with no established physical meaning. Can these tachyons be the missing neutrino particles with truly zero mass? However, Einstein’s relation E = mc^2 is usually read as saying that nothing in this observable universe can cross the speed of the photon [light particles]. But tachyons would have the curious property of going faster than the speed of light, and the particle must lose energy as it speeds up, unlike ordinary particles. It is still an open question whether, within relativity theory, Einstein’s solutions permit two families of particles to exist: 1) those which always have a speed less than light, and 2) others which always have a speed greater than light. If the second is permitted, then the latter particles must be tachyons, or a kind of neutrino we do not know yet, the “missing neutrinos with zero mass”. Physicists to date do not understand how neutrinos behave when they travel astronomical distances; the neutrinos behave differently than physicists had assumed so far. If tachyons really exist, then many of our normal physical laws, the laws of this universe, would have to be reversed. The standard description of the two families of particles allowed by Einstein’s equations follows from the requirement that the total energy of a particle is given by the formula E = m0 c^2 / (1 - (v/c)^2)^(1/2). The key point is that taking the square root (half power) introduces two families of solutions. For zero velocity, of course, the expression reduces to mc^2. Square roots of negative numbers, although allowed mathematically, do not have an obvious physical significance, and the obvious interpretation of this expression, giving real total energies, is that the term (1 - (v/c)^2) must be positive or at least zero, so that v is always less than or equal to c and particles can never travel faster than light.
But there may be other ways to think as well. One possibility is an imaginary mass im (where i is the square root of -1). In that case the situation is reversed: in order to obtain a real energy, we must take the square root of a negative number in the denominator as well, so that the two factors of i multiply out to -1. In other words, for imaginary masses v must exceed c, so that (1 - (v/c)^2) is always negative. This is the origin of the tachyon. But suppose we allow v to exceed c while keeping the mass m real. Now we are taken into very strange realms: the imaginary part of space-time. Might we consider a tachyon particle with imaginary (or zero) mass moving through the real part of space-time at a speed greater than that of light? Tachyons could then provide a link between past and future, and possibly time travel.
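The two-families argument in the comment above can be made concrete with a few lines of arithmetic (a sketch only; m0 is an arbitrary illustrative mass, not a measured one):

```python
# Numerical look at E = m0*c**2 / sqrt(1 - (v/c)**2).
# For v < c the energy is real and grows without bound as v -> c.
# For v > c the square root is imaginary, so a real rest mass gives a purely
# imaginary energy -- a real energy would require an imaginary rest mass,
# which is the defining property of a tachyon.
import cmath

C = 299_792_458.0  # speed of light, m/s

def energy(m0, v):
    """Total relativistic energy; complex-valued so v > c doesn't crash."""
    return m0 * C**2 / cmath.sqrt(1 - (v / C)**2)

m0 = 1.0  # arbitrary illustrative rest mass, kg
for beta in (0.5, 0.9, 0.99):
    # abs(E) / (m0*c^2) is just the Lorentz gamma factor, growing toward infinity
    print(beta, abs(energy(m0, beta * C)) / (m0 * C**2))

E_super = energy(m0, 2 * C)        # v = 2c with a REAL rest mass...
print(E_super.real, E_super.imag)  # ...yields a purely imaginary energy
```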
Actually, if the universe is not Lorentz covariant but Galilean invariant, there is no violation of causality. I’m not saying this is the case, just pointing it out.
Thanks for this explanation — it’s the best I’ve seen online so far!
Side comment: You can’t use this experiment to “toss out relativity” because it relies on relativity (both special and general) when it uses GPS to measure the baseline. Unless you argue that the baseline is wrong because the GPS signals are designed to hamper inter-continental use.
ken @26:
Mass is invariant. The “mass increase” was an attempt to keep momentum = m*v rather than admit that galilean Newtonian mechanics was flawed. You can’t use the so-called relativistic mass for anything other than m*v: it fails in KE = 1/2 mv^2, gravity, and any other place you use mass in conventional Newtonian mechanics.
Further, nothing in SR forbids a particle traveling faster than c. It forbids crossing from v < c to v > c. The problem with tachyons is figuring out how they interact with normal matter. That is a modeling problem, not a relativity problem. I’m old enough to remember that tachyonic neutrinos were one of many random theories that pre-dated the standard model, temporarily revived by the hints of the experiment mentioned @41. (Better experiments excluded that mass value but never really excluded a negative mass-squared unless you built m^2 > 0 into the fit procedure.)
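A quick numerical illustration of the point that “relativistic mass” only works in the momentum formula (a sketch; the rest mass and speed are arbitrary illustrative values):

```python
# Using a velocity-dependent "relativistic mass" M = gamma*m makes p = M*v
# come out right, but plugging the same M into the Newtonian kinetic energy
# 1/2*M*v**2 does NOT reproduce the correct relativistic kinetic energy,
# which is (gamma - 1)*m*c**2.
import math

C = 299_792_458.0  # speed of light, m/s
m = 1.0            # rest mass, kg (illustrative)
v = 0.9 * C
gamma = 1 / math.sqrt(1 - (v / C)**2)

p_correct = gamma * m * v            # true relativistic momentum
ke_correct = (gamma - 1) * m * C**2  # true relativistic kinetic energy
ke_wrong = 0.5 * (gamma * m) * v**2  # "relativistic mass" in Newtonian KE

print(ke_wrong / ke_correct)  # ~0.72 at 0.9c -- off by nearly 30%
```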
RM @32 makes a common, yet totally incorrect, claim about special relativity. Special relativity, including results of measuring the speed of light, does not apply to any motion. It only applies within inertial coordinate systems. Rotation of any kind, including of the earth and its solar system and its galaxy, requires a more careful treatment. This is well known. The device called a ring laser gyro uses the Sagnac effect (which is the apparently faster transit of light one way around a ring that is in a non-inertial frame) for navigation. It is supposed to be accounted for in long-distance clock synchronization on earth.
RM @32 makes the very important point that the only evidence for a non-zero neutrino mass comes from flavor oscillations (which this experiment was trying to measure!). The mass is required by the standard model for this process, where the mixing of flavors requires a mixing of particles with different masses. Lots of people don’t like the plethora of arbitrary parameters (9 numbers in the CKM matrix, for example) required by the standard model, but so far it is the only thing that works. So far.
heather @31 asks a question about velocity addition that should be answered in Chad’s new book.
From a general credibility standpoint, this doesn’t cut it. We know these things are measured within bell curves, and we’ve seen how you can hide things at the front (or back) of such a curve. Since their finding could be hidden within the front of the curve, then it’s automatically dubious. Far more convincing would be a finding that puts their neutrinos at a velocity outside the possible curve. Until we see that, then it’s probably just a matter of doing the grinding work to find the mechanism of their error.
What is the time dilation due to the earth’s gravitational field? If the normal gravity-affected clocks do not apply to neutrinos, they can arrive sooner than light because their clocks are running faster.
AND of course the transit time for neutrinos from SN1987A, which went through low gravity interstellar regions, will not be affected (to anything like the same degree) by this.
(Yes, I realize this is a major hole to open up in general relativity. I’m just freewheeling.)
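Freewheeling along with a rough order-of-magnitude check: the fractional gravitational time dilation at the Earth’s surface (relative to a clock far away) is GM/(Rc^2), and it comes out tiny compared to the OPERA anomaly, so a clock-rate effect of this size can’t plausibly account for the result on its own (sketch; standard textbook values for G, M, R):

```python
# Fractional gravitational time dilation at the Earth's surface, GM/(R*c^2),
# compared to the ~2.5e-5 fractional speed anomaly OPERA reports.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
R = 6.371e6     # Earth radius, m
C = 299_792_458.0

dilation = G * M / (R * C**2)
print(dilation)            # ~7e-10
print(2.48e-5 / dilation)  # the OPERA anomaly is tens of thousands of times bigger
```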
Joke, from Rosemary Kirstein’s blog:
The barman says “We don’t serve faster than light neutrinos in here.” A neutrino walks into a bar.
This might be a silly question, but why do they even bother matching up the production/detection curves of neutrinos?
Wouldn’t a much simpler way of analysis be just to time the start of the proton pulse accurately, then calculate the expected time a particle would arrive if traveling at c, then see how many neutrinos they detect that arrive faster than that?
I assume this is because they weren’t specifically looking for superluminal neutrinos, correct?
Also, I would love it if you made a post explaining more in detail how exactly clock synchronization assisted by GPS works, seems like a fascinating topic and I had no idea you could get nanosecond accuracy like that.
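Not the GPS post requested, but the basic timing arithmetic behind the “simpler analysis” idea is short enough to sketch (Python; the baseline and 60 ns figures are the ones quoted in the post and paper):

```python
# At the speed of light, the ~730 km CERN -> Gran Sasso baseline takes
# about 2.44 ms, and the reported 60 ns early arrival is the same ~2.5e-5
# fractional effect the paper quotes for (v - c)/c.
C = 299_792_458.0
baseline = 730_000.0      # m, approximate CERN -> Gran Sasso distance
tof_light = baseline / C  # expected time of flight at c
early = 60e-9             # s, the reported early arrival

print(tof_light * 1e3)    # ~2.44 ms
print(early / tof_light)  # ~2.5e-5, i.e. the fractional speed excess
```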
#49 – Wouldn’t the neutrino have already taken the drink and gone before the bartender saw the door open?
Slw, sure those are good questions. I’m sure the claimants thought about such things too. We shouldn’t be hard on them AFAICT, OPERA crew was basically saying “we got this very interesting result from making calculations according to what seemed reasonable to us, we know we could be wrong and that’s what everyone is going to hash out.” It’s not like they insisted (tre?) and would be shocked or embarrassed by finding out that factors such as you describe mean the neutrinos (tachtrinos as I have dubbed them) were not really superluminal.
Also, if the tachtrinos are FTL then of course they cannot manifest their mass in traditional ways, UNLESS, as some have noted, photons themselves don’t really go at the full physical limit c. IOW, the real value of “c” for physical limitation, time dilation etc. might be a tiny bit bigger than what we measure, due to light being slowed by a sort of syrupy space, etc. However, it seems to me there would be some dispersion, which is not observed, and calculations of e.g. the γ factor for very high energy particles would be off enough to be noticed, etc. Thoughts?
Temperature controlled spurious frequencies for Generation and degeneration:
The internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature/entropy or pressure/volume; in fact all thermodynamic potentials are expressed in terms of conjugate pairs. The enthalpy phase conjugation between pressure and volume, and the entropy conjugation between temperature and viscosity, with temperature and pressure kept constant in isometric and isobaric combinations, is reflected in entropy and volume. The influence of fluid viscosity on the entropy generation rate is investigated in pipe flow at different wall temperatures. The temperature and flow fields are computed numerically using the control volume method. It is found that fluid viscosity considerably influences the temperature distribution in the fluid close to the pipe wall. In this case, the high temperature gradients extend further towards the pipe center in the constant-properties case. On the other hand, variable properties reduce the size of the region where the high temperature gradients occur in the flow field. Entropy contours follow almost the temperature contours.
A magnetic field controlling the temperature by the Peltier effect contributes reversible temperature compensation in the system. The volumetric reflections in decreasing order may also be reflected in the entropy of the system as a converging configuration, and a sudden condensation at the Bernoulli nozzles formed by the magnetic field is possible as mass transfer takes place between increasing and decreasing pressure, with alternating pressure increase and decrease contemplated between positive and negative pressure.
Near the resonant frequencies the induced polarisation will become very large. A combination of circular and linear polarisation repetitions may induce faster reactions. This new material of frequency-varied vaporization could be frequency phase conjugated to produce an amplified energy gain, a part of Bernoulli-theorem applications in 45 degree converging nozzles.
A sound wave represents an adiabatic pressure change progressing periodically in space and time. In water, as a result of the density maximum at 4ºC, the simultaneously appearing temperature wave is very small in magnitude compared to the pressure wave. Thus, in the present case essentially we have to consider only the effect of the pressure change on the chemical equilibrium.
A chemical equilibrium is always pressure-dependent whenever the reaction partners (in equilibrium with each other) differ in volume. When this is the case, a pressure change will induce a chemical excess reaction which takes place at a finite rate and leads to adaptation to the particular equilibrium state concerned. If the periodic pressure change takes place very rapidly in relation to the chemical reaction, the system will practically not “notice” these changes: the rapid positive and negative disturbances average out before the onset of any appreciable reaction.
On the other hand, if the pressure change takes place very slowly compared to the chemical reaction, the system follows these changes with practically no lag. The sound then merely propagates at a slightly lower velocity, for the compressibility of the medium contains a contribution from the state of the chemical equilibrium (cf. Fig. 1b). Now, the interesting case is that in which the rate of re-establishment of equilibrium is comparable to the rate of the pressure change (i. e. when the time constant for the establishment of chemical equilibrium is of the same order of magnitude as the period of the acoustic wave). In this case the system tries to adapt continuously to the pressure change but does not quite succeed, so that it lags behind the pressure change by a finite phase difference. The chemical state is characterized by the concentrations of the reaction partners or the reaction variable. Because of the finite volume difference between the reaction partners in equilibrium, a volume increment characteristic of the chemical change follows the pressure change with a certain phase lag. In all fields of physics where there is this kind of phase difference between “conjugate” variables there is a transfer of energy (in this case a reduction in the amplitude of the sound waves).
For a finite phase difference, the integral ∫P dV is different for the compression and dilatation periods. It was very quickly found that the absorption could not be caused solely by a simple inter-ionic interaction, either in terms of the Debye–Hückel ion clouds, for which we would expect a broad continuum of absorption at high frequencies, or in terms of ionic association as described by Nernst or Bjerrum, which should give a single absorption maximum. In short, it appeared that there was an interaction between magnesium ions, sulphate ions, and water molecules in the form of a sequence of linked reactions.
Even if the series resistances at the spurious resonances appear higher than that at the wanted frequency, a rapid change in the main-mode series resistance can occur at specific temperatures when the two frequencies coincide. A consequence of these activity dips is that the oscillator may lock onto a spurious frequency (at specific temperatures). This is generally minimized by ensuring that the maintaining circuit has insufficient gain to activate unwanted modes.
New aerodynamic propulsion systems:
Citation: Propulsion systems made of selective metamaterials, phase-conjugated between positive and negative refractive index by frequency-selective electromagnetic resonance, may accumulate energy in clockwise and anticlockwise spiral vortices that could be used to propel aerodynamic space vehicles; this has been evaluated by CRERC/C.I.T. project coordinator Sankaravelayudhan Nandakumar of the new energy research centre formed under the guidance of Chairman Krishnanpillai.
Near the resonant frequencies the induced polarisation will become very large. A combination of circular and linear polarisation repetitions may induce faster reactions in these materials, and a possible fast combustion system could be induced.
This new material of frequency-varied vaporization could be frequency phase-conjugated to produce an amplified energy gain, as part of Bernoulli-theorem applications in 45-degree converging nozzles.
A sound wave represents an adiabatic pressure change progressing periodically in space and time. In water, as a result of the density maximum at 4ºC, the simultaneously appearing temperature wave is very small in magnitude compared to the pressure wave. Thus, in the present case essentially we have to consider only the effect of the pressure change on the chemical equilibrium. A chemical equilibrium is always pressure-dependent whenever the reaction partners (in equilibrium with each other) differ in volume. When this is the case, a pressure change will induce a chemical excess reaction which takes place at a finite rate and leads to adaptation to the particular equilibrium state concerned. If the periodic pressure change takes place very rapidly in relation to the chemical reaction, the system will practically not “notice” these changes: the rapid positive and negative disturbances average out before the onset of any appreciable reaction.
On the other hand, if the pressure change takes place very slowly compared to the chemical reaction, the system follows these changes with practically no lag. The sound then merely propagates at a slightly lower velocity, for the compressibility of the medium contains a contribution from the state of the chemical equilibrium (cf. Fig. 1b). Now, the interesting case is that in which the rate of re-establishment of equilibrium is comparable to the rate of the pressure change (i. e. when the time constant for the establishment of chemical equilibrium is of the same order of magnitude as the period of the acoustic wave). In this case the system tries to adapt continuously to the pressure change but does not quite succeed, so that it lags behind the pressure change by a finite phase difference. The chemical state is characterized by the concentrations of the reaction partners or the reaction variable. Because of the finite volume difference between the reaction partners in equilibrium, a volume increment characteristic of the chemical change follows the pressure change with a certain phase lag. In all fields of physics where there is this kind of phase difference between “conjugate” variables there is a transfer of energy (in this case a reduction in the amplitude of the sound waves).
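The resonance condition described here, namely strongest absorption when the relaxation time is comparable to the acoustic period, can be sketched numerically. A minimal illustration, assuming a single Debye-type relaxation and a made-up relaxation time of 1 microsecond:

```python
import math

def absorption_per_wavelength(omega, tau, amplitude=1.0):
    """Debye-type relaxation contribution to sound absorption per
    wavelength: proportional to x / (1 + x^2) with x = omega * tau,
    which peaks exactly when omega * tau = 1."""
    x = omega * tau
    return amplitude * x / (1.0 + x * x)

tau = 1e-6  # assumed chemical relaxation time in seconds (illustrative)

# Scan angular frequencies from 1e4 to 1e8 rad/s on a log grid and
# locate the frequency of maximum absorption.
omegas = [10.0 ** (k / 10.0) for k in range(40, 81)]
peak = max(omegas, key=lambda w: absorption_per_wavelength(w, tau))
print(peak * tau)  # peak * tau is 1 (up to rounding): maximum at omega * tau = 1
```

The specific numbers are placeholders; only the location of the maximum matters, and it sits where the relaxation time matches the acoustic period.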
For a finite phase difference, the integral ∫P dV is different for the compression and dilatation periods. It was very quickly found that the absorption could not be caused solely by the interaction between the Mg^2+ and SO4^2- ions and the water, for neither magnesium chloride nor sodium sulphate dissolved on their own produced comparable effects. On the other hand, neither could a simple inter-ionic interaction be the explanation, either in terms of the Debye-Hückel ion clouds, for which we would expect a broad continuum of absorption at high frequencies^11, or in terms of ionic association as described by Nernst^12a or Bjerrum^12b, which should give a single absorption maximum. In short, it appeared that there was an interaction between magnesium ions, sulphate ions, and water molecules in the form of a sequence of linked reactions.
These fast ionic reactions could be used in a xenon difluoride-bismuth metamaterial vaporising system that can be generated in converging nozzles.
Structure of a tapered fiber consisting of an SBS generator and an SBS amplifier connected with a taper structure
Propagation of the incident beam in the taper region of a tapered fiber.
power reflectivity in the amplifier part, meaning the large-diameter part of the fiber. The damage problem of the fiber is thereby relieved by a factor of 25. This type of fiber showed a dynamic range of 1:260, and it can be used over a range of 1:200 with very high reflectivity, above 90%, as shown in Fig. 2.26. The measured fidelity was above 90% over the whole dynamic range. Because of the short length of this tapered fiber, below 1 m, the polarization of the incident light was conserved in the reflected light. Therefore, this type of optical phase conjugator can be used in double-pass amplifier schemes, with a polarizing element to take out the phase-conjugated SBS signal after the second pass through the amplifier system.
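The quoted factor of 25 is what a 5:1 taper in core diameter would give, since at fixed power the intensity scales inversely with the cross-sectional area. The diameters below are hypothetical, chosen only to reproduce that factor:

```python
# Hypothetical core diameters for the two fiber sections; the text
# only states the resulting factor of 25, not the actual dimensions.
d_generator = 5.0   # small-core SBS generator section (arbitrary units)
d_amplifier = 25.0  # large-core SBS amplifier section (arbitrary units)

# At equal optical power, intensity ~ power / area, so the margin
# against optical damage improves by the ratio of the areas.
relief_factor = (d_amplifier / d_generator) ** 2
print(relief_factor)  # 25.0
```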
The oscillator emits a low-energy beam of diffraction-limited quality. It is then amplified by the gain medium operating in a double-pass configuration. Thanks to the conjugate mirror, the returned beam is compensated for any aberrations caused by the high-gain laser amplifier. A diffraction-limited beam is extracted by 90-degree polarization rotation. Owing to these remarkable properties, it is expected that we can realize a new class of high-power, high-brightness phase-conjugate lasers delivering a beam quality that meets the requirements of scientific and industrial applications.
Compensation of the aberrations due to a phase-distorting medium by wavefront reflection on a phase-conjugate mirror.
Royal Society meeting: new finding on optical waves converted to gravity waves at a certain de Broglie reflective angle of 45 degrees, by Cape Renewable Energy Research Center, Cape Institute of Technology-reg
History: General relativity predicts that massive astrophysical objects in motion emit gravitational waves. There are indirect signs that this prediction is correct, such as the “spin down” in energy of pulsars, but a more direct test is to detect the waves with interferometry. Numerical simulations of the expected gravitational wave signatures for various events are useful guides to these experiments; the challenge is that these computations are tricky and require huge processing power. In a paper in Physical Review Letters, Carlos Lousto and Yosef Zlochower of the Rochester Institute of Technology, US, report their progress in generating gravitational wave forms for pairs of black holes as they orbit each other and merge.
The most promising black hole binaries for gravitational wave detection have mass ratios around m1/m2~1/100, but so far calculations have been limited to ratios around 1/15. To break this barrier, Lousto and Zlochower carry out a fully nonlinear calculation with improved numerical techniques and a modified gauge (which is related to how the spacetime coordinates are treated). After 1800 hours of computation with 768 processors, they obtain wave forms for the final two orbits of a binary system with mass ratio of 1/100 before the smaller black hole plunges into the larger one. The ability to calculate signatures for black hole binary mergers for these more extreme mass ratios should enable the large gravitational wave detection collaborations to better understand what they might be seeing.
Optical waves could be used to induce a new gravity force:
The Cape Institute's project coordinator has discovered something new to offer to the scientific society, based on project coordinator Sankaravelayudhan's discovery. Really it is surprising and rather astonishing. He is working under Dr. Ramalingam of the Cape Renewable research center, along with a research team under Dr. Azhagesan, Principal, C.I.T.
Light-emitting nanocrystal activities and the photosensitivity of modern nanotechnology could be studied based on Steigerwald and Brus (1990). But no investigation has so far been carried out on the temperature dependence of the luminosity, beyond Lee et al. (2000). And the study of a photosensitive medium typically enhancing the sending of water-affined gravity waves at the criticality of conversion into gravity waves has not been studied so far. The optical reflective property inheriting a strange force of moving an object, or having twist-reactive z-dynamical capacitive twisters using de Broglie matter waves, requires an investigation. Some information available on tornado Bossonova dynamics and energy amplification gain has to be applied further, with reference to the Bernoulli flow of middle-z dynamics; an upward or downward force such as twisters may even be able to move an object.
Electrons collected at 45-degree clusters: electrons collected at the lunar planetary boundary produce de Broglie-induced optical gratings, with reference to 45-degree electron valley collections that typically reflect solar rays. There is water-affined gravity inheritance on new moon and full moon days. At these particular reflections the inheritance is one of matter waves that may collect solids or water particles, as the case may be, based on the formula 2d sin θ = nλ.
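The garbled formula at the end is Bragg's law, 2d sin θ = nλ. A minimal sketch of it, with an arbitrary unit for the spacing d, shows that a 45-degree reflection requires n λ = d √2:

```python
import math

def bragg_angle(d_spacing, wavelength, order=1):
    """Return theta in degrees satisfying 2 * d * sin(theta) = n * lambda."""
    s = order * wavelength / (2.0 * d_spacing)
    if s > 1.0:
        raise ValueError("no diffraction order: n * lambda exceeds 2d")
    return math.degrees(math.asin(s))

d = 1.0                     # lattice spacing, arbitrary units
lam = d * math.sqrt(2.0)    # wavelength giving a 45-degree reflection
print(bragg_angle(d, lam))  # 45 degrees, up to floating-point rounding
```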
Optical waves could be converted into gravity waves: perhaps we are on the verge of a new finding that optical waves could be induced to form a gravity force. This means brain power could be used to lift or move an object based on neuron emissions of a psychic nature. This finding could be used in future robotic applications.
This is a rather astounding interpretation: at certain Bragg-reflected conditions, say 45-degree electron clusters and interference, this could act as gravity waves, as observed on the lunar surface, due to optical waves interchanged with matter waves. If this occurs under dark states enhancing a high velocity, such an oscillation and resonance seems very important as an extension of Einstein's photoelectric effect.
Hall's quantized magnetic field observed: de Broglie optical lattices of lunar boundary gratings for 45-degree angular electron valley convergence deal with water-affined waves under the action of a quantized magnetic field that acts as frequency-shifting slots, with voltage input among the grate valley reflections, in amplified production of super gravity waves that could be used in artificial-intelligence robotic optics using PWM modulation; this may really be happening on the lunar surface.
It is here that the optonic wave is surprisingly transferred as gravity waves. Here the de Broglie matter wave is Bragg-reflected by the spatially periodic optical field. The matter and optic field have interchanged roles from the usual case of Bragg reflection of an electromagnetic field by crystalline accumulation, with photon induction forming new crystalline planes. In between the high-velocity states operates the point at which de Broglie wave fields propagate without scattering at black-hole quantum dots, by induced dark states of intermittence. This is a combination of de Broglie scattering with intermediate dark-matter reinforcement without scattering under high-velocity recoiling; in fact Bernoulli's energy-transfer dynamics can also be applied partially between velocity and pressure, with a quantum-mechanical mix-up.
The effect of boundary reaction temperature on the optical properties of the light-sensitive lunar surface requires further investigation. Then we have to concentrate on low reaction temperatures in forming a high-quality crystalline structure, forming high-resolution transmission spectral microscopic structures. Possibly this could be applied at the Mercury boundary of Seebeck-Peltier electron flow slots available on the boundary. The bonding energy shifts on CPS by the de Broglie theta-angle intensity of electron clusters, with a variable de Broglie theta angle confirming typical water-affined waves of gravity at 45 degrees. They really act as capacitive twisters for z emissions, as observed in the photon-field mechanical transformation.
Advanced intellectual brain evolution: the evolution of an evolution may differ based on genetic wave stimulation from planetary boundary scattering as feedback on similar lunar surfaces. This means a new, advanced brainy man of extraterrestrial origin may exist elsewhere in outer planetary space, as predicted by Hon. Stephen Hawking.
Conclusion: whether the lunar boundary inherits a typical de Broglie matter-wave resonance and oscillation between dark stable reinforcement and unstable scattering requires investigation of the twister ejections at which the gravity waves attract the sea water, which is really a surprising investigation. The plunging of black-hole quantum dots of lesser mass into those of higher mass, producing gravity waves, will find new technical applications in future robotics. It may be that the electron, by its spin, behaves equivalently to a black-hole quantum dot for a merger and emission of gravity waves that affine valence bonding; whether this affects chemical bonding requires an investigation. The information on the twisting z-axis force
For the kind perusal of Royal astronomical society, London
Sankaravelayudhan Nandakumar
Ref:Royal society meeting-New finding on optical wave converted as gravity waves at certain deBrogle reflective angle of 45 degrees-reg [Incident: 110121-000037] news@nature.com
Optical waves could be changed as gravity z twisting waves in Robotic applications-reg [Incident: 110123-000048] news@nature.com
Your call CNSHD807096 regarding Re: Lunar boundary typically converting de Broglie wave scattering as gravity waves during full moon and new moon days-reg has been received Outreach@stsci.edu
Valence electron forming blackhole quantum dots emissions at extremities for chemical bonding by its spin-reg [Incident: 110129-000025]
webmaster@lpi.usra.edu
@40. It doesn't work that way. The formula E = mc^2 refers to energy and mass of the same type. In the simplest case, there are two types of mass: the mass of an object relative to itself (rest mass), and the mass relative to a reference frame. The latter is greater by the “gamma factor” (also called the dilation factor), which is
c / sqrt(c^2 - v^2)
where v is the relative velocity between the object and the reference frame.
The constant mass of a particle is the rest mass, but the kinetic energy associated with a velocity must, by the very definition of velocity, refer to a reference frame. To convert mass of one type to energy of another type, the equation must be modified. (This is like saying that a dollar is 100 cents as a basic rule, but an American dollar is not exactly 100 Canadian cents.)
Putting in the gamma factor, you get
m c^3 / sqrt(c^2 - v^2)
If v >= c, this formula does not give a real number. Specifically, it is infinite for v = c or imaginary for v > c.
So the result of doing a basic calculation of the energy required to exceed c is, to use a cliche, “that there is no” result.
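The divergence can be made concrete with a few lines of arithmetic. This just evaluates the formula from the comment above, E = m c^3 / sqrt(c^2 - v^2), numerically (the 1 kg mass is arbitrary):

```python
import math

C = 299792458.0  # speed of light in m/s

def total_energy(m, v):
    """Relativistic total energy E = m * c^3 / sqrt(c^2 - v^2),
    i.e. gamma * m * c^2; real-valued only for v < c."""
    if v >= C:
        raise ValueError("E is infinite at v = c and imaginary beyond it")
    return m * C ** 3 / math.sqrt(C ** 2 - v ** 2)

m = 1.0  # kg, arbitrary test mass
rest_energy = m * C ** 2
# The ratio E / (m c^2) is the gamma factor; it blows up as v -> c.
for frac in (0.5, 0.9, 0.99, 0.999):
    print(frac, total_energy(m, frac * C) / rest_energy)
```

At v = c the square root is zero and the expression is undefined, which is the "no result" the comment is pointing at.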
Ethan, we are all entitled to speculate, but allowing something like #54 is going too far. It looks like an output of Scigen.
Oops. I meant Chad, not Ethan.
Seriously, this seems so based on theory. The experimental result is so tiny that it allows all kinds of mistakes, of which this article mentions some, but not all that I've seen suggested. It may also be the result of several mistakes, or unknowns, combined.
Now, if this experiment had given a larger effect, easy to identify as FTL, then I might have been convinced; but this seems to rest in its theoretical, mathematical definition-space, in which those doing it more or less expect all they've done to be 'correct', no matter if they ask us to look at it too. Well, how can you expect anyone not to look at results suggesting FTL for particles with mass? And those of you enjoying mathematics will find their mathematical formulas correct 🙂
Does this mean that complicated mathematics based on assumptions of 'correctness', as in their prior measurements of 'distance', has taken precedence over the experimental facts we had before? That a full-bodied mathematical hypothesis has to be 'correct' if the math is found to be correct?
Give me a few more easily defined experiments, preferably done differently from each other, and we will see if they get the same results. And you had better remember that this is all at the 'edge' of what seems experimentally definable.
Because the thought that mass goes FTL, whereas light speed is a constant, hurts my head terribly. As it should yours.
The beauty of science is that it has no dogma.
One can question everything, even the speed of light.
------------------------------------------------------------------------------
Let us accept, until proven otherwise, that the measurements made by CERN are correct, and
let us suppose that neutrinos travel at the same speed as light.
What happened during the measurement of the neutrinos' speed?
There is positive gravitation and negative gravitation!
If light travels a distance of 730 km along the surface of the Earth, its path will not be a straight line but will follow the curvature of our planet's surface, and it will undergo an equal attraction over the whole journey, hence an equal braking.
When the neutrino travels a distance of 730 km, its path is a straight line. Only at departure and arrival does it undergo an attraction equal to that on the photon, namely the force along the diameter of the Earth. But while it penetrates matter it undergoes an attraction weaker than the photon's, so it is braked less and covers the 730 km more quickly.
Note that these are not the same 730 km.
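The claim that the two 730 km paths are not the same can be checked with a little spherical geometry; a sketch assuming a round mean Earth radius of 6371 km (note that the OPERA baseline is in fact the straight-line chord through the crust):

```python
import math

R = 6371.0e3    # mean Earth radius in metres (assumed round value)
arc = 730.0e3   # CERN-to-Gran Sasso distance along the surface, metres

# Straight-line chord subtending the same arc on a sphere of radius R:
half_angle = arc / (2.0 * R)
chord = 2.0 * R * math.sin(half_angle)

# The chord is only a few hundred metres shorter than the arc.
print(arc - chord)
```

The difference comes out around 400 m, far too small (and in the wrong place) to explain a 60 ns timing anomaly by itself, but it shows the two paths really do differ.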
Why does the neutrino undergo a weaker attraction while penetrating our planet?
We would say that the matter below it is the part containing the center of the Earth. As it penetrates the Earth, part of the matter is above it, which creates an attraction opposite to that produced by the matter below it; and since the matter below it is smaller, there will be less braking.
Conclusion: while the neutrino penetrates matter on its straight-line path, the braking, compared with that of the photon, is reduced for two reasons:
the mass below it is reduced, hence less positive attraction = less braking;
the mass above it creates an attraction in the other direction, a negative attraction to be subtracted from the positive attraction, which reduces the braking further, so it is braked still less.
Does the neutrino have the same speed as light?
One would have to find a place, in vacuum and far from all celestial bodies, to avoid the effect of gravitation on both the light and the neutrino.
Best scientific regards.
P. A. Sarantopoulos.