I’m currently working on a book about relativity, but I still spend a fair amount of time thinking about quantum issues. A lot of this won’t make it into the book, because I can’t assume people will have read How to Teach Physics to Your Dog before reading whatever the relativity book’s title ends up being, and because explaining the quantum background would take too much space. But then, that’s what I have a blog for…
Anyway, the section I was working on yesterday concerned causality and faster-than-light travel, specifically the fact that they don’t play well together. Given Tuesday’s Toddler Toy Teleportation post, it was inevitable that I would start thinking about EPR-type entanglement experiments and how you would deal with them in the context of relativity. So, here’s the scenario: imagine we have a pair of quantum particles– electrons that can be in either spin-up or spin-down states, say– and we prepare them in an entangled state such that when they are measured they will always have the same state. If you look for spin-up vs. spin-down, they will either be both up or both down, never one up and one down. If you look for some combination of states– spin-up plus spin-down vs. spin-up minus spin-down, say– you will always find both in the same state, whatever that may be.
Quantum mechanics says that this correlation will always exist, no matter what state you look for, and no matter how far apart the two particles are when you measure them. This effect seems to fly in the face of relativity, and led Einstein to derisively call it “spooky action at a distance.” Unfortunately for Einstein, this prediction has been comprehensively confirmed experimentally– the correlation between entangled states really does exist, and it does not seem to be limited by the speed of light.
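If you want to see that perfect correlation come out of the math, here’s a minimal numerical sketch (my own, not from the book draft), assuming the entangled state is (|up,up> + |down,down>)/√2 and that both particles are measured along the same axis, whichever axis that is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entangled pair: (|up,up> + |down,down>)/sqrt(2), written in the z basis.
# Ordering of components: |uu>, |ud>, |du>, |dd>.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

def spin_basis(theta):
    """'Up' and 'down' eigenvectors of spin along an axis at angle theta in the x-z plane."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return up, down

def measure_both(theta):
    """Measure both spins along the same axis; sample one joint outcome via the Born rule."""
    up, down = spin_basis(theta)
    probs, outcomes = [], []
    for a, la in [(up, "up"), (down, "down")]:
        for b, lb in [(up, "up"), (down, "down")]:
            amp = np.kron(a, b) @ psi        # amplitude for this joint outcome
            probs.append(amp ** 2)
            outcomes.append((la, lb))
    probs = np.array(probs) / sum(probs)
    return outcomes[rng.choice(4, p=probs)]

for theta in [0.0, np.pi / 2, 0.37]:         # z axis, x axis, some arbitrary axis
    results = [measure_both(theta) for _ in range(5)]
    print(f"axis angle {theta:.2f}:", results)
```

Every run prints matched pairs (both “up” or both “down”), never one of each, no matter which angle you feed in.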
Of course, this becomes really weird when you look at this in the context of relativity, which got me distracted for a little while yesterday afternoon. I think I more or less understand it now, and having spent some time trying to get my head around it, I figured I might as well type it up for the blog. So let’s imagine a scenario where we split our two particles up, and give one to a dog sitting at rest in a laboratory, and the other to a cat who flies off in a UFO at half the speed of light. Both dog and cat wait some time before measuring the state of their spins, and then much later get back together to compare their results. We can represent this scenario in a diagram that looks like this:
This is what’s called a spacetime diagram, and as you might guess from the clever name, it plots what happens to the dog and the cat in both space and time. The distance they move in space is represented along the horizontal axis, while the time that passes is represented along the vertical axis. The diagram is scaled so that light, shown as the two red dashed lines, follows a line that is 45 degrees from the vertical, moving one foot to the right or left for every nanosecond of time moved upward, into the future.
The dog’s motion is represented by the brownish vertical line– she doesn’t move at all in space, but marches relentlessly into the future at a rate of one nanosecond per nanosecond. The cat’s motion is the black line, which moves in both space and time, one foot to the right for every two nanoseconds upward into the future. They start out at the same place at the same time, at the origin of the axes drawn here, then the cat moves off while the dog stays put.
According to the dog, she measures her spin at the point in space and time marked by the “1,” while the cat’s measurement is a short time later and some distance to the right, at the point marked by the “2.” The horizontal dashed lines represent particular instants in time, according to the dog, so we can clearly see that the dog makes her measurement, and then the cat makes his about half a nanosecond later.
The dog, then, would say that her measurement determined the outcome of both spin measurements. If she measured spin-up, that instantly and absolutely fixed the cat’s spin in the spin-up state, so the cat’s subsequent measurement only confirmed the already determined state of the spin. If she measured spin-down, the cat would inevitably obtain spin-down.
OK so far? Here’s where it gets weird: relativity tells us that the dog and the cat disagree about the passage of time. Specifically, the dog looking at the cat’s clock thinks that the cat’s clock is running slow, while the cat looking at the dog’s clock thinks the dog’s clock is running slow. More importantly, they also disagree about the synchronization of clocks– if the dog prepares a whole bunch of clocks at different positions so they all show the same time, the cat will say that the clocks are out of synch, with more distant clocks set a bit ahead or behind, depending on the direction of motion. And if the cat prepared a similar set of clocks at different positions moving along with the cat at the same speed, the dog would say that clocks at different positions showed different times.
We can show this on our diagram by doing basically the same thing we did with the dog, and drawing a bunch of lines indicating single instants in time according to the cat. The resulting lines look like this:
Points representing a given time, according to the cat, fall on slanting lines in the diagram. This has a very interesting consequence for our entangled state measurement. The two measurement events are plotted at the same points on the graph, but as you can see by counting lines or just looking at the numbers, the two measurements take place in the opposite order according to the cat. The cat would say that he measured the state of his spin first, and then the dog measured hers half a nanosecond later.
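You can check the frame-dependence of the ordering with a quick back-of-the-envelope Lorentz transformation. The coordinates below are made up for illustration (the post doesn’t pin down numbers), but they match the setup: the cat moves at 0.5c, distances are in feet and times in nanoseconds so that c = 1 foot per nanosecond, and the cat’s measurement happens half a nanosecond after the dog’s in the dog’s frame:

```python
import numpy as np

c = 1.0          # 1 foot per nanosecond, as in the diagrams
v = 0.5 * c      # the cat's speed
gamma = 1 / np.sqrt(1 - (v / c) ** 2)

def to_cat_frame(t, x):
    """Lorentz transform from the dog's frame to the cat's frame."""
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

# Hypothetical coordinates (dog's frame): the dog measures at event 1,
# the cat measures half a nanosecond later, on the cat's worldline x = v*t.
event1 = (2.0, 0.0)          # (t in ns, x in ft): dog's measurement
event2 = (2.5, 0.5 * 2.5)    # cat's measurement, 0.5 ns later in the dog's frame

t1p, _ = to_cat_frame(*event1)
t2p, _ = to_cat_frame(*event2)
print(f"dog frame: event 1 at t = {event1[0]} ns, event 2 at t = {event2[0]} ns")
print(f"cat frame: event 1 at t' = {t1p:.2f} ns, event 2 at t' = {t2p:.2f} ns")
# Output: event 2 now comes *before* event 1; the order has flipped.
```

In the dog’s frame event 1 comes first; in the cat’s frame event 2 comes first. The flip is only possible because the two measurements are spacelike separated, so no light signal could connect them in either order.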
From the cat’s point of view, his measurement is the one that determines the state of both spins, while the dog’s just confirms the result. They both get spin-up, says the cat, because his spin-up result instantaneously put the dog’s spin into the spin-up state.
This would seem to be a huge problem. They can’t both be right, so whose measurement is it that does the job? The really weird thing is that it just doesn’t matter.
Why doesn’t it matter? Because no information has been passed from dog to cat, or from cat to dog. The only way they know about the correlation between their results is when they get back together later on and compare results. (We can imagine that they do this many times, with many different entangled pairs, so they can determine the probabilities of all the possible outcomes.) At that point, the pattern becomes clear, but until they compare lists of results, all they have is a random string of spin-up and spin-down results (or whatever other measurements they want to look for).
There isn’t a clear causal relationship between the two measurements, because there doesn’t have to be. And, really, it would be problematic if there were a relationship, because of what we see from the diagrams. If the dog’s view that her measurement determined the cat’s result were somehow the correct one, then the cat would be in the odd position of seeing the effect (his measurement) take place before the cause (her measurement). If the cat’s view were the correct one, then the dog would have the same problem with causality. Either way, it would be bad for physics– if you end up with a theory where effect can precede cause, it’s really difficult to construct any kind of coherent model of the universe.
“So, okay,” you might be saying, “the question of whose measurement caused the correlation is moot, because the correlation was there from the start. Both measurements were always going to come out spin-up, no matter what order they were made in, so it’s not surprising that they see a correlation.” This is a nice idea, and it’s more or less what Einstein was hoping for when he wrote about this kind of scenario with Boris Podolsky and Nathan Rosen in 1935. This would be a “local hidden variable” theory, in which the outcomes of the measurements are predetermined but unknown to the dog and the cat (the “hidden variable” part), and do not depend in any way on what the other animal does or when he or she does it (the “local” part, because the measurement is determined only by factors in the immediate vicinity of the measurement).
It’s a nice and comforting idea. It’s also dead wrong. In the 1960s, the Irish physicist John Bell thought carefully about this problem, and proved a mathematical theorem showing that the correlations predicted by quantum mechanics cannot possibly be reproduced by a local hidden variable theory. That is, there are experiments you can do whose results cannot possibly be explained by a model in which the results of the measurements were determined in advance. The result of the cat’s measurement depends on what the dog measured, and vice versa.
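The sharpest version of Bell’s argument is the CHSH inequality: any local hidden variable model obeys |S| ≤ 2 for a particular combination of correlations, while quantum mechanics predicts 2√2 ≈ 2.83 for well-chosen measurement angles. Here’s a small sketch of the quantum prediction, again assuming the (|up,up> + |down,down>)/√2 state; the angles are one standard choice:

```python
import numpy as np

# Pauli matrices and the entangled state (|up,up> + |down,down>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
psi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def spin(theta):
    """Spin operator along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Quantum correlation <psi| spin(a) x spin(b) |psi>."""
    return psi @ np.kron(spin(a), spin(b)) @ psi

# Measurement angles for the dog (a, a') and the cat (b, b').
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(f"CHSH value S = {S:.3f}")   # ~2.828; any local hidden variable model gives |S| <= 2
```

Getting S above 2 is exactly the thing that no predetermined list of outcomes can do.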
People have done these experiments– first John Clauser and colleagues in the 1970s, followed by a really beautiful series of experiments by Alain Aspect in the early 1980s, and numerous others in the intervening decades. These experiments show conclusively that the quantum version of events is correct, and local hidden variables can’t be the real explanation (and there are other experiments with things like GHZ states that show the same sort of thing– local realism is pretty well dead). Nobody has done this yet with one of the measurements being made by somebody moving at half the speed of light (I’d love to see the grant proposal for that one…), but there’s no reason to expect that the result would be any different (in fact, you can argue that the existing experiments already cover this, given that there must be some set of observers moving at high speeds who would disagree about the order of Aspect’s measurements).
Which leaves us with a really strange situation. The results of each measurement depend on the other measurement, but the order in which those measurements are made is different according to the two different observers. Somehow, they both get the same result (or results in accordance with Bell’s theorem, if they’re not both measuring the same thing), even though the order in which the measurements are made is different for the two observers.
How do you resolve this? That’s why this isn’t going in the relativity book: because I don’t have a good answer. It just happens to work out that way, because that’s how the universe works. If you have further questions, all I can say is– LOOK! THE WINGED VICTORY OF SAMOTHRACE! (scampers off).
The one thing you can fall back on is that it doesn’t really matter how the measurements get correlated, because there’s no way to use it to send a signal from one observer to the other. As long as the only effect of entanglement is a correlation between two lists of random numbers, there’s no problem for causality or relativity, just for human physicists trying to fit some kind of coherent narrative to the whole thing.
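That last point is worth making concrete. Here’s a minimal check (same assumed state as in the sketches above): whatever axis the cat chooses to measure along, the dog’s own statistics remain a 50/50 coin flip, so nothing in her list of results, taken by itself, tells her what the cat did or even whether he did anything:

```python
import numpy as np

psi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)   # (|uu> + |dd>)/sqrt(2)

def dog_up_probability(cat_angle):
    """Probability the dog sees 'up' along z, after the cat measures along an axis
    at angle cat_angle in the x-z plane, summed over the cat's possible outcomes."""
    cat_up = np.array([np.cos(cat_angle / 2), np.sin(cat_angle / 2)])
    cat_down = np.array([-np.sin(cat_angle / 2), np.cos(cat_angle / 2)])
    dog_up = np.array([1.0, 0.0])                          # spin-up along z
    total = 0.0
    for cat_state in (cat_up, cat_down):
        amp = np.kron(dog_up, cat_state) @ psi             # joint amplitude
        total += amp ** 2
    return total

for angle in [0.0, np.pi / 4, np.pi / 2, 1.234]:
    print(f"cat measures along {angle:.3f} rad -> P(dog sees up) = {dog_up_probability(angle):.3f}")
# Always 0.500: the cat's choice of measurement leaves the dog's marginal untouched.
```

The correlation only shows up when the two lists are brought together, and that comparison has to happen by ordinary slower-than-light means.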
Nobody has performed an experiment with one of the parties moving at half the speed of light, but an experiment has been done where “One detector is set in motion so that both detectors, each in its own inertial reference frame, are first to do the measurement!”
Experimental test of nonlocal quantum correlation in relativistic configurations, H. Zbinden, J. Brendel, N. Gisin, and W. Tittel
http://pra.aps.org/abstract/PRA/v63/i2/e022111
I think I follow you. But you seem to gloss over a point that is endlessly confusing to me. You say
Either way, it would be bad for physics– if you end up with a theory where effect can precede cause, it’s really difficult to construct any kind of coherent model of the universe.
I know there is a reasonable relativistic definition of “precede”, and it doesn’t quite match the normal non-relativistic definition. But is there a reasonable definition of “effect” or “cause”? What do these words even mean, precisely, in physics?
Chad, This is precisely the reason for Everett’s many world interpretation of quantum mechanics – it avoids action-at-a-distance paradoxes. You may not find it easy to swallow philosophically, but it is the only way that Quantum Mechanics can be consistent with Special Relativity.
Electron 2 sitting on Alpha Centauri does not need to ‘instantaneously’ change its spin state to spin up, because it has always *been* spin up. Or, in the other universe, it has always been spin down.
When you measure the spin of electron 1 here on Earth, you determine the universe you’re in, with the assurance that the rest of the universe is, and always was, consistent with that measurement.
I guess I’m becoming a senile old perv or something…every time I see the title of this post in the “Now on ScienceBlogs” row, it always seems to read “Snooky Action at High Speed.”
Okay, now that I’m here, I might as well read the whole thing…
As Bill K points out, your resolution of this is going to depend on your interpretation of quantum theory and also, to some extent, on your interpretation of relativity. Yes, there are indeed different interpretations of relativity that people argue about, although to a lesser extent than interpretations of quantum theory.
The situation is problematic, I think, for anyone who believes that the wavefunction is ontological and also that measurement outcomes are single-valued (i.e., non-Everettian psi-ontologists, to use Chris Granade’s terminology). This includes spontaneous collapse theories and (dare I mention it again?) Bohmian mechanics. Believers in these theories tend to think that relativity is only an effective theory. There is some preferred reference frame at the fundamental level, which means that simultaneity is not relative and either the cat is right about the causal ordering, or the dog is right about the causal ordering, but not both. Of course, the effects of the preferred reference frame are supposed to get “washed out” by quantum probabilities, which is why we can’t detect it in practice. Of course, this view is rather unappealing to most physicists, but it is not ruled out by experiment.
To get a fundamentally Lorentz invariant description you either have to be an Everettian, or you have to view the quantum state epistemically. I personally prefer the latter option. Of course, the question of whether the cat, dog, or both are right about the causality is ill defined in these approaches. In Everett it’s essentially because they were mistaken that there was a single outcome and in epistemic approaches it’s because the ontology is too vague to even parse what the question means. This is either deliberate (as in Copenhagen and its variants) or just because we haven’t worked out the full details of the interpretation yet.
Recommended reading:
John Bell, “How to teach special relativity” in Speakable and Unspeakable
Harvey Brown, “Physical Relativity. Space-time structure from a dynamical perspective”
Tim Maudlin, “Quantum Nonlocality and Relativity”
Note: I don’t necessarily endorse the views expressed in these texts, but the first two lend support to the view of relativity espoused by Bohmians/GRW-theorists, whilst the last considers just about every possible way under the sun that nonlocality can be made consistent with relativity, so they are definitely good things to read if you want to take the philosophical questions seriously.
This experiment can also be used to disprove the idea that you can “choose” the results of a wavefunction collapse (as pseudoscience advocates claim).
For example, if the dog could choose the results of her measurements, then she could choose a series of zeroes and ones forming a coded message. The cat’s measurements have the same results, so the cat would see the dog’s message. In this way, we could send messages backwards in time and start killing grandfathers and stuff.
I understand that information cannot be transmitted in the correlation of random measurements. After all you have to see both ends in order to see the correlation. But what about this experiment:
http://www.fortunecity.com/emachines/e11/86/qphil.html
I’m looking at the one with two down converters. Blocking the idler1 beam causes the interference pattern at the signal detector to disappear. Now an interference pattern can be observed locally, unlike a correlation between distant measurements. So it seems real information is being passed. The beam lengths are the same so it isn’t FTL communication. But nothing travels between idler1 and the signal detector. There is no causal connection.
Am I understanding this correctly?
Electron 2 sitting on Alpha Centauri does not need to ‘instantaneously’ change its spin state to spin up, because it has always *been* spin up. Or, in the other universe, it has always been spin down.
While I agree that the interpretation you favor will influence the way you think about the problem, I’m not really comfortable with this phrasing, which is either too local realist (the state can’t be completely determined in advance, because you can do Bell tests with time-varying analyzers such that the measurement being made is changed after the particles have left the source) or non-local in time as well as space (with the electron leaving Earth somehow knowing in advance which measurement will be made at Alpha Centauri umpteen years later).
I agree that Many-Worlds offers some advantages in thinking about this, but I’m not sure how to make this fit with the way I like to think about Many-Worlds. If I come up with a better way of describing it, I’ll post it.
Matt Leifer wrote:
This includes spontaneous collapse theories and (dare I mention it again?) Bohmian mechanics. Believers in these theories tend to think that relativity is only an effective theory. There is some preferred reference frame at the fundamental level…. Of course, the effects of the preferred reference frame are supposed to get “washed out” by quantum probabilities, which is why we can’t detect it in practice. Of course, this view is rather unappealing to most physicists, but it is not ruled out by experiment.
I find it really hard to believe this isn’t ruled out by experiment. Bounds on the violation of Lorentz invariance are very strong. I’m not sure I understand what it means for the effect to get “washed out”, though. Why would the existence of a preferred reference frame not imply a Lorentz-non-invariant effective theory?
At any rate, I’m glad this post implicitly makes use of the important fact that the natural units in which c=1 are those where we measure space and time in feet and nanoseconds 🙂
I am in agreement with Chad RE: the supposed Many-Worlds fix for this problem. The version presented by Bill K seems to be a hidden variable theory. I admit to not knowing enough about relativistic QM to know how to resolve this problem, but the way I think about it is that we use QM to predict the outcomes of measurements, which it does in all these cases despite our human griping about causality and local realism. The hidden variable theory fails because it attempts to insert information (albeit inaccessible) about the outcome of a measurement before it has been made, which simply cannot be there given the quantum nature of the observables (I’ve always felt the tension between, for instance, trying to give a particle a definite spin value along two orthogonal axes and what we know about the observables’ failure to commute should have been a warning sign that the theory couldn’t be right).
So I interpret the theory as telling us that both measurements will be the same (or correlated/anti-correlated to some degree), and that we still cannot tell what either one will be beforehand, regardless of what frame we are in.
Geez, feet per nano-seconds, I’m having a forehead smacking moment. Good one (and excellent post).
Existing in two universes is not the same thing as having a hidden variable. When the dog & cat zoom off carrying their particles, there are actually two versions of each. When dog A checks his particle, he’s already in the same universe as cat A. He’ll never fly back home and find cat B; that cat’s inaccessible to him forevermore — and was ever since the two particles were “entangled.”
In fact, you could say the dog’s experiment doesn’t measure the particle’s spin so much as it reveals which of the two dogs he is.
What Eric said.
It certainly seems like having an outcome (or two) determined in advance would run afoul of Bell’s Theorem. But as I understand this — dimly — the math doesn’t work that way. According to the No World-eaters Interpretation, a measurement on Earth has two effects. It changes the wavefunction in a way that I think we can regard as local, so the blobs of amplitude corresponding to different outcomes or superposed states of one particle no longer interact with each other (much). And it also tells you which blob you live in. The latter changes the probability of what you would find if you could discover the state of the cat, off near Alpha Centauri, even though it doesn’t affect the cat itself. And this gives us the observed probabilities if the NWI gives us correct probabilities at all. Which latter question takes us into a debate that I still don’t understand.
Entanglement tells us that the values that we measure don’t really reflect the way the universe works. We measure “spin up” or “spin down”, but the actual QM value is “the same spin as that other particle” or “the opposite spin as that other particle”. These values propagate causally. You can’t entangle particles that can never communicate with each other. They have to be able to share information. The QM state can’t travel faster than light any more than a particle can.
The fact that nature has values of the form “the same as X” or “the opposite of X” where X hasn’t been measured yet, but still respects relativistic causality, suggests that there are more constraints on possible states than our intuition suggests. Maybe the holographic theorists are onto something. Our world is not 3D or 4D, but rather 2D or 3D.
I guess people who like labeling things would say I follow the Many-Worlds interpretation. I think of it as just quantum mechanics (or, more to the point, quantum field theory) without any added structure. The key point about relativistic quantum field theory that removes any difficulties is that interactions are strictly local. So the entangled state is created by local interactions. The interactions that later entangle these particles with their environment (“measuring” them) are also local. I honestly don’t even understand what the interpretational difficulty is supposed to be; there’s no more puzzle than if I wrote the same number on two scraps of paper and mailed them to two people on opposite sides of the globe who opened them simultaneously. There are no “hidden variables” because the overall state is still in a superposition of both possibilities in the end. Measurements look like they have definite outcomes only because they involve entanglement with a macroscopic apparatus, which is nearly orthogonal to the other possible measurement outcomes so that interference effects are small.
But then, discussions of this always seem to involve either people who unanimously agree or people who say that anyone who thinks like I do just doesn’t appreciate how deep and important the issues are.
I fail to see how manyworlds eliminates the nonlocality, if anything, it makes it even worse because now what happens in another universe influences what happens in this one through the wave function.
Whatever the interpretation in any case, if quantum mechanics is right, you will not be able to get rid of nonlocality. That’s the basic result of Bell. The question is, why is this nonlocality so weak, in the sense that it cannot be used to transmit signals faster than light? The book “Quantum Paradoxes” goes a bit further into the analysis of non-locality and tries to determine if there are other non-local theories compatible with special relativity. From memory, I think the answer is positive, but they tend to be even stronger non-localities than the ones allowed by QM.
@onymous: You do indeed miss the point. Just construct the paper model you suggest and construct the corresponding quantum model for an entangled state. Now see if you can get the same correlations in your paper model as the ones that quantum mechanics predicts. You won’t be able to. Even if you manage to design a more complicated model with more numbers (hidden variables) on the paper than what you can strictly measure, you’ll fail if the experiment is sufficiently complicated.
There’s more going on than just the socks game (when one wears a black sock on his right foot, you automatically know he has a black one on the other). For an extreme example, I suggest you read a good paper on the Greenberger-Horne-Zeilinger state.
Raskolnikov says: “I fail to see how manyworlds eliminates the nonlocality, if anything, it makes it even worse because now what happens in another universe influences what happens in this one through the wave function.”
As I understand it, dog A’s universe doesn’t influence dog B’s universe at all, once they’ve split. Dog A returns to find cat A because they’ve been sharing universe A all along, not because of anything that happened post-split in universe B.
(But hey, you don’t need to buy dogs, cats, and rocket ships to try it out — just use this app I wrote: http://cheapuniverses.com/universesplitter/ )
I was glad you put in the statistical parenthesis “(We can imagine that they do this many times, with many different entangled pairs, so they can determine the probabilities of all the possible outcomes.) At that point, the pattern becomes clear, but until they compare lists of results, all they have is a random string of spin-up and spin-down results (or whatever other measurements they want to look for).” The space-time diagram of all the measurements the cat and dog have to do to check QM predictions will, however, be rather more complicated than your single-measurement stalking-horse diagram.
It’s a fundamental part of the operational use of quantum theory that we have to construct ensembles of measurement results to compare with the probabilistic predictions of a quantum theoretical model for those ensembles. Each entry in an ensemble has to be as similar as possible to each of the other entries. One way to claim that individual measurement events that occur at different places in space-time are “similar enough” for us to say that they are contained in a single ensemble is for us to say that the different places are “the same place” FAPP. Otherwise, the measurement results would not just be 0/1, but would include the position in space-time at which each event happened, and both 0/1 and where each event happened would have to be modeled by QM (and hence we wouldn’t be working with a finite-dimensional Hilbert space).
The interpretation of the relativistic QM/QFT stuff is almost a complete mess. Probably just as well not to put it in a book on relativity, for dogs or otherwise.
@ Eric Daniels: Thanks, you’ve now convinced me that the MWI is completely useless. If the presplit universe is the universe where Schrödinger’s cat is still alive and dead or rather |alive> + |dead> or whatever fancy combination you want, how come I’ve never seen that cat?
From my layperson’s perspective I don’t get how a many-universes interpretation could be privileged or favored over a retrocausality interpretation. Admitting momentarily a role in this for “common sense,” we get:
1) Retrocausality is a no-no because it would lead to dead grandfathers and other paradoxes that are highly uncomfortable.
2) Many-universes is ?less of? a no-no because it doesn’t violate forward causality. (?)
The obvious (admittedly only “common sense”) problem with many universes, is what happens to the observer in universe A when a measurement causes them to diverge into universe B. We don’t see observers blinking out here, or see other observers blinking in here. We don’t see observer A suddenly lose half their mass at the moment the measurement is made, which would coincide with a half-massed observer A popping into universe B. Nor do we see half-massed observers popping into our universe. (Pun intentional.)
So then, assuming that two human bodies A and B exist in their respective universes A and B, what is it that makes the transition between universes at the point in time when a measurement is made?
Do observers A and B instantaneously switch places between universe A and B, so each appears to be following the track they were on whilst making their respective measurements? Or does some aspect of observer A switch into universe B, perhaps simultaneously with the equivalent aspect of observer B switching into universe A? (And doesn’t that sound uncomfortably like “the soul”?)
It seems to me that these problems vanish if we assume that information is a fundamental constituent. And pursuing that line of thought, we eventually run into Wheeler’s “it from bit” theory that information is not “a” fundamental constituent, but “the” fundamental constituent from which everything else arises.
(And what does mainstream physics think of Wheeler’s theory?)
If information is “a” or “the” fundamental, these problems appear to be solved. Though, our paradoxical grandfather might get a telegram from the future telling him that if he goes for a drive with his son they’ll have a fatal wreck, which he thereby chooses to avoid. This preserves the present we know of, where his grandson is sitting in his study, puffing on his pipe and contemplating the nature of Life, the Universe, and Everything.
We might postulate that the only information that successfully propagates backward across time is information that is consistent with observables in the present. Thus, of two backward telegrams sent by a grandson to a hypothetical grandfather, only the one that results in the grandfather surviving gets through (the one suggesting that he should go for a drive with his son that night, which would have them both die in a fatal wreck, mysteriously does not get through).
If you also assume that information has a slippery relationship with thermodynamics (for example it takes the same amount of energy to create the bit string 10101 as it does 01010, or to communicate ordered bits as random bits), then you don’t need a vast quantity of energy from some unknown source to make this work. Contrast to many-universes: whence comes the energy to produce all of these universes every time a measurement is made?
That is, it’s inconceivable that the creation of universes does not require vast quantities of energy; and we have no accounting of the source of such energy. This to my mind is a bigger problem than grandfathers getting retrocausal telegrams from their grandsons, that coincide with the actual course of events leading to the grandsons’ existence.
—
Now about Bell’s theorem.
The way I explain the “no signaling” characteristic is as if your two observers Alice and Bob are communicating via a one-time-key cipher, where the ciphertext is communicated instantaneously but the shared key material travels at c or below.
If you set up the experiment such that Alice can choose whether to measure the state of each of a series of photons, thereby causing Bob’s equivalent photons to behave identically: Bob receives a string of bits but has no way of knowing which ones were the result of Alice making a measurement…. until Alice returns and tells him which photons she measured, i.e. which bits she flipped.
Bob is holding the complete “message” but effectively in the form of a ciphertext to which he has no key until Alice returns.
However some hypothetical third party we’ll call Carlos (or perhaps “God”, since if “souls” are acceptable in the many worlds theory, then perhaps deities are acceptable here?:-) might exist in a frame of reference to see what’s going on at Alice’s end and at Bob’s end. Carlos would have the complete ciphertext (that which Bob has received) and also have the complete keystream (that which Alice is holding), and as a result know the message (that which Alice impressed upon the bit stream) before Alice was able to reach Bob at c or below to tell him.
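To make the cipher analogy concrete, here is a plain classical one-time-pad sketch (nothing quantum in it, and it only illustrates the analogy, not the actual quantum no-signaling argument): the “ciphertext” Bob holds is statistically indistinguishable from random noise until the key reaches him by ordinary means.

```python
import secrets

def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"MEET AT NOON"                     # what Alice wants to convey
key = secrets.token_bytes(len(message))       # the shared pad, travelling at c or below
ciphertext = xor(message, key)                # what Bob is holding in the meantime

print("ciphertext:", ciphertext.hex())                 # looks like random bytes to Bob
print("decrypted :", xor(ciphertext, key).decode())    # readable only once the key arrives
```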
—
Don’t hesitate to tell me if any of this is wrong, utter BS, or “not even wrong.” I’m not attached to my ignorant speculations.
Disclaimer: At the moment I’m in a common altered state for persons living in Northern California, thus the foregoing may be complete nonsense and I won’t know it until later.
The answer is that nothing happens, except the observer will then know he is in universe A; no “moving” or “transitioning” between universes. You seem to be a bit hung up on the collapse interpretation, where mysterious and magical things happen when observation takes place. No such mysticism in MWI, which is one of the reasons why it should be preferred.
I think the simplest formulation would be that one: forget the idea that causality applies to physical properties. Causality applies to correlations between physical properties.
I am also tempted by going one step further: science is not a description of things, but a description of how things relate ( = of correlations ). This could be said of any language, and science is certainly a product of language.
Finally, Rovelli’s relational interpretation of QM is an interesting take on this problem.
MWI is the quintessence of scientific realism, but also a failure to account for our own existence.
If alternative worlds exist, then my past also exists, and my future as well (they are also wave functions of universes with observers in them), and all possible pasts and futures. Everything exists indifferently. In that view, the concept of existence is so diluted that it becomes unworkable for addressing our common conception of existence (the present, the flow of time, the predictable probabilities of events, etc.).
I know this is about relativity and all, but your frame of reference is completely distracting me from the main point. What I’m referring to is using a “she” dog and a “he” cat. The archetypal dog is male and the archetypal cat is female (even more than the archetypal dog is male).
Once I adjust to this new worldview, I’ll be able to read the rest of the posting.
I know this is about relativity and all, but your frame of reference is completely distracting me from the main point. What I’m referring to is using a “she” dog and a “he” cat. The archetypal dog is male and the archetypal cat is female (even more than the archetypal dog is male).
The archetypal dog may be male, but my specific dog is female. And jealous.
@Raskolnikov: I’m sorry, but I guess I don’t understand your question. In the pre-split universe, there is one living cat, and a chunk of radioactive matter which hasn’t decayed yet. In the post-split universes, there are two cats, one living and one dead, but their timelines are inaccessible to each other. If you find yourself looking for a place to bury your dead cat, another version of you is petting your live one.
@g724: As for the apparent energy from nowhere required to create all these new universes, I think that’s just what it looks like from our point of view… it’s not whole new sets of a gazillion duplicated particles being constantly created out of thin air, it’s more like we’re traversing a multi-dimensional cauliflower which we’re only able to see one small part of.
Admittedly, MWI isn’t problem-free. But when you look at Chad’s original thought experiment through MWI goggles, doesn’t the paradox pretty much evaporate? And in fact, don’t most of the mysteries of quantum behavior start making much more sense? And for that reason, wouldn’t you want to keep those goggles near-to-hand?
By the way, since I don’t get to talk about this stuff to very many people in my day-to-day life (I’m a Disney animator — not many physicists around here!), I’d like to ask the non-MWI people about quantum computing: How would you explain any successes of quantum computing without buying into MWI? My understanding is that without MWI being valid, it wouldn’t work at all. Am I wrong? Naive?
@Eric: Well, as onymous points out, MWI just calls the wavefunction real and says no “superposition” ever disappears. (Hence, No World-eaters.) Quantum computers that work, and those that stop working due to decoherence, demonstrate the major causes that explain our observations according to MWI. And this seems conclusive if you already view MWI as the simplest theory. But if you choose to believe that the Born problem or some other piece of evidence will require a new theory, you can keep right on believing that. (Note however that the Born rule only seems like a problem in MWI because every part of the math until that point makes sense. In Copenhagen we could say, ‘Squared modulus, why not, none of this means anything anyway.’) And of course, people who don’t respect Newton’s method or the laws of probability can just add logically unnecessary ad hoc hypotheses, as long as they take care not to contradict the evidence.
As far as conservation of energy goes, probably someone who knows the topic better should address this one. I think it doesn’t apply because we’re not talking about creating anything. The interpretation admits no causes except amplitude flows in configuration space. And we know (right?) that conservation of energy applies to all configurations linked by amplitude. In MWI, we might take that as the meaning of the law.
I’m sure this is a stupid question, but the thing that confuses me about phenomena like this is why we don’t try to explain them in terms of hidden states (at least hidden to our current understanding/technology).
Those in the know seem to be unanimously of the opinion that the “randomness” of these phenomena is resolved only when its observed. I’m sure they’re correct, I just would like to know what evidence leads to that conclusion.
For example, we think radioactive decays are uncorrelated (except when they aren’t; ignore ones that are a result of being hit by stray products of an earlier decay) and that decay for an individual nucleus has the same probability of occurring at any time. (Correct?) Wouldn’t we expect to observe the same thing if each nucleus had a hidden “time to live” state, and the process that synthesized those nuclei resulted in the appropriate distribution of the initial values of that state? What sort of experiment has distinguished between these two cases? Or, is it impossible even in principle to distinguish those cases, and we have simply decided to adopt one perspective for simplicity?
(This is not a flame post, I genuinely would like to know what I am missing that seems clear to many others.)
@Douglas McClean
We know that the “randomness” is not resolved until we observe it, because each measured state for an observable is a superposition of states for another observable, and because we could have chosen to measure another observable. We know that a measured state is a superposition for other observables because superposed states can interfere with each other, and these interferences have a statistical effect on the outcome.
I hope this is clear enough.
“relativity tells us that the dog and the cat disagree about the passage of time”
So…what IF (I don’t know if that would be possible) you use entangled clocks? A property changing over time? That way entanglement would mean the “same” measurement for both of them at the same time, but of course the elapsed time is not the same for both…
Question: Is the time/space distance between the cat and dog (points 1 and 2 on the graph) the same as the time light would need to cover the distance?
In other words, is the spin state actually being propagated between the two?
I ask because one explanation I read (many, many moons ago) was that the interference pattern (or lack thereof) was made possible because a photon is actually part of a photon/anti-photon pair (which is not based on charge). The anti-photon being a photon that is traveling backwards in time. So the photon (or in this case, spin state) goes from one to the other at light speed, but the converse is also true, just backwards in time.
That being why they can both say that they measured first and both be wrong and both be correct.