That’s the bill for the time that I spent on deciphering his supposed falsification of decoherence. I don’t want anyone to fall for his false argument, so here’s the correct explanation of the scenario, to save other people the trouble.
The center of his so-called “proof” is this modified Mach-Zehnder Interferometer:
Light enters at the lower left, is split by a beamsplitter (which I’m representing as a beamsplitter cube, because that’s what I usually use, but it could be anything), redirected by two mirrors to a second beamsplitter where the beams A and B are recombined, then the recombined outputs C and D are themselves recombined at a third beamsplitter, after which they are detected at points E and F. The greenish box represents a phase shift caused by the environment, which people (and dogs) who understand quantum mechanics say causes decoherence, which ultimately makes the outcome look classical, with a 50/50 chance of showing up at either C or D rather than showing an interference pattern.
If you follow the link above, you’ll find a bunch of inordinately confusing notation attempting to describe this scenario mathematically, leading to the claim that the third beamsplitter somehow “undoes” the effects of decoherence. This is because Neil has done the math wrong, and leaped to interpret his wrong result as support of his predetermined conclusion.
I’ll go through the correct math below the fold, for those who care. If you don’t ordinarily like seeing equations full of complex exponential functions, skip the rest of this post. There’ll be something more enjoyable later.
The first step is to establish what the wavefunctions are at points A and B, immediately before the second beamsplitter:
(Apologies for the big clunky graphics, but I don’t have a better way of getting these in quickly.)
The wavefunction at A involves three exponential factors, one for the phase shift of π/2 that the beam gets on reflection from the first beamsplitter, one for the phase shift φ due to interaction with the environment, and one factor e^{ikL} due to the propagation of the wave. The wavefunction at B has not been reflected or shifted, so it just gets the e^{ikL} for the propagation. To keep things simple, we’ll assume all of the path lengths are equal.
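(Written out in this notation, and taking the first beamsplitter to be 50/50 so that each output carries a factor of 1/√2, these should come to something like Ψ_A = (1/√2) e^{iπ/2} e^{iφ} e^{ikL} and Ψ_B = (1/√2) e^{ikL}.)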
The second beamsplitter combines these two beams to form the outputs at C and D that are usually the end of a Mach-Zehnder experiment. Taking these one at a time, we have:
The wavefunction at C is the combination of the wavefunction from A with an additional phase shift of π/2 due to the reflection, and the wavefunction from B with no additional phase shift. When you add these together, the two phase shifts of the A beam give a total phase shift of π, which is a factor of -1. Taking the norm of the wavefunction, to get the probability of detecting the light at point C, we end up with a sinusoidal function of the phase shift φ, exactly as we expect for an interference experiment.
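(In the same notation, that works out to roughly Ψ_C = (1/2)(e^{iπ/2} e^{iφ} e^{iπ/2} + 1) e^{ikL} = (1/2)(1 - e^{iφ}) e^{ikL}, so that |Ψ_C|² = (1 - cos φ)/2 = sin²(φ/2).)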
The same process at D gives:
Here, the extra reflection phase shift goes on the B wavefunction, and as a result, we get a cosine rather than a sine, giving us the required phase difference between the two outputs. No matter what the phase shift φ is, the two sum to one, and when one is at a maximum, the other is at a minimum. That’s interference, which is unmistakable wave behavior.
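(Roughly, Ψ_D = (1/2)(e^{iπ/2} e^{iφ} + e^{iπ/2}) e^{ikL} = (i/2)(1 + e^{iφ}) e^{ikL}, so |Ψ_D|² = (1 + cos φ)/2 = cos²(φ/2), and the two probabilities sum to one.)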
If there were no shift, that is, φ=0, the wavefunction would be exactly 1 at D and exactly 0 at C. If you think about this process in a particle picture of light, rather than a wave picture, this means that each individual photon entering the interferometer would always end up at D. This is very different from the classical particle picture, which would have the particle end up at C 50% of the time and D 50% of the time.
“Decoherence” is the name given to the process by which random interactions with the environment destroy the coherent interference that leads to the quantum result, and push the system back toward the classical 50/50 result. In the particle picture, we can think of this as repeating the single-photon experiment many times, each time with a different value of φ. The final probabilities are obtained by adding together the results of many repeated measurements with individual photons.
To account for the effects of decoherence mathematically, we would average over some range of φ depending on the strength of the environmental interactions. If the range of φ is large enough (2π or more), both sin² and cos² average to 1/2, giving us the 50% probability of ending up at either detector that we expect for a classical situation. Thus, the random phase introduced by environmental interactions makes the quantum waves look like classical particles.
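For anyone who wants to see that averaging explicitly, here is a minimal numerical sketch (Python with numpy; the sin²(φ/2) and cos²(φ/2) forms are the detector probabilities sketched above) that draws a fresh random phase for each shot:

import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2 * np.pi, 100_000)  # a fresh random environmental phase for each shot

p_C = np.sin(phi / 2) ** 2  # detection probability at C for a given phi
p_D = np.cos(phi / 2) ** 2  # detection probability at D for a given phi

print(p_C.mean(), p_D.mean())  # both come out very close to 0.5 once phi is fully randomized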
(Strictly speaking, if you really wanted to investigate the effects of decoherence in a meaningful way, you would do this all with density matrices rather than wavefunctions. There are limits to the amount of mathematical effort I’m willing to put into this, though.)
But Neil thinks he can prove something by adding an additional beamsplitter, so let’s plunge ahead to point E:
The wavefunction at E is a combination of the wavefunction from C with an additional reflection phase shift, plus the wavefunction from D with no extra shift. When you add these together, something odd happens– the terms depending on φ cancel out, leaving you with only a single term in the wavefunction. Without the second term that was present at C and D, there is no interference– when you take the norm of the wavefunction, all the complex phases cancel out, and you’re left with just a constant probability of 1/2.
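(Sketching the algebra with the expressions above: Ψ_E = (1/√2)(e^{iπ/2} Ψ_C + Ψ_D) = (i/√2) e^{ikL}, since the e^{iφ} pieces of the two inputs cancel, and so |Ψ_E|² = 1/2.)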
So, what happens at F? Well, the phase shift due to reflection is on the other path, but the net effect is similar:
Here, we find that the constant terms cancel out, leaving only the term that depends on φ. But again, without a two-component wavefunction, there is no interference. When we take the norm of the wavefunction, the complex phases cancel, and we end up with 1/2 again.
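(Roughly, Ψ_F = (1/√2)(Ψ_C + e^{iπ/2} Ψ_D) = -(1/√2) e^{iφ} e^{ikL}, and the overall phase drops out when you take the norm, leaving |Ψ_F|² = 1/2.)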
What’s going on, here? Loosely speaking, the amplitudes of the wavefunctions at C and D are out of phase with one another by π/2. If you do a little bit of algebra on the equations above, you can easily make Ψ_C look like a sine function, and Ψ_D look like a cosine. When you recombine those two, the additional phase shift due to reflection puts them a full π out of phase with each other– the sine becomes a -cosine, or the cosine becomes a -sine. When you add those together, they destructively interfere, wiping out the earlier interference effect. It’s true that, as Neil claims, the phase shift introduced by decoherence doesn’t matter for the final result, but that’s because there is no interference.
(This depends on the path lengths all being equal. If there’s an overall path length difference, you should still get interference, but the math gets nasty. Update: It does not depend on the reflectivity of any of the beamsplitters– all you get from unequal intensity is a lowering of the contrast in the interference signal at C and D.)
Where’s Neil’s mistake? His fundamental mistake is that he’s too attached to his notion that decoherence is wrong, and too enamoured of his own cleverness. When his incorrect math seemed to align with his pet notion, he grabbed onto that immediately, without checking it over.
Mathematically, though, I believe he screwed up the reflection phase shift, but I can’t be bothered to sort through his garbled notation to figure it out for sure. When you actually work out the details, his conclusions are wrong, and the normal quantum understanding of the world is correct.
And here’s an important note: I am not good at this sort of thing. I’m an experimentalist, not a theorist, and I do not do research in quantum foundations. The people who do this sort of thing professionally are vastly smarter than I am, and the chances that they somehow missed an effect as simple as adding a third beamsplitter to a standard Mach-Zehnder interferometer are totally negligible. If you see some outsider like Neil claiming to have blown away the last few decades of quantum foundations research, remember that.
This post serves two purposes: first, it’s a public record of the error of Neil’s “disproof” of decoherence. Not that I expect this to shut him up– he’s never bothered to read and understand anything I’ve written on the subject of quantum mechanics before, so I don’t expect him to start now. The next time he starts touting this “falsification of decoherence,” though, you can point to this to show that he’s wrong.
It is also a public statement that I am done dealing with Neil Bates. The errors in math and reasoning in Neil’s post are the sort of thing I expect from a sophomore physics major, and when I grade sophomore level work, I get paid for it. (Old academic joke: “I teach for free. They pay me to grade.”) So, until and unless Neil sends me $160, I will not be reading anything else he writes, in comments here or elsewhere.
Goodbye, Neil. I’d say it’s been fun, but really, it hasn’t.
This is fitting http://xkcd.com/675/
Appealing to the somewhat controversial “everything that can happen does” statement from a couple of posts ago, this can be done really simply in a path-integral formulation.
Every path has the same path length, so we can ignore all the e^{i k L} factors.
Each path gets a factor e^{i pi / 2} = i for each reflection in a splitter and e^{i phi} if it passes through the phase-shifter at A. Thus, the path BCF is given a phase 1, since it never gets reflected by a beam splitter, nor does it go through the phase shifter.
To maintain normalization, each path should get a factor 1/sqrt(2) every time it passes through a splitter, but this also is the same for all paths, since all paths pass through three splitters. Each amplitude therefore gets a factor of 1/sqrt(8); rather than carry that along, I’ll just divide the squared sum by 8 for normalization when I compute the probability.
This gives the following paths their corresponding phases:
All paths that end at E:
ACE i^3 e^{i phi} = -i e^{i phi}
ADE i e^{i phi}
BCE i
BDE i
To get the probability of getting to E, we must sum these, take the sum’s magnitude-square, and divide by 8 for normalization:
| -i e^{i phi} +i e^{i phi} + i + i |^2 = | 2i |^2 = 4
Dividing by 8 for normalization, gives 1/2. The probability of getting to E is 1/2. Hopefully, the same will be true for F.
All paths that end at F:
ACF i^2 e^{i phi} = -e^{i phi}
ADF i^2 e^{i phi} = -e^{i phi}
BCF 1, as mentioned earlier
BDF i^2 = -1
Summing, and taking the square
| -e^{i phi} - e^{i phi} + 1 - 1 |^2 = | -2 e^{i phi} |^2 = 4.
Dividing by 8 for normalization gives 1/2. The probability of getting to F is 1/2.
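If anyone wants a quick cross-check of those sums, here is a minimal numerical sketch (Python with numpy; it just adds the same four path amplitudes per detector, with an i per reflection and the 1/sqrt(2) per splitter carried explicitly):

import numpy as np

s = 1 / np.sqrt(2)  # amplitude factor per 50/50 beamsplitter
for phi in np.linspace(0.0, 2 * np.pi, 5):
    shift = np.exp(1j * phi)  # phase picked up on the A leg
    # paths to E: ACE, ADE, BCE, BDE
    amp_E = s**3 * (1j**3 * shift + 1j * shift + 1j + 1j)
    # paths to F: ACF, ADF, BCF, BDF
    amp_F = s**3 * (1j**2 * shift + 1j**2 * shift + 1 + 1j**2)
    print(phi, abs(amp_E) ** 2, abs(amp_F) ** 2)  # prints 0.5 and 0.5 for every phi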
As far as I’m concerned, that’s the end of the story. We can ask, how did the information about phi disappear? Notice what happened: when we computed the probability to arrive at E, the paths that went through A canceled one another; when we computed the probability to arrive at F, the paths that went through B canceled one another. If those terrific cancellations hadn’t happened, then when taking the magnitude-square, there would have been cross terms that depended on phi.
Notice that the cancellation depends entirely on there being a third splitter–as Chad points out |Psi_C|^2 and |Psi_D|^2 do encode information about phi. This can be seen easily in the path-integral formulation too, but I’ll leave it as an exercise to the reader (!).
But the cancellation does happen in this case, so the information about phi is erased. In fact, this is just another instantiation of the well-understood quantum-eraser experiment. The elimination of dependence on phi is well-known. Nothing new to see here, folks.
Your patience is remarkable. Or maybe foolhardy. Out of the ordinary, at any rate.
I corresponded with Neil by e-mail last night, after he e-mailed me yesterday afternoon. I didn’t work through his discussion in detail, as you have done, quite nicely. Thanks for this posting.
One aspect of Neil’s discussion on his blog is that he doesn’t cite any of the existing critiques of decoherence interpretations. It’s important to place a new criticism by showing how it’s different from what has been said before. Neil’s discussion, if it were correct, has some overlap with the standard worries that people have about decoherence interpretations, which I consider to be related to its particular way of being a no-collapse interpretation. I find that an attempt to discuss why some new criticism I might have is different from whatever standard worries there are most often leads to a realization that I have nothing new to say. If I find that I do have something new to say, typically the process of understanding how it is different from past ideas leads to a realization of how my new discussion can be presented more clearly. Funnily enough, in the face of Neil’s worries, I would say that physicists generally do not accept that decoherence interpretations solve the measurement problem, although it is certainly granted that in careful hands it has produced several mighty attempts and some very interesting mathematics (See, for example, the Stanford Encyclopedia of Philosophy entry on decoherence, “As pointed out by many authors, however (recently e.g., Adler 2003; Zeh 1995, pp. 14-15), this claim [that decoherence provides a solution to the measurement problem] is not tenable.”). People who don’t worry about really small probabilities can find decoherence interpretations quite satisfying, and in any case the mathematics associated with decoherence interpretations is very instructive.
For a decoherence interpretation, evolution is always invertible in principle because it is always unitary. There is no distinct non-unitary collapse evolution such as that of Luders-von Neumann. The unitary evolution in general applies to a macroscopic system (if not to the whole environment), not to the tensor product of the wave functions of a few photons, so inverting evolutions that introduce enough interactions with the environment is FAPP impossible, because it requires what might be called a “quantum Maxwell’s demon”, reversing the effects of everything that caused what quantum optics so happily describes using a simple random phase shift (I love that such glib engineering stuff works and makes it possible to do practical calculations easily that would otherwise be difficult or intractable, but, for anyone who wants to do foundations, resorting to such engineering formalisms had better be carefully justified). The in-principle possibility of finding an apparatus that applies the inverse of the original evolution is one way of putting the worry that has caused decoherence not to achieve general acceptance (but see the SEP article cited above for a careful presentation).
Well, thanks for the free publicity Chad, but I’m not ready to pay you for that yet [see below]! Note, I mentioned you but wasn’t harsh about it, maybe you noticed the snarky indirect comment about “the doghouse” per your book? (Mainly, it was convenient to me to use an MZ setup similar to yours, and there are various explanations of DI around.) Some readers in my comments said they think I’m on to something. Well, maybe. I want readers there, here, and anywhere to check it out and decide for themselves. I want someone to do an experiment too. I’ll study the critique and comments here, but for now a summary and to warn everyone the bad guy is walking down Main Street. Still, I want to have a fruitful discussion, no reason why not!
Very important, as I note: this is not just about the decoherence interpretation, which claims we can account in some way for not seeing macro superpositions. As I wrote: Its [the experiment’s] importance goes beyond DI, since such information should simply have been “lost” period, apart from any interpretative framework.
Since my post is rather long, I will make my own briefer summary here (check the diagram too, and sorry I haven’t gotten the better fonts in; having trouble there): BS1 splits the incoming photon WF into different amplitudes, such as 0.8 in one leg and 0.6 in the other. They are recombined at symmetrical BS2 in the usual way. If in phase, the combination produces a certain output (not 100% as with 50/50 input from BS1.) It’s easy to forget, the output from BS2 isn’t just “statistics” of hits. There is a specific WF along each output channel (very important!) We recombine them at BS3 in terms of rules governing the amplitudes and phase relations.
If we introduce random confusion of phase relations, the output from BS2 is 50/50 from each channel (even with asymmetrical input.) AFAIK, this is considered equivalent to a “mixture” (and as I understand, from Chad’s post I referenced.) But that is only in terms of bare statistics of hits. Those stats ignore the remaining, hidden relationship between the WFs exiting along the BS2 channels. When we recombine again into BS3, we find we can recover the original amplitudes. So output from BS3 will again be 0.8 and 0.6 (64% and 36% intensity/count stats.)
So what? Well, if the output from BS2 was just a 50/50 mixture (as if one photon at a time went out either channel A2 or B2, but not both) then the output from BS3 would be too. (Easy to figure.) No mere quantum eraser, since you’re recovering info post-decoherence (they keep clean phase relations, no? And no twin entanglement here.) So therefore I say, the output can’t be treated like a mixture, like the “collapse” of the WF at BS2 from decoherence, like becoming discrete photons at that point.
Maybe some of you misunderstood the point? I don’t claim to recover phi, the phase difference itself. That doesn’t matter. It’s the output stats from BS3 that matter, that can’t be as predicted by feeding a “mixture” into it. Sorry if – if – it wasn’t so well formed but the point should be clear from the concluding remarks.
Citing other work: Hey, that’s a blog post and I wanted to get the idea out there. I did mention that Penrose makes nearly identical complaints in Shadows of the Mind (and uh, is anyone here impressed by those? He picked on the foundational argument of DI, similar points to my complaints here and there, at some pieces at Stanford Ency. of Phil, etc.) Yes, comparisons matter but the relevance of a counter argument stands on its own too. And again (can hardly over-emphasize): the recovery of this info, thought “lost” in most treatments AFAICT, is important regardless. Best way IMHO to regard it: ask yourself, if the outcome is as I describe then what does that show you? (Of course, if you think I got the outcome wrong, or the implications wrong, pls. LMK. But be aware I followed rules for combining wave states that have always worked for interference calculations. As for the reflection shift, I use the 90 degree “i” shift (as did Penrose) because I think it’s convenient, and it works out the same as long as consistency is right.) As for being enamored of one’s own cleverness, that is one of my complaints about DI. I may be “guilty” too. Whatever.
Chad: If this concept of mine gets attention, I turn out to be right, and get credit for it, your efforts would be well worth $160. I’ll pay you when it’s clear you deserve reward for making me famous (or infamous, if that’s what decoherence enthusiasts will call it.)
(PS, quick follow-up: the angle of phase shift I call u (to avoid using a symbol, which I’m having tech troubles with), is as you would expect not a constant. It’s a variable, changing around each time the “confuser” or environment meddles in the path. I couldn’t care less about finding it later.) BTW also, Roger Penrose is not an outsider. And don’t outsiders (~) often come up with important insights? No ad hominem sentiments or appeals, please. It is what it is.
Last comment for awhile, and too bad Chad won’t defend himself from any misunderstandings of his point so I’ll just have to quote him and interpret best I can. Commenter Evan Berkowitz said, “To maintain normalization, each path should get a factor 1/sqrt(2) every time it passes through a splitter,…” Uh, no. I specifically said it is crucial to my point, that we start with uneven amplitudes out of BS1. Only BS2 and BS3 are 50/50 and thus sqrt (0.5) multipliers.
That leads to: it isn’t always and all, about “interference” per se. I am saying, we can recover amplitude information (that from BS1 output later) apart from other traits or definitions involving “interference.” That’s a sloppy term anyway referring to evident distributions of amplitudes that were added (superposed) and then squared. But waves “interfere” all the time, in the basic sense of adding amplitudes. It just doesn’t always “show.”
Money quote: Above, Chad writes, “Thus, the random phase introduced by environmental interactions makes the quantum waves look like classical particles.” OK, I have confirmation of the notion I’m trying to disprove (I’ve heard that before, just didn’t want to misrepresent a specific writer.) But unless I really did make a mistake in adding up the waves, our being able to recover the original, unequal amplitudes out of BS3 would of course be inconsistent with an output from BS2 that was like a classical mixture.
So can we all agree, the judgment turns on whether I really made a calculation mistake? Hence my claimed implications are valid if the output is as I state. That can easily be checked. I’ll get back to that later, someone else please cross-check. tx
I don’t think Chad has to defend himself of anything, you’re the one making all the claims here.
Peter Morgan, #4: Funnily enough, in the face of Neil’s worries, I would say that physicists generally do not accept that decoherence interpretations solve the measurement problem, although it is certainly granted that in careful hands it has produced several mighty attempts and some very interesting mathematics (See, for example, the Stanford Encyclopedia of Philosophy entry on decoherence, “As pointed out by many authors, however (recently e.g., Adler 2003; Zeh 1995, pp. 14-15), this claim [that decoherence provides a solution to the measurement problem] is not tenable.”). People who don’t worry about really small probabilities can find decoherence interpretations quite satisfying, and in any case the mathematics associated with decoherence interpretations is very instructive.
I would agree with that, more or less. I don’t claim that decoherence is the whole and only solution to the measurement problem, in the sense of “why do we only see a single result to the outcome of a quantum measurement?” After reading up on things for the book, I’m somewhat more inclined towards a Many-Worlds kind of view than I used to be, but even the Many-Worlds plus decoherence picture leaves some issues regarding probability and the like. (I tend to think a lot of those concerns are overblown, but I would never make it as a philosopher). And even if you prefer some collapse-type interpretation, decoherence has a role to play.
The relevant paragraphs from Chapter 4:
The idea of decoherence is by no means exclusive to the Many-Worlds Interpretation of quantum mechanics. Decoherence is a real physical process that happens in all interpretations. In the Copenhagen Interpretation, it serves as the first step in the process of measurement, selecting the states you can possibly end up in. Decoherence turns a coherent superposition of two or more states (both A and B) into an incoherent mixture of definite states (either A or B). Then some other, unknown, mechanism causes the wavefunction to collapse into one of those states, giving the measurement result. In the Many-Worlds view, decoherence is what prevents the different branches of the wavefunction from interacting with one another, while each branch contains an observer who only perceives that one branch.
In either case, decoherence is an essential step in getting from quantum superpositions to classical reality. All of the concrete predictions of quantum theory are absolutely identical, regardless of interpretation. Whichever interpretation you favor, you use the same equations to find the wavefunction, and the wavefunction gives you the probabilities of the different possible outcomes of any measurement. No known experiment will distinguish between the Copenhagen Interpretation and the Many-Worlds Interpretation, so which you use is essentially a matter of personal taste. They are just two different ways of thinking about what happens as you move from the probabilities predicted by the wavefunction to the result of an actual measurement.
(Copenhagen and Many-Worlds are the only two approaches I discuss in the book, for reasons of space. There’s a passing mention of Bohmian mechanics, but no more than that.)
There are certainly interesting issues remaining to be explored regarding decoherence– Wojciech Zurek still has a job, after all. None of those issues involve adding a third beamsplitter to a Mach-Zehnder interferometer, for the reasons given above.
In the paragraph that starts “If there were no shift, that is, φ=0, the wavefunction would be exactly 1 at B and exactly 0 and A.”
Don’t you mean D and C, both there and later in that paragraph?
BTW, nice complementary analysis, Evan.
Don’t you mean D and C, both there and later in that paragraph?
Yes. An earlier version of the figure had those points labeled A and B, and the current E and F labeled C and D. I’ll fix it in the text.
Stephen, anyone: it should be clear, I am not the only one making claims here! Chad’s post certainly claims I made an error, and that is of course game for further criticism. Maybe some segmentation and clarification of my point is in order. (It might make some sense to look at what the original subject of critique has to say?)
(1) First, there is an empirical claim about the output from the BS3 channels. I’m saying (calculated using a simple convention adopted by Penrose, the 90-degree (i) phase shift off partial-silvered surfaces, and neglecting other intrinsic alterations for simplicity – equivalent results, check if you must) that the statistical output from what Chad labels E and F would be a^2 and b^2 respectively, where BS1 splits in the amplitude ratio a:b along the channels Chad labels B and A as I see in the current illustration. (His notation may confuse some. In my post I use a and b for differential [real] amplitudes from BS1. Note that BS2 and BS3 are 50/50. REM also a phase change “u” or phi as you wish, is imposed between BS1 and BS2 by the confuser.) During the course through the double MZI, we can multiply complex phases accordingly (equivalent to adding simple angles.) I also say, we can recover that phase change as well, but that is a lesser and unneeded claim that requires further interactions. Focus on the unequal outputs if that helps.
That is directly found by recombining amplitudes and phases output by BS2. REM it’s not just “statistics” that are available at C and D. BS2 combines A + B in one phase relation to output from C and another relation from D. When they are combined again at BS3, the components add or subtract to get the claimed statistics. This is the straight-up optics before any hits are counted. All I can say is, do it directly in the manner I originally described and I’d expect you get the same result. I am of course waiting until the WF passes the entire system before reducing it to statistics, not imagining some premature reduction at BS2. Chad’s claim of my making an error seems not to follow that direct reasoning all the way through or misapplies my argument (OK, maybe partly my fault and I will retool as looks appropriate.) This is an experiment someone can and IMHO should do. It is of course relevant to the DI, but significant to recover such amplitude data from an apparent mixture.
(2) Second, there is the claim that such a result makes life difficult for DI. I would of course expect that to be debatable, and it seems some here are conflating (1) and (2). But if the output at BS2 was indeed a mixture (50/50 out C and D) then of course the output from BS3 would also be 50/50 along E and F. That’s a clear, empirical difference in predicted output. What it means …
OK, I found a big clue after finding some time to study this post carefully. It’s a mistake about BS1 in my example. Money quote: “If there were no shift, that is, φ=0, the wavefunction would be exactly 1 at D and exactly 0 at C.” No, that is true only if BS1 is 50/50. If not (please walk it through, OK?) the output is not 0,1 even with no phase shift. No wonder there was such confusion. The unequal amplitudes from BS1 are the basis of my whole argument, and the very information I want to recover out of BS3. This requires a correction …
[To be sure in face of hurried comment above: I follow the various combinations of amplitude and phase all the way up to coming out of E and F, and then I put in counters for the final statistics. Also, from BS2 make sure to understand, with a true mixture the output from C and D would be like photons alternating out of C or D, but not both at the same time as waves.]
Something tells me that the poor sod who actually tries this and doesn’t come to the conclusion you want will be pummeled with pages and pages of overly wordy blog comments demanding they try again but with 4 beam splitters…
Yes, I see the update references unequal reflectivity with subsequent “lowering of contrast” (unequal amplitudes) out of C and D. But that makes all the difference in the world. That lower contrast at BS2 is erased (if I did the integration right, LMK if not) by full decoherence. Getting it back again at BS3 (top right) would, again, not be possible if we consider that decoherence turns output from BS2 into a 50/50 “mixture.” It shows, the output from C and D remains wave-like and comes out of both channels together, even with decoherence. BTW I aspire not to post long and in chunks but that’s easiest for me at the moment. Now there’s clearly some grounds for critics to check this over again.
Stephen, did you calculate this? Did the OP recalculate the final result given the admitted alteration for BS1? Lots of words in original post too, it carried an update that belatedly referenced the basis of my point, and ended up still fundamentally mistaken [regarding E and F]: “When we take the norm of the wavefunction, the complex phases cancel, and we end up with 1/2 again.” No, we don’t. That can be verified by anyone who actually does the calculation using the original, unequal amplitudes from BS1. My word count might be about right for an article, they just happen to be on a blog.
Sorry, I’m only pummeling those who don’t work it through, or who do but get the wrong result. And most of you would be steamed by such an acerbic and IMHO misdirected putdown as here. Kudos though to Chad for allowing free debate unlike eg LuMo and many political bloggers.
so it’s beamsplitters all the way down instead of turtles!
Apparently you never bothered to check anything either.
I was contacted by Neil via email concerning what happens if the initial beam splitter isn’t 50/50. This is also easy to compute in the path-integral formulation.
Let’s assume that the splitter sends a fraction a^2 through the part of the path A (the part with the phase-shifter) and sends a fraction b^2 through the part of the path B (without the phase-shifter), so that a^2+b^2=1. Thus, the amplitude gets an a or a b, respectively. The other two beam-splitters are 50/50, so give a factor of 1/sqrt(2), which I’ll call s for brevity (that is, 2s^2=1, which is useful later). If we keep track of these, we will keep the right normalization throughout.
All paths that end at E:
ACE a s^2 i^3 e^{i phi} = -i as^2 e^{i phi}
ADE a s^2 i e^{i phi}
BCE i b s^2
BDE i b s^2
To get the probability of getting to E, we must sum these, take the sum’s magnitude-square:
| -i as^2 e^{i phi} +i as^2 e^{i phi} + i bs^2 + i bs^2 |^2 = | 2i bs^2 |^2 = b^2
Thus, the probability of getting to E is b^2. Hopefully, the probability of going to F will be 1-b^2 = a^2.
All paths that end at F:
ACF as^2 i^2 e^{i phi} = -as^2 e^{i phi}
ADF i^2 as^2 e^{i phi} = -as^2 e^{i phi}
BCF bs^2
BDF i^2bs^2 = -bs^2
Summing and taking the square gives
| -a s^2 e^{i phi} - a s^2 e^{i phi} + bs^2 - bs^2 |^2 = a^2
So the probabilities sum to one, as expected. This calculation is correct. In fact, it’s easy to see why this setup always gives the same probabilities as the first beam-splitter: it’s those handy cancellations again. As before, either both paths that went through A canceled one another (leaving us with a probability b^2) or both paths that went through B canceled one another (leaving us with a probability a^2).
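And here is the same numerical sketch (Python with numpy) adapted to the unequal first splitter, using Neil’s sample amplitudes a = 0.8 and b = 0.6 for concreteness:

import numpy as np

a, b = 0.8, 0.6  # amplitudes out of BS1, with a^2 + b^2 = 1
s = 1 / np.sqrt(2)  # amplitude factor for the 50/50 splitters BS2 and BS3
for phi in np.linspace(0.0, 2 * np.pi, 5):
    shift = np.exp(1j * phi)  # phase picked up on the A leg
    amp_E = a * s**2 * (1j**3 * shift + 1j * shift) + b * s**2 * (1j + 1j)
    amp_F = a * s**2 * (1j**2 * shift + 1j**2 * shift) + b * s**2 * (1 + 1j**2)
    print(phi, abs(amp_E) ** 2, abs(amp_F) ** 2)  # prints b^2 = 0.36 and a^2 = 0.64, independent of phi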
The part that I don’t understand is what this calculation has to do with one interpretation or another. Certainly we can all agree that the result of the calculation is correct, and is independent of formulation. I don’t want to take a position on whether the decoherence interpretation agrees with this. Indeed, I try not to worry about interpretations too much. If I’m not feeling philosophical I subscribe to the interpretation of “I predicted the outcome, so my job is done.” If I’m forced to take a stand I tend to think in terms of either the Copenhagen or the Many Worlds interpretations.
To be clear, I have no understanding of how this result falsifies (or fails to falsify) the decoherence interpretation. But it is the correct quantum-mechanical prediction.
To be clear, I have no understanding of how this result falsifies (or fails to falsify) the decoherence interpretation.
That’s because it doesn’t say anything at all about the decoherence interpretation. It’s a totally uninteresting result of the reflection phase shifts at beamsplitters, and has nothing whatsoever to do with quantum measurement.
Evan, thanks for backing me on the technical correctness of my predicted outcome. (Note, that doesn’t depend on whether the phase shift changes from instance to instance – since it all adds up the same regardless of phi.) Just on that basis alone, Chad should issue a correction (not just in comments, the erroneous depiction of BS3 output as 50/50 is still up – “and we end up with 1/2 again”!) and apology for not being careful enough to check that. He implied the original BS was 50/50 and later, in a glib aside, that unequal reflectivities were irrelevant. To make it easier, I apologize here for some sloppiness in my wording and notation, etc. (since improved) that made it harder to follow. Even so, a critic must verify the proposal, then recalculate anyway – doable given my clearly stated, sample starting amplitudes a = 0.8 and b = 0.6. Regardless of further implications, the misrepresentation diverted from clear appreciation that “the output from BS3 is in the same proportion as from BS1.”
OK, so do such results impact decoherence theory? I think so, but radical DI presents itself as something stronger than the weaker sort of theory Chad describes. (He even says, at Many-Worlds and Decoherence:
If you could somehow keep track of all the interactions, say by measuring the precise state and trajectory of every air molecule along each path, you could recover the pattern by post-selection: …. Read it.) But others have said, following what I call “strong DI,” that decoherence explains “why we don’t see macroscopic superpositions”, implying that there is no collapse puzzle, as to why the alive/dead cat states don’t continue to co-exist in our universe, etc. Also, we see the claim that decoherence converts the output “into a mixture” – that has testable consequences, as well as that the DM is an appropriate representation of any situation.
Above we have:
Thus, the random phase introduced by environmental interactions makes the quantum waves look like classical particles. Well, “look like” is vague, and sure the statistics alone “look like” classical particles. But that observation is trivial, and does not validate further implications such as output truly becoming “a mixture” instead of “a superposition.” A mixture would imply that sometimes a photon goes out of C and sometimes out of D, rather than as a superposition out of both – or at least, crucially – that there was no way at all to tell the difference once they were already recombined at BS2 (which is not the same as merely saying, we could recover interference by fixing waves before entering?) What else is the point of saying the output is a mixture? But if BS2 output is “a mixture,” then BS3 would spit out at 50/50. So of course BS2 output remains a superposition, in order to get the now-consensus output a^2/b^2 from BS3.
Maybe Chad is off the hook by not promoting strong DI (will update my blog as indicated), but others out there have framed the issue as if, now the collapse issue is effectively resolved, the physical state really is like a mixture (I showed that it can’t be in *all* respects), etc. I can agree with Weak DI (effective destruction of interference pattern per se, maybe Chad’s view), but it says nothing of deeper importance, and yields no insight into why we don’t continue to have both macro superpositions together etc. But now we see that Strong DI is literally wrong and would predict a false outcome (again, mixture entering BS3 means 50/50 output. The DM would also be an inappropriate representation AFAICT.)
Their argument was bad anyway – so what, if sloppy relative phases? (Put aside MWI, and after all – why doesn’t the first beamsplitter in such arrangements “split” the wave into different worlds? Then, no interference ever.) And how can phases being unequal at other times explain (in any sense of the term) something here and now? Schroedinger’s cat should remain in both states. Now we can’t explain in any sense, why she doesn’t …
Like I kept trying to say earlier, it doesn’t matter that they can’t interfere anymore, because the superpositions are still there. Now we know, since we can find them with that all-important extra beamsplitter. Some people, at least, should change their tune.
Hi Chad,
It appears that there is a lot of flak about Neil’s experiment, of which I think the central point should be whether the setup described demonstrates the effects of decoherence as it is commonly used in relation to many interpretations of quantum theory. Of course, this whole concept first began with David Bohm in his 1952 paper regarding his hidden variables interpretation, where he introduced the concept without actually calling it that; that paper still stands today as the best review of the subject, as to how it should be applied and where it holds significance. If you look at this from the Bohmian point of view, decoherence on its own, in regard to any interpretation of quantum theory, doesn’t have the ability to deal with the measurement problem so as to provide a solution.
In fact, it’s only when you have a theory containing a dual ontology that it aids in bringing this to a complete resolution, as it doesn’t require a collapse mechanism: there is nothing to collapse, since particles always do the pointing in our instruments and the waves have to have them find themselves there. I would therefore recommend for anyone who would like to better understand decoherence and its role in quantum theory that they begin first with Bohm’s papers, with particular attention paid to the second one of the series. The bottom line for this experiment in Bohmian terms is that if there is only one particle to consider, then there can only be one result, which will be found in the place that the wave, and the particle itself with its interactions, has had to be. Now I’m not suggesting you all jump on the Pilot Wave bandwagon, yet in many situations such as this it is instructive, being able to give a clearer picture when it comes to considering such things. More to the point, Durr et al. sum up the distinction quite nicely in their 2003 paper entitled “Quantum Equilibrium and the Role of Operators as Observables in Quantum Theory,” in section 2.3, where they state:
“While in orthodox quantum theory the ‘collapse’ is merely superimposed upon the unitary evolution – without a precise specification of the circumstances under which it may legitimately be invoked – we have now, in Bohmian mechanics, that the evolution of the effective wave function is actually given by a stochastic process, which consistently embodies both unitarity and collapse as appropriate. In particular, the effective wave function of a subsystem evolves according to Schrödinger’s equation when this system is suitably isolated. Otherwise it ‘pops in and out’ of existence in a random fashion, in a way determined by the continuous (but still random) evolution of the conditional wave function ψ_t. Moreover, it is the critical dependence on the state of the environment and the initial conditions which is responsible for the random behaviour of the (conditional or effective) wave function of the system.”
Best,
Phil