I’m a little cranky after a day of reviewing grant proposals, so it’s possible that I’m overreacting. But commenter Neil B has been banging on about quantum measurement for weeks, including not one, not two, but three lengthy comments in Tuesday’s dog post.
For that reason, I am declaring this post’s comments section to be the Official Neil B. Quantum Measurement Thread. Until such time as I declare the subject open again, this is the only thread in which I want to see comments about quantum measurement. Attempts to bring the subject up in comments to other posts– even other posts having to do with quantum mechanics– will be disemvowelled.
So that this is not just a public slapping down, I’ll provide some thoughts on the subject below the fold.
There are three different but related things that people mean when they talk about problems of quantum measurement. One problem is, in essence, “If the world obeys quantum rules, why does everything we see look so classical?” We never see interference of macroscopic objects, or superposition states of macroscopic objects. A tennis ball is always either here or there, not here and there at the same time.
This problem is adequately solved by the idea of decoherence, as discussed previously. The basic idea is that unmeasured interactions with the environment make small changes in the wavefunction that prevent us from seeing any sign of interference behavior for macroscopic objects. There is interference, but the pattern that results is always changing, and thus can’t be detected through repeated measurements. This result is indistinguishable from what you would get if the particles you’re looking at obeyed classical rules.
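To make that mechanism concrete, here is a minimal numerical sketch (a toy model with invented numbers, not a real decoherence calculation): two-path interference in which every shot picks up a random environmental phase. Each individual shot still interferes, but the average over many shots shows no fringes.

```python
import numpy as np

# Toy decoherence: two-path interference with a random relative phase
# kicked in by "the environment" on every shot.  All numbers invented.
x = np.linspace(-1.0, 1.0, 200)   # positions across the screen
d = 5.0                           # sets the fringe spacing (arbitrary units)

def averaged_pattern(phase_noise, shots=2000):
    """Average the intensity over many shots with random phase kicks."""
    total = np.zeros_like(x)
    for _ in range(shots):
        phi = np.random.normal(0.0, phase_noise)  # environmental phase kick
        amp = np.exp(1j * d * x) + np.exp(-1j * d * x + 1j * phi)
        total += np.abs(amp) ** 2
    return total / shots

def contrast(pattern):
    return (pattern.max() - pattern.min()) / pattern.max()

coherent = averaged_pattern(phase_noise=0.0)    # crisp fringes survive
decohered = averaged_pattern(phase_noise=10.0)  # fringes average away

print("fringe contrast, no phase noise: %.2f" % contrast(coherent))
print("fringe contrast, strong phase noise: %.2f" % contrast(decohered))
```

The point of the sketch is that nothing ever stops being a wave; the interference is simply different from shot to shot, so it never shows up in the accumulated record.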
This is not, by the way, restricted to a Many-Worlds Interpretation view of quantum mechanics. Decoherence is a verifiable physical phenomenon that occurs no matter what interpretation you favor. The language you use to talk about it is different for different interpretations, but the physics is the same.
The second problem is, basically, “If objects are described by wavefunctions, how is it that you detect them in single places?” This is the “localization” business that Neil keeps banging on about, and he seems to think that it’s some sort of unbeatable argument against Many-Worlds. The problem is, if it’s an unbeatable argument against Many-Worlds, it’s an unbeatable argument against any interpretation. No interpretation that I’m aware of has an airtight explanation of this aspect of quantum measurement.
The thing is, this part of the problem isn’t that much of a problem. Or, to put it more bluntly, it’s only a problem if, like Neil, you think that the wavefunction is a physical wave like a ripple on a pool of water.
Wavefunctions aren’t water waves, though. They’re probability waves (or the square root of probability waves, to be a tiny bit more precise). They don’t behave like water waves, and they don’t need to behave like water waves.
A colleague likes to use the lottery as an analogy for quantum measurement. Before the measurement is made, you have a distributed probability function– lots of people have tickets, and they each have some probability of holding the winning ticket. At the instant the ping-pong balls pop up, that probability distribution “collapses,” and millions of people are found to be holding worthless scraps of paper, and one person is the winner.
In fact, I would argue that if you really want to harp on this, you have exactly the same problem in classical physics. If you’re being responsible about the job of predicting the result of a classical particle trajectory, you have to talk about it in terms of probability distributions.
If I’ve set up a ball-throwing robot to entertain the dog, the landing place of the ball is uncertain. There will be slight variations in the force with which the robot tosses the ball, and air currents in the room, and so on. All I can really predict is a probability of the ball landing at a given point– I can calculate that it’s most likely to be at one particular position, but there will be some range around that point where I wouldn’t be too surprised to find the ball.
When the ball hits, though, it hits in only one place. So, what happened to the rest of the distribution? It’s not worth worrying about, because it was only ever a probability distribution. If we repeat the experiment over and over, we can expect to trace out the whole distribution, but one shot will only land at one place, because that’s the way the world works.
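For concreteness, here’s a minimal sketch of that classical bookkeeping, with invented numbers for the robot’s aim and spread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ball-throwing robot: the landing spot is Gaussian around
# the most likely position.  All numbers are invented for illustration.
mean, spread = 3.0, 0.25   # meters

one_throw = rng.normal(mean, spread)            # a single shot lands in one place
many_throws = rng.normal(mean, spread, 10_000)  # repetition traces out the distribution

print(f"this throw landed at {one_throw:.2f} m")
print(f"10,000 throws: mean {many_throws.mean():.2f} m, spread {many_throws.std():.2f} m")
```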
It’s the same thing with quantum particles. A single electron sent through a double-slit apparatus will appear at one and only one place on a detector screen, because that’s the way the world works. The extended wavefunction that exists before the measurement describes the probability of finding the electron at any given point when you finally make the measurement. That’s all. There’s nothing all that mystical about the disappearance of the rest of the wavefunction, any more than there is about the disappearance of the rest of the probability distribution for the thrown tennis ball.
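The quantum version of the same bookkeeping, as a toy sketch (the functional form of |psi|^2 below is made up purely for illustration): each electron lands at exactly one sampled point, and the fringes only ever appear in the accumulated histogram.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy double-slit |psi|^2: fringes under a Gaussian envelope.
# The functional form is illustrative, not derived from a real apparatus.
x = np.linspace(-3, 3, 601)
prob = (np.cos(4 * x) ** 2) * np.exp(-(x ** 2))
prob /= prob.sum()              # normalize into a probability distribution

one_hit = rng.choice(x, p=prob)             # one electron, one spot
hits = rng.choice(x, p=prob, size=50_000)   # many electrons: fringes emerge

counts, _ = np.histogram(hits, bins=60)
print(f"single electron landed at x = {one_hit:.2f}")
print("counts across the screen (alternating high/low = fringes):")
print(counts)
```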
“Yeah, but in the classical case, you can imagine keeping track of all the various influences on the ball’s flight, all the way along. In the ideal case, you would be able to predict with certainty where it will land, every time.” True enough. That brings us to the third problem, which is, basically, “Why is it that quantum mechanics only describes probabilities, not specific outcomes?”
I don’t have a good answer for that. Nobody does. Quantum measurements are inherently probabilistic in a way that classical measurements are not. You can, in principle, predict the outcome of any given classical process to arbitrary precision– all it takes is a more careful measurement of the initial conditions. It might involve tracking the flapping wings of butterflies in the Amazon, but in principle you can correctly predict the outcome, given enough information.
Quantum physics isn’t like that. There is no way, even in principle, to predict exactly where an electron will hit the screen after passing through a double-slit apparatus. All you can predict is the wavefunction, which gives you a probability distribution. The outcome of an individual measurement is inherently and inescapably random, and nothing you can do will change that.
That’s my take on quantum measurement. If you feel the need to uncork a 1,500-word rant about how wrong I am, go nuts. But only in this comment thread– if it shows up anywhere else, expect lossy compression.
Heh – thanks Chad, I appreciate you throwing me a really nice, very crunchy bone that most bloggers wouldn’t do. You’re not overreacting. You are being very generous, seriously, to provide a specific forum. I expect others to be interested as well, how could they not be? Also, I will therefore be correspondingly respectful – and this time, briefer! BTW, I agreed with you about free will.
Points, IMHO: First, why isn’t the WF just like throwing a ball around, that you don’t know the trajectory until it hits? Why not just an expectation of classical probability? Because of the interference. If you throw a ball towards two slits, the pattern is just two patches of hits. Since we get “fringes” after many hits, “realists” (not SUAC) say a “wave” must have passed through the two slits – otherwise, how could you end up with that pattern? So sure, the WF is a “probability wave” but like I keep saying: it only becomes a “probability” wave because the mysterious collapse process makes it such from the amplitudes of a presumptive “real” wave.
But then, if a “wave” goes through both slits “at the same time” and progresses towards the screen, how does it “turn into” the little hits on the screen? My question is the same of course that’s been asked so many times. And I say, along with Roger Penrose for much the same reasons, that decoherence doesn’t solve this. First, even if you do keep interference, you still have hits there – even if a decohering “environment” is important, we are getting “hits” in tiny intervals (maybe 10^-8 s) on a screen that could be many km across and examined much later. But if you do muddy up the “wave” phases, it logically just leads to … muddying up the wave phases. Jumbling them around doesn’t get you from “waves” to “particles.” It doesn’t create localizations or “statistics” unless the dreaded, inexplicable “reduction” comes in to force the hits that “statistics” are composed of – not the other way around. Just because you don’t “see the interference” doesn’t mean the wave magically turns into particles (before it even hits the detector? When does it “convert”?)
I say the phase relations are just covered up. If the emission process produces “waves,” it should take some special action to furl the jumbled/incoherent waves into a little space, even if we can’t see the phase relations. Note that incoherent waves still focus like waves (how would “particles” go through a lens?), and still diffract around an edge – albeit not with nicely formed fringes. Note also that I can combine waves that don’t interfere for other reasons, like x and y polarized, and still get results such as diagonal polarization, etc.
I agree with this: “The problem is, if it’s an unbeatable argument against Many-Worlds, it’s an unbeatable argument against any interpretation. No interpretation that I’m aware of has an airtight explanation of this aspect of quantum measurement.”
That’s exactly right! There just isn’t an explanation that makes any sense; that’s the kind of universe it looks like we have. Attempted explanations that try to end-run around that instead of facing it don’t help. That’s why brilliant thinkers like von Neumann played around with “consciousness collapse” etc. It’s just weird … Give up classical physics, maybe give up classical logic and scientific “sensibility” as well.
PS: Folks, please, check my blog! I have a measurement-venting thread there too! A person faux-arrogant enough to argue like this all the time and call himself “tyrannogenius” must be interesting …
So, what happened to the rest of the [ball throwing robot’s] distribution? It’s not worth worrying about, because it was only ever a probability distribution. If we repeat the experiment over and over, we can expect to trace out the whole distribution, but one shot will only land at one place, because that’s the way the world works.
What I’m trying to wrap my mind around is how this point interfaces with the answer to Question One. The probability distribution of quantum mechanics is more substantive than the classical one – you can get interference with QPDs, but not with CPDs. So apparently *something* has changed post measurement (e.g. in the classic two-slit experiment), because your interference pattern is now trashed.
So my hang-up is not so much “where did the rest of the probability distribution go when the measurement comes out ‘left slit’ or ‘right slit’?”, but “What is it about the detection of position which causes the particle to become ‘100% left slit’ or ‘100% right slit’, with no ‘70% left slit/30% right slit’?”
Or more generally, “What causes the collapse?” We can only ever measure/interact with quantum mechanical objects with other quantum mechanical objects. The detector isn’t really classical, it’s quantum mechanical. Why don’t we get a superposition of a detector in a “50% left/50% right” state?
According to the answer to Question One, it’s decoherence. I’m just not very clear on how decoherence applies to the left slit/right slit measurement case.
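One way to see the difference RM is asking about is to compare the two bookkeeping rules directly, as a toy calculation (the slit amplitudes below are invented Gaussians, purely for illustration): quantum mechanics adds amplitudes and then squares, which produces an oscillating cross term; classical probability adds the probabilities directly, which doesn’t.

```python
import numpy as np

# Toy slit amplitudes: Gaussian envelopes with position-dependent phases.
# Purely illustrative; not derived from any real apparatus.
x = np.linspace(-3, 3, 601)
psi_left = np.exp(-(x - 0.5) ** 2) * np.exp(1j * 4 * x)    # amplitude via left slit
psi_right = np.exp(-(x + 0.5) ** 2) * np.exp(-1j * 4 * x)  # amplitude via right slit

quantum = np.abs(psi_left + psi_right) ** 2                 # QPD: has a cross term
classical = np.abs(psi_left) ** 2 + np.abs(psi_right) ** 2  # CPD: just two bumps

interference = quantum - classical   # the oscillating cross term is the difference
print("largest interference term: %.3f" % interference.max())
print("largest classical value:   %.3f" % classical.max())
```

Detecting which slit the particle went through removes the cross term, which is exactly the trashed interference pattern RM mentions.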
Chad, I love the irony of your trying to confine Quantum Measurement to one location.
RM, we can’t just throw away the possible landing spots of the quantum “ball” because a “realist” has to imagine an actual wave front going out to a wide distribution in space. That we know from the weird combination of interference and particles: the pattern of what classically would be “intensity” from wave interference is shown as frequency of hits. The widely spread wave must suddenly shrink up when a detection occurs because all the energy or mass is “there”. Decoherence can’t really handle that any more than other ideas (see my later reply to EK). I don’t know why location can’t be bifurcated for a particle, but with atomic absorption the entire quantum of energy is needed to raise an electron up to the next level. This must happen even with detectors very far apart. A realist must find something for the extended “wave” to do; it can’t still be all around, since now we know it is only “here” – but there is no rational answer how to do that. I don’t even think “objective collapse” a la Roger Penrose can do it – you still have to round up those distant spreads of the WF.
Here’s an interesting experiment about slits: suppose I shine horizontally polarized light at two slits. At or near only one of the slits, s2 (it doesn’t even matter whether before or after the light passes the slit), I put a half-wave plate that turns the polarization angle there to vertical. Now light from the slits “doesn’t interfere” since they are orthogonal states, but they still do *combine* to form different patterns of diagonal and circular polarization. I can still actually find repeating “fringes” of these patterns, even though they are of the same intensity, so the waves still reach the screen according to path delay from the slits – yet ironically, I could detect instead for H or V polarization. That would seem to show “which slit” the photon went through (tempted to say, “if V, it must have been s2”) but no, they still aren’t like particles, because both H and V always still add up together. (If the photons really went through only one or the other, the pattern would be different.)
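A toy Jones-vector version of this setup (idealized, with arbitrary sample phases) shows exactly the effect Neil describes: the total intensity stays flat, while the polarization state cycles with the path delay.

```python
import numpy as np

# Jones-vector sketch: slit 1 contributes H-polarized light, slit 2
# (after the half-wave plate) contributes V, with a relative phase set
# by the path difference to each point on the screen.  Idealized toy.
for d in np.linspace(0, 2 * np.pi, 9):
    E = np.array([1.0, 0.0]) + np.exp(1j * d) * np.array([0.0, 1.0])  # [E_H, E_V]
    intensity = np.abs(E[0]) ** 2 + np.abs(E[1]) ** 2  # always 2.0: no intensity fringes
    s2 = 2 * (E[0].conjugate() * E[1]).real  # +2 at diagonal, -2 at anti-diagonal
    s3 = 2 * (E[0].conjugate() * E[1]).imag  # +2/-2 at the circular polarizations
    print(f"phase {d:5.2f}: intensity {intensity:.1f}, S2 = {s2:+.2f}, S3 = {s3:+.2f}")
```

The “fringes” show up in the Stokes parameters S2 and S3 (diagonal and circular polarization), not in the intensity.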
I’d like to see a simulation of what decoherence supporters think the WF does when measured, shown as a distribution of amplitude (like a weather radar map of rainfall). What would it show? If I emit a photon, it should show an expanding shell, then if that hits a detector – what? What do you make your colors do? If the shell can’t instantly vanish, then it shrinks towards the “correct” detector at the speed of light, like emission in reverse? But no, because someone else could intercept it. It shrinks at whatever speed it needs to, to cover the distance during the brief detector absorption time? How does it “know” to do that either? Detection is an absurd process.
I admit, I couldn’t show it either. But I don’t care anymore because I think the world isn’t objectively real anyway – it’s a put-up job, it’s like “The Matrix.” Look at what we’re asked to swallow – a structureless entity will break apart at a random time, with no internal process to mark time – preposterous for anything “real”. Take it as you will.
PS visit the thread at my spot http://tyrannogenius.blogspot.com/ if you want to talk more there.
Emory Kimbrough, are you the magician I get on Google? How appropriate! BTW I am not sure of your point about confining quantum measurement to one location – the detection event basically is in one location once it happens. Hence maybe you mean that the quantum entity can’t be confined that way before a measurement because it needs (in a realist view) to be “available” to the detectors it might set off, however widely separated they are. Hence, it must shrink up suddenly when one of the detectors makes a hit (well, not quite – what if the detector is unreliable and the particle “isn’t really there” after all!) That collapse is “absurd” and can’t be corralled IMHO by decoherence arguments.
I’d like to see a simulation of what decoherence supporters think the WF does when measured, shown as a distribution of amplitude (like a weather radar map of rainfall). What would it show? If I emit a photon, it should show an expanding shell, then if that hits a detector – what? What do you make your colors do? If the shell can’t instantly vanish, then it shrinks towards the “correct” detector at the speed of light, like emission in reverse? But no, because someone else could intercept it. It shrinks at whatever speed it needs to, to cover the distance during the brief detector absorption time? How does it “know” to do that either? Detection is an absurd process.
I think that you’re really, really hung up on the position basis. In how I typically think of such things, the “wavefunction” is the overlap of the state vector of the system with the position basis, psi(x) = <x|psi> (i.e., |psi> = integral psi(x) |x> dx), and it has no physical significance in terms of “being” anywhere or “filling” space. When the detector interacts with the photon, its state vector is rotated into one eigenstate in some basis appropriate to whatever is being measured, in a probabilistic way. This rotation may take finite time (say a Planck time or whatever), or it may be instantaneous. If it takes finite time, then you may or may not see a shrinking of some graph of the wavefunction. If it’s instantaneous, it just disappears. If Lorentz violations bother you, then you’re asking the wrong question, because QM with wavefunctions and Schrödinger equations is not built in any kind of relativistic or local way, and you would need to ask the question in the context of RQFT.
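A minimal sketch of the probabilistic “rotation” BB describes – a standard Born-rule projective measurement; the function name and the two-state example are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def measure(psi, basis):
    """Projective measurement: pick a basis state with Born-rule
    probability; afterward the state *is* that eigenstate.
    `basis` is a unitary matrix whose columns are the eigenstates."""
    amplitudes = basis.conj().T @ psi          # overlaps <b_i|psi>
    probs = np.abs(amplitudes) ** 2            # Born-rule probabilities
    outcome = rng.choice(len(probs), p=probs)  # the probabilistic "rotation"
    return outcome, basis[:, outcome]          # result and post-measurement state

# Example: a 70/30 superposition measured in the computational basis.
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
outcome, post_state = measure(psi, np.eye(2))
print("outcome:", outcome, "| post-measurement state:", post_state)
```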
That’s how I find myself thinking about it, and yes, I do take Dirac-style matrix mechanics very literally. I always thought the wave-particle duality thing was a silly thing to go on about, because they’re not waves, they’re not particles, they’re quanta. They behave like quanta. All the time. Not waves on Tuesdays and particles on Wednesdays.
My two cents, FWIW, YMMV etc etc etc
BB, you seem to take the mathematical model as a sort of art object for its own sake, but I’m interested in what a “realist” has to imagine is “really out there” if the moon is going to exist even if we’re not looking. We know the WF is “out there” because it can be recombined later for interference, yet it can still be detected at a small spot. BTW if it vanishes quicker than light-speed causality, we have a problem of which frame of reference it is calibrated to. And like I said before, QMOs are “both” but the aspects in play at given times change over (like, while it’s spreading out but then it’s detected) and have to be honored and made intelligible in their contexts, if you want to be intelligible.
Note that it isn’t just a matter of how to imagine the shrinking up of the WF: the WF wasn’t supposed to collapse anyway, but the effects on various potential detectors were supposed to continue to evolve without favoring one over the other (you know, Schrodinger’s Cat and such, refresh yourself.) The early to middle quantum thinkers like von Neumann were very sharp and candid people who appreciated the true challenging nature of this paradox. Decoherence ideas do not IMHO solve this problem.
To the sentence “If the world obeys quantum rules, why does everything we see look so classical?”, I would respond: “Look at things quantum-mechanically and you will see that everything looks so quantum-mechanical.”
On the Principles of Quantum Mechanics page on Wikiversity, an example is given for a bunch of nails:
“Imagine a bunch of ordinary nails thrown through the air. Each nail will have its own velocity, some will collide, some will cross space unaffected, most will gain spinning motion, each with its own angular velocity, with its own rotation axes. This is just a quantum system. In a quantum system, the elements are represented by arrows, mathematically we call them vectors. A spinning nail has observational properties attached to it (position, translational and angular velocity,…), the same for the vector representing it. The concept vector + observational properties is called a state vector. This is the core of quantum physics.”
Arjen, I know what you’re getting at but remember that those vectors (their squares, actually) represent “chance of detection” and getting over that hurdle is the big mess. It’s like suddenly cramming a parachute back into the pack.
BB, you seem to take the mathematical model as a sort of art object for its own sake, but I’m interested in what a “realist” has to imagine is “really out there” if the moon is going to exist even if we’re not looking.
A “realist,” in the physics sense of the word, would be using Bohmian mechanics, and not talking about wavefunctions at all. Of course, the non-local “quantum potential” is pretty darn weird, in its own way.
Other than that, you’re just way too hung up on the idea of wavefunctions as real, physical waves, like ripples in a water tank. They’re not. Period, end of sentence.
That’s the beginning and end of your whole problem, right there. Of course, this has been pointed out five or six times already, with no apparent effect, so I doubt saying it one more time will help.
Go read Bohm. That’ll keep you busy for a while.
Chad, I don’t understand your point about using Bohmian mechanics and not talking about wavefunctions. In The Undivided Universe, Bohm uses wavefunctions extensively to describe his ontological interpretation.
Neil, yes, measurements on particles represented by vectors/arrows |psi> are a matter of probability. The chance of detection may be deduced directly from phenomenological analysis of scattering between those ‘rotating nails’, i.e. analysis of cross sections of the projections of those nails on the symmetry axis between them. And when the periodic motion of both nails is steered by the same ‘Bohmian’ pilot wave, this just gives a term proportional to |psi|^2. It should not be too hard to get rid of this calculation.
Bohm pilot waves etc. may be a clever attempt to marry the wave and particle pictures simultaneously and “for real,” but it is a minority position – I suspect, for good reason. I don’t agree that it is the only “realist” position. In general, “realism” means that you think there is really something out there in the world; it isn’t just us making up conceptual schemes as Idealists to let us predict future experiences. That’s an abstract definition; it isn’t about what seems to make sense as being real. As for “wave functions”: I got my basic perspective and skepticism from the classic QM thinkers and Roger Penrose at present. The way things act, it seems like they “are” waves in flight, and then become particle-like when they start interacting. That may be absurd. If one thinks such traits can’t be “real” then maybe Bohmian mechanics is an appealing alternative, but what if the universe is “real” but “absurd”?
I don’t think we can figure it out. I am not even a realist but a dreamscaper, however creepy that sounds. Right, dreamscape, not landscape. What is real is something that interrelates and sort-of-orders “phenomena.” It doesn’t correspond to a particular objective configuration of “substance” (even broadly defined) in space-time, whether literal wavefunction, or particles guided by waves, whatever.
As for Bohmian mechanics as such: I just don’t see how the “real particles” could be consistently described and avoid getting into trouble while zinging about as little nuggets. An electron, maybe a nugget “in flight” but what about a photon? What in the world would it be like “as” a particle in flight? How big? Scattering cross section? Even with a PW to guide it, wouldn’t traveling-particle light come into mischief while burrowing through matter, maybe bouncing off a nucleus etc despite a larger “wavelength”? It should be diverted more by rough surfaces, etc. than a wave could be. Wouldn’t it have to be emitted in a particular direction from an atom, giving the atom an objective kick in the other direction and voiding the symmetry of the dipole field mode? Maybe I could go on. IMHO, it is gross – the equivalent of the clunky “gyroscopic aether” of the last gasp of 19th century substantive aether theory, just before relativity blew that away.
BTW Arjen, the “nails” in the description are surely the equivalent of vectors in a field, no more inherently particulate in space than the classical EM field also represented by vectors. As for probabilities: again, there wouldn’t be “statistics” and a particulate aspect unless collapse imposed them to begin with on what are otherwise waves (whether coherent or incoherent), not the other way around. Collapse acts upon the squared moduli; they have no “lumps” in them to start with.
PS I can’t blame anyone for finding difficulty in getting a handle on all this but wanting to try. So, I don’t expect “explanation” from me or anyone else, just frank appreciation that it’s tough breaks out there.
Neil, there is a substantial difference between ‘nail’-vectors and EM-field vectors. Nails (or needles) can be seen as hard matter, while EM-field vectors are just field functions dependent on spatial charge distribution or motion. It is possible to perform experiments on discrete needle particles, cf. for example Hundreds of collisions between two hard needles. In such experiments, phase, frequency and relative spinning axes directions determine a multitude of different modes which one can relate to what happens in quantum physics, with photons, electrons, quarks and other elementary particles.
Arjen, it seems you are confusing a clever “modeling” technique with thinking that there really are needle-shaped thingies out there bouncing around – what are you saying? I’ve never heard of any particle being considered “needle-like”, and pilot-wave theories don’t treat them that way either. Being needle-shaped wouldn’t spread the particles to the extent needed for later interference anyway. IMHO it’s a cute theory for folks like Democritus or early 19th century chemists at best.
BTW, another problem: what kind of orbits would electrons have if PW was real? Actual circular orbits, making atoms planar instead of essentially spherically symmetrical as we know they are? Or, some weird and contrived special pattern of actual motion, instead of the consistent and genuinely distributed standing wave pattern of electron clouds? What a Rube Goldberg contraption PW would have to be IMHO.
~~~~~~~~~~~~~~~~~~~~~~~
Schrodinger’s Cat sez, why can I no haz cheezeburger and eat it 2? You still don’t know!
Thank you, Neil, for this straight answer. Actually, I’m not believing anything about what is really out there. What I’m saying is that it is worthwhile to consider photons, electrons, quarks,… as needle-shaped subnanoscopic nuggets because the principles of quantum mechanics follow straightforwardly from the analysis of the motions and interactions of such needles. It also has some pedagogical value, because it makes teaching QM much easier.
I’ve also never heard of anyone considering particles needle-shaped. It is a personal viewpoint that allows me to consider QM more intuitive than classical mechanics.
I don’t understand the point about spreading the particles to the extent needed for later interference. When you have a rapidly rotating needle crossing a cloud of slowly rotating needles, the highly energetic needle creates a shockwave in the cloud. The shockwave passes through the slits and steers the phase and path of the rapidly rotating needle at the other side of the slits –> pilot-wave and interference at the detecting screen.
I plotted a simple ‘electron’ mode, with the help of the derived solution to the Dirac equation in my QM and Observation on Macroscopic Arrows paper. Its ‘orbit’ is far from planar; it spreads out over a whole spherical cloud, around a center. Actually, it is not an orbit at all, it’s just a spinning particle that may be represented by a vector. IMHO, such a model would have helped folks around 1910-1920 to dismiss planetary-like electronic orbit models, through which weirdness sneakily entered quantum mechanics.
The second problem is, basically, “If objects are described by wavefunctions, how is it that you detect them in single places?”
I think you’re making this sound more mysterious than it really is, at least if I understand the question correctly. Once you acknowledge that decoherence makes things look effectively classical, the only question seems to be why classical observables are related to localized objects. And I think the reason for this is just that interactions are local: to the best of our knowledge, the world we live in is described by local quantum field theory (OK, I admit, quantum gravity probably fuzzes out that locality, but that doesn’t matter for anything we’re able to measure). So decoherence gives you classical-looking states by piling up the effects of lots of little interactions, and because these interactions are described by a local Lagrangian, every interaction is happening somewhere. So the wavefunction splits into branches, each of which corresponds to states like “this thing is here, that thing is there, ….”
I think you’re making this sound more mysterious than it really is, at least if I understand the question correctly. Once you acknowledge that decoherence makes things look effectively classical, the only question seems to be why classical observables are related to localized objects.
I agree with pretty much everything you say, and you’ve said more or less what I was trying to get at. I’m not sure that throwing around terms like “local Lagrangian” will clear it up for non-physicists, though…
“Once you acknowledge that decoherence makes things look effectively classical…” – but that’s not something I would acknowledge. The decoherence argument I’ve heard refers to patterns, “statistics” being different for incoherent waves (or what “were incoherent waves”) than for coherent waves – and then tries to say that’s what makes things “look classical.” Well sure, once they have collapsed to allow statistics in the first place, they look classical, because “statistics” means being particles in particular places or states. But it’s explaining and comprehending how the collapse/making-of-statistics happens first, not the implications of various patterns from all those collapses, that is the foundational problem of “collapse of the wave function.” That’s why I said decoherence came across as a circular argument.
Consider too that we can have a coherent version of the MZ interferometer, so why worry so much about decoherence? We know the wave must have split to take both widely separated paths, since we can get interference later. The MZ does not have to be balanced to get all A-channel clicks, so we can arrange for a “which detector” problem. Sure, there is a click in A or B and then the photon is localized, but why does the chain of superpositions end there, why A and not B, etc.? Those detectors can be far from each other too (big distance from the recombiner if we want) and I don’t see any interaction between the detectors that matters, any way a “local Lagrangian” can reach out from one detector and pull all that spread-out wave together at once. Talking about the observer/s doesn’t help much, since I can come later and check for readings and see a sequence; it isn’t just something weird that happens inside me each time. If you think I’m not visualizing it right, then you tell me “what’s out there” and what it does. Like I said, set up your own sort of animation of wave or wave-particle density showing the actual distribution you think the electrons, or photons, have over time – show the interference from slits, but show the hit on detectors too, and then tell me there really isn’t something that is spread out and then suddenly isn’t.
Folks, please remember – it would be a different story if I just had my own cranky interpretation of QM and was fighting “orthodoxy” – like a guy with his own offbeat “aether” theory alternative to Einstein’s relativity. But I’m not – all that I’ve said always was the orthodoxy, the normal way to make the point, until “decoherence” offered to lead people astray with the seductive charm of pretending to explain (?) the perplexing issue of collapse of the WF. I’ve ironically been defending the long-orthodox, classic (not -al) way of explaining the WF: why it must be a wave and not just “ignorance” about where a particle is, because of interference and things like atomic stability. Why it has to spread out (if particle, from the Fourier transform of the momentum spread; if photon, from Huygens’). Why it has to “collapse” (because when a detection happens in one place, the chance it could be detected must go away everywhere else), and so on. I don’t think the old hands were out-of-it fuddies, and I don’t think Roger Penrose’s similar complaints are flabby. The attraction to decoherence ironically reminds me of the misleading appeal of Wittgenstein’s diversion of philosophy into semantic doubletalk.
PS, I just remembered that in order to talk of coherent versus incoherent, one has to imagine waves having been involved first anyway – so they should have stayed waves until the inexplicable collapse dynamics localized the particles. Again, simply: collapse creates statistics, not the other way around (how could it?)
The decoherence argument I’ve heard refers to patterns, “statistics” being different for incoherent waves (or what “were incoherent waves”) than for coherent waves – and then tries to say that’s what makes things “look classical.” Well sure, once they have collapsed to allow statistics in the first place, they look classical because “statistics” mean being particles in particular places or states. But it’s explaining and comprehending how the collapse/making-of-statistics happens first, not the implications of various patterns from all those collapses, that is the foundational problem of “collapse of the wave function.” That’s why I said decoherence came across as a circular argument.
The problem is, what you’re saying makes no sense. The reason the discussions always come down to statistics is that we are constrained to looking at the results of actual experiments that are physically possible. We can only measure the things that are measurable with our measurement apparatus, and you have yet to suggest anything remotely like a physically reasonable experiment somebody could do to see results that do not agree with those predicted by decoherence.
And you’re not nobly defending some Golden Age orthodoxy that has fallen into disrepute thanks to the calumnies of the evil decoherence priesthood (“There is no God but God, and Zurek is his prophet…”). From what I’ve read of the actual beliefs of the Copenhagen crowd, they would’ve been just as disgusted with whatever it is you’re doing. Heisenberg shaded toward being a radical anti-realist, holding that it made no sense to ask what an electron was doing between measurements.
Look, I’m a little cranky because of the continuing lack of electricity in Chateau Steelypips, but this has been going on for a long time, with absolutely no progress. You pay lip service to acknowledging the arguments made by other people (mentioning Mach-Zehnder interferometers occasionally), but show not the slightest glimmer of comprehension of the actual points being made.
I’m sick of this. If you’ve really got a killer argument here, propose some sensible experiment that can actually be done that would settle the question. Concisely, if you can manage it.
If you can’t come up with such an experiment, go plague someone else’s comment section.
Chad, anyone still interested: Yes, “results” are the meat of it, and below I make a stab at that. Yet “interpretation” can’t be avoided in QM, as well as the framing proposed in a theory or flavor of a theory. Some of the greats did indeed wave off (heh) “interpreting” what nature does behind closed doors and expressed something like SUAC, yet many others worried about why the presumed original superpositions didn’t maintain both (or more) states. They did often think about what was “out there.” You can’t even calculate what ought to come out of MZ channels unless you imagine the first BS making two separate wave trains, and take it from there. If there were nothing to perplex about, the “Cat” issue wouldn’t have been bandied around like a problem by plenty of others long before me. I criticized “decoherence” theory ideas (from reading various accounts, not just yours – say DT for short) because of two main complaints. They weren’t just my own; I read similar from Roger Penrose in SOTM. As for “progress,” it’s not a one-way street: I too feel like nothing gets across. Maybe this will help, but you have to listen too.
The first complaint was the idea that muddying up the coherence of superposed and intermingled quantum states (typically, by the environment) would somehow explain the picking out of one state and the discarding of the other one/s in the course of the measurement/interaction process. DT refers to the statistics being different for coherent states (if you prefer, “waves”) than for incoherent states, and implies that the latter sort of statistics somehow explains or gets us to the “classical world” and the disappearance of the “lost state” in the final result.
I said no, because the original collapse question was: what intervenes in the superpositions/waves to cause “statistics” to come out of it? Sure, “that’s what we get,” but if you’re going to “explain” why, you can’t take it for granted to start with. That’s a fundamental logical point. I don’t see how to make it any more clear or fundamental. Muddying up the phases of waves, etc., just makes for more complex and messy wave patterns and superpositions. OK, they don’t interfere, but that doesn’t imply an intrinsic mechanism for a favored state to win a sort of struggle. The collapse that picks favored versus discarded states from the combination must come from something more than just their being jumbled together in a messy way.
Empirical support for my critique? I’ll try, but as a critic – not a provider of an alternative theory other than QM’s prior mysteriousness – it is not as incumbent on me to show evidence. If DT can show how superposed states must settle down according to actual rules about influences, rather than perplexing definitions of “measure” and “observer,” then it can make a contribution. Even then it isn’t really saying how nature “gets rid of” the lost alternative states. Ironically, DT must then not just be an interpretation of existing QM, since old QM didn’t actually calculate “reduction” as a process in a systematic way – hence the SC paradox as given.
Anyway, here’s a possible example I mentioned earlier. In it, we still have the sudden detection event and discarding of the other wave/state even with the cleanest of coherent arrangements. The MZ can be set up with such a phase shift as to allow a specified proportion of A:B channel output which is not 50/50. It shows that interference happens, since the result is not the 50/50 we’d get if particles bounced off the splitters, yet there is still not a guaranteed result. So let’s say 64:36. Well, the very coherent wave of one photon goes through the MZ, and exits the recombiner, and we don’t know which detector, DA or DB, will be set off. If we imagine “reality” at all, we know there should be a WF coming out of A and one coming out of B. Maybe the detectors are very far away. So I am watching near B and then, say, a “click” happens there. Those detectors are far apart; they and the waves reaching them were coherent and don’t interact with each other, etc. You might say the “decoherence” is what happens inside the detectors or my mind etc., but I don’t see any progress in throwing it up into that end-stage muddle. I experience noting the “B” click long before I get to the vicinity of A or talk to an assistant at A, etc. I think something very quickly “happens” at B and other times at A – that’s when and where the “collapse” occurs, in a tiny interval, so I see a flash or click right then and there.
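(For reference, the arithmetic of an unbalanced MZ can be sketched like this – a toy calculation that assumes ideal, lossless 50/50 beamsplitters, with the 64:36 target taken from Neil’s example:)

```python
import numpy as np

rng = np.random.default_rng(3)

# Ideal Mach-Zehnder: with relative phase phi between the arms, the
# output probabilities are cos^2(phi/2) and sin^2(phi/2).  Idealized toy.
p_A_target = 0.64
phi = 2 * np.arccos(np.sqrt(p_A_target))  # the phase that gives a 64:36 split

p_A = np.cos(phi / 2) ** 2
p_B = np.sin(phi / 2) ** 2

# One photon at a time: each run produces exactly one click, A or B.
clicks = rng.choice(["A", "B"], size=10_000, p=[p_A, p_B])
print(f"p_A = {p_A:.2f}, p_B = {p_B:.2f}")
print("fraction of A clicks over 10,000 single-photon runs:", np.mean(clicks == "A"))
```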
The other complaint I have is the collusion of DT and MWI. I do not find any coherence – pardon the pun – in the apologies for MWI. I know we start with superposed states, and “I” see one of them go away and leave the other. That makes my “measurement”. The idea that muddled, incoherent mixing of these states can of its own accord run one of them off into “another universe” or whatever it is, again, makes no sense. It is justified by appeal to sentiments of distaste for the other ideas of collapse – but that distaste is not Nature’s obligation to service. There is a direct contradiction IMHO, as I’ve said; to save space one may click here: http://tyrannogenius.blogspot.com/2008/11/open-forum-dish-against-or-defend-many.html. (But quick summary: Having two worlds doesn’t allow e.g. a 70/30 chance of branching. But infinite sets can’t be compared, so defenders are stuck with some arbitrary large finite number of branches to get a net effective probability.) So here’s an empirical challenge: can MWI supporters find evidence that those other worlds exist? Actually bring back some rocks maybe, or an actual notebook that’s just like one here up until the point where the other detector showed the click …
States “can’t interfere” all the time; it doesn’t do weird things to them. It was never before considered an excuse for the two not to continue coexisting “here” in a messy conjunction, in the one universe that we know of. Just look at H and V polarization, which simply maintain their own identities. I hear the phrase “the essence of QM is interference” – but the essence of QM is “statistics” being plucked out of waves. It is the very fact of that, not the almost banal observation that coherent waves produce different statistics than incoherent ones. Sure they do, but only if something else makes them produce statistics at all.
Finally, since this thread was set up as a sort of ghetto/playground for me, it seems pointless and a bit unfair to pull the plug from my not living up to your expectations. You like and agree with DT, I don’t, and neither of us wants to back down. So there’s no more inherent reason for me to change than you. I believe it is still “the quantum measurement paradox.”
The first complaint was the idea that muddying up the coherence of superposed and intermingled quantum states (typically, by the environment) would somehow explain the picking out of one state and the discarding of the other one/s in the course of the measurement/interaction process. DT refers to the statistics being different for coherent states (if you prefer, “waves”) than for incoherent states, and implies that the latter sort of statistics somehow explains or gets us to the “classical world” and the disappearance of the “lost state” in the final result.
That is not the claim made regarding decoherence, and never has been. In fact, if you read the post up above, you will find that there is no point at which I claim that decoherence explains the “picking out of one state and the discarding of the other one/s.”
This is why I am so frustrated and annoyed about this discussion. I have stated my position several times, at some length, and you continue to attribute to me (and others) positions which I do not hold, and have never advocated. You’re locked into some particular view of what your opponents are saying, and will not be budged from it.
This is why I slapped you down in a previous thread (when you were playing this bullshit game while I was out of town, and not able to respond), this is why I have set up this thread, and this is why I am frustrated and annoyed enough to use phrases like “bullshit game.”
This last comment was just shy of 1200 words– in fact, it’s longer than the post to which it is appended. And nothing in it suggests that you read what I actually wrote before uncorking a stock rant.
Anyway, here’s a possible example I mentioned earlier. In it, we still have the sudden detection event and discarding of the other wave/state even with the cleanest of coherent arrangements. The MZ can be set up with such a phase shift as to allow a specified proportion of A:B channel output which is not 50/50. It shows that interference happens, since the result is not the 50/50 we’d get if particles bounced off the splitters, yet there is still not a guaranteed result. So let’s say 64:36. Well, the very coherent wave of one photon goes through the MZ, and exits the recombiner, and we don’t know which detector, DA or DB, will be set off. If we imagine “reality” at all, we know there should be a WF coming out of A and one coming out of B. Maybe the detectors are very far away. So I am watching near B and then, say, a “click” happens there. Those detectors are far apart; they and the waves reaching them were coherent and don’t interact with each other, etc. You might say the “decoherence” is what happens inside the detectors or my mind etc., but I don’t see any progress in throwing it up into that end-stage muddle. I experience noting the “B” click long before I get to the vicinity of A or talk to an assistant at A, etc. I think something very quickly “happens” at B and other times at A – that’s when and where the “collapse” occurs, in a tiny interval, so I see a flash or click right then and there.
I don’t see what point you’re trying to make, here. Yes, there is some probability of detecting the particle at one detector or the other, and when you do, the probability of detecting it at the other detector goes to zero. That’s how single-particle detection works. You can do the same thing with classical particles– it’s called “pachinko.”
I don’t see anything here that presents a devastating critique of anything. Is the problem supposed to be the instantaneous change? That’s an unavoidable part of the theory– quantum mechanics is non-local, as we know from Bell’s theorem and the many experiments testing it.
The other complaint I have is the collusion of DT and MWI. I do not find any coherence – pardon the pun – in the apologies for MWI. I know we start with superposed states, and “I” see one of them go away and leave the other. That makes my “measurement”. The idea that muddled, incoherent mixing of these states can of its own accord run one of them off into “another universe” or whatever it is, again, makes no sense. It is justified by appeal to sentiments of distaste for the other ideas of collapse – but that distaste is not Nature’s obligation to service. There is a direct contradiction IMHO, as I’ve said; to save space one may click here: http://tyrannogenius.blogspot.com/2008/11/open-forum-dish-against-or-defend-many.html. (But quick summary: Having two worlds doesn’t allow e.g. a 70/30 chance of branching. But infinite sets can’t be compared, so defenders are stuck with some arbitrary large finite number of branches to get a net effective probability.) So here’s an empirical challenge: can MWI supporters find evidence that those other worlds exist? Actually bring back some rocks maybe, or an actual notebook that’s just like one here up until the point where the other detector showed the click …
Again, you’re working from a misconception regarding Many-Worlds, based on pop-science accounts of it. And again, you do not appear to have read any of the several posts I’ve written about this topic.
For about the sixth time, the “many worlds” of Many-Worlds are not real universes. When a measurement is made, that does not, in fact, split the initial wavefunction into two different wavefunctions occupying different universes.
The whole point of the theory is that there is no collapse. There is one wavefunction for the universe before the measurement, and there is one wavefunction for the universe after the measurement. The components of the wavefunction get a little more complicated– they now include the state of the detector and the observer entangled with the state of the original particle– but all of the pieces of the wavefunction that were there before the measurement are there after the measurement.
The misleading “separate universes” business is because those different pieces of the wavefunction no longer interfere with one another, which is the primary observable consequence of a system being in a superposition state. They no longer show interference effects because of decoherence, due to random and fluctuating interactions with a larger environment that cause phase shifts that wipe out the interference.
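Schematically, in standard textbook notation (with |L> and |R> the two particle states, |D> the detector, and |E> the environment), the measurement interaction looks like this:

```latex
% Before measurement: particle superposition; detector and environment ready.
\bigl( a\,|L\rangle + b\,|R\rangle \bigr)\,|D_0\rangle\,|E_0\rangle
\;\longrightarrow\;
a\,|L\rangle|D_L\rangle|E_L\rangle \;+\; b\,|R\rangle|D_R\rangle|E_R\rangle
% Both branches are still in the one wavefunction; any interference
% between them is weighted by the overlap <E_L|E_R>, which decoherence
% drives toward zero.
```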
It is unreasonable to ask people to “find evidence that those other worlds exist” because there are no literal other worlds. It’s a nonsensical request.
Finally, since this thread was set up as a sort of ghetto/playground for me, it seems pointless and a bit unfair to pull the plug from my not living up to your expectations.
I’m threatening to pull the plug because you’re not arguing in good faith. Every comment you post is functionally identical to the first comment you posted, and shows only the most superficial signs of attempting to understand the arguments for the other side. You’re attempting to “win” the argument through Proof by Attrition, repeating the same statements over and over until your opponents get fed up and leave.
That’s not a tactic of someone interested in rational debate, that’s a tactic of Internet trolls. And this is a blog, not a bridge– I have no interest in hosting trolls.
Perhaps decoherence is easier to understand if we look at it in the density matrix formulation rather than in the wavefunction formulation. It has been a while, so pardon any mistakes.
So, instead of the single particle wavefunction |Ψ>, look at the density matrix |Ψ><Ψ|. Expand it out in some basis {b_i} (e.g., position eigenstates, or momentum eigenstates), to get something like Sum A_ij |b_i><b_j|.
What looked coherent in preview has been utterly mangled when posted. Sigh!
To try again, consider the density matrix |Ψ><Ψ|. Expand it in some basis {b_i}, to get something like Sum A_ij |b_i><b_j|. It is the off-diagonal terms that make this density matrix different from a classical probability distribution. The physical processes behind decoherence kill the off-diagonal terms, and the resulting effective density matrix is of the form Sum C_i |b_i><b_i|, which is a classical probability distribution over the b_i.
The slight mystery is that this is not invariant – take a state and decohere it in the momentum basis, and the result is different than if we did so in the position basis. Here the underlying physics that results in decoherence is important, and I believe, because of the locality of interactions, the relevant basis would typically be the position basis.
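A toy 2x2 illustration of both points – killing the off-diagonal terms, and the basis dependence – as a sketch, not a model of any real environment:

```python
import numpy as np

# Density-matrix toy: rho = |psi><psi| for an equal superposition, then
# "decoherence" modeled crudely as zeroing off-diagonals in a chosen basis.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # full off-diagonal coherences present

def decohere(rho, basis):
    """Zero the off-diagonal terms of rho in the given basis (columns)."""
    rho_b = basis.conj().T @ rho @ basis   # transform into that basis
    rho_b = np.diag(np.diag(rho_b))        # keep only the diagonal
    return basis @ rho_b @ basis.conj().T  # transform back

z_basis = np.eye(2)
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

print(decohere(rho, z_basis))  # a classical 50/50 mixture of the z states
print(decohere(rho, x_basis))  # the pure state survives: it was diagonal there
```

Decohering in the z basis turns the superposition into a classical mixture, while decohering in the x basis leaves it untouched, because the state was already an eigenstate there; that is exactly the basis dependence noted above.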
How about things that have no position basis – e.g., spin? How does it decohere? What is the natural basis in which to decohere it? I am not sure. Perhaps Chad, if he’s paying attention to this ancient thread, can answer.