I’ve written before about the problem of having in-between views on controversial subjects in blogdom. This is something that also comes up in Jessica’s excellent entry on online culture, and has been scientifically demonstrated in political contexts.
I’m somewhat bemused, then, to see the same thing happen in a physics context. A while back, I got an email asking about quantum foundations that read in part:
I’m very keen to understand why you and Andrew Thomas reject [the Many-Worlds Interpretation of QM].
I’d be very happy if you’d take a few minutes to try to describe why you think MWI is wrong and also what you think is more likely to be true and why.
(I added the link, for context.)
This is kind of amusing to me, as I have previously taken some flak here for being excessively pro-MWI, in the view of some readers. I just can’t win.
My real view is, alas, kind of wishy-washy: I’m agnostic about quantum interpretations, mostly because as far as we know, they’re all meta-theories, not proper scientific theories. There is no experimental test known that clearly favors one interpretation over another, so which one you like is ultimately a question of taste. They’re kind of fun to talk about, but absent a way to distinguish between them, they’re not more than that.
Another way of saying that would be to say that I subscribe to the “Shur Up and Calculate” interpretation (a phrase that apparently originates with David Mermin in a 1989 column in Physics Today, so please don’t attribute it to Feynman). The concrete predictions of quantum physics are tested to something like 14 decimal places, and the choice of interpretation doesn’t affect any of those results, so just focus on what works, and leave the meta-theorizing to philosophers.
The clearest statement I have about what Many-Worlds is really about is this old post on MWI and decoherence, which can be found in a much cuter canine version in How to Teach Physics to Your Dog (available wherever books are sold, hint hint). That said, here are a few general things I will say about it:
- If you think about what the MWI actually says, particularly in the modern versions involving decoherence, it’s pretty elegant and not at all unreasonable. A lot of the arguments against it are based on confused misreadings of what it’s about, and end up constructing elaborate refutations of things that are not actually part of the theory.
- Sadly, a lot of the pro-MWI stuff out there is also pretty bad, putting forth some pretty outlandish claims, or offering hand-waving explanations of the theory that make it more confusing than it needs to be. Some of this material is taken way more seriously than it ought to be, in my opinion.
- In the end, the modern interpretation boils down to having a wavefunction with multiple components that are prevented from influencing one another through random and unmeasured interactions with a fluctuating environment. These are not parallel universes in the Star Trek sense, with entirely separate near-duplicate copies of every material object, but rather superpositions of states of a single set of material objects. Far too much is made of the “parallel universe” angle.
- Most of the objections to the theory are fundamentally just aesthetic objections: all those extra wavefunction components seem like an awful lot of overhead to be carrying around; other people object to the fact that the theory contains branches describing really improbable events. The people making these arguments get very worked up about them, but I don’t find them particularly convincing.
- The most serious real problem for the theory has to do with figuring out the origins of the measurement probabilities observed by people within the theory (that is, given that all possible outcomes occur somewhere, why do we see some events occurring with higher probability than others). I’m not entirely sure I understand this objection, either, but it’s regarded as important by many people who have thought deeply about this.
- Nothing you can do will allow you to shift your perception from one branch of the wavefunction to another, no matter how much you want to be in a more favorable universe, or how much time you spend meditating, or what drugs you take. Nothing. Anybody who claims otherwise is a charlatan and a crank and should not under any circumstances be given any money.
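The decoherence story in the bullets above can be sketched numerically. Here is a minimal toy model (my own illustration, not anything from the original post): a single qubit in an equal superposition, where "random and unmeasured interactions with a fluctuating environment" are modeled as dephasing that suppresses the off-diagonal terms of the density matrix. That suppression is exactly what stops the branches from interfering, without any new matter appearing anywhere.

```python
import numpy as np

# Equal superposition |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # pure-state density matrix

def dephase(rho, gamma):
    """Suppress the off-diagonal (coherence) terms by exp(-gamma),
    modeling random, unmeasured phase kicks from the environment."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma)
    out[1, 0] *= np.exp(-gamma)
    return out

def interference_signal(rho, phi):
    """Probability of finding |0> after a phase shift phi and a
    Hadamard (50/50 beamsplitter-like) recombination."""
    phase = np.diag([1.0, np.exp(1j * phi)])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = H @ phase
    rho_out = U @ rho @ U.conj().T
    return rho_out[0, 0].real

# gamma = 0: full-visibility fringes, P = (1 + cos(phi))/2, running 1 -> 0 -> 1
# large gamma: fringes wash out, P -> 1/2 for every phi (branches can't interfere)
for gamma in (0.0, 1.0, 20.0):
    probs = [interference_signal(dephase(rho, gamma), phi)
             for phi in np.linspace(0, 2 * np.pi, 5)]
    print(gamma, np.round(probs, 3))
```

Note that nothing in the fully dephased state singles out one branch as "real": both components are still there in the density matrix, they just no longer affect one another, which is the sense in which the branches are not Star Trek-style duplicate universes.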
I’ll also say that my opinion of the MWI improved dramatically when I started reading more about it in order to write the book, so I would recommend that to anyone who is interested in the subject. Sadly, though, most of the popular treatments of it are pretty bad, and even some of the things you’ll find in journals are dodgy, so it’s hard to make concrete recommendations (other than my own book, of course…).
Feynman said something that inspired me to be more agnostic about quantum interpretations. He described three different ways of calculating motion: force equations, field equations, and the principle of least action. All three work equally well in classical mechanics, but the force-equation picture doesn’t transfer well to modern physics, because action-at-a-distance forces are nonlocal. The moral of the story is that even though different interpretations make the same experimental predictions, they tend to suggest different directions for future theories. If a theorist wants to cast a wide net, s/he should keep multiple interpretations in mind.
“The concrete predictions of quantum physics are tested to something like 14 decimal places”. That would be QED, I think, in a very specific regime of experiments. People working with 2-dimensional Hilbert space models don’t get that kind of accuracy, nor do people working with QCD models. What’s the standard that you aim for in AMO? I don’t think you have to get 14-digit accuracy from your model to publish, right? I’d like to see that as the subject of a separate post, perhaps.
It’s never quite clear to me “what works”. Reading Philosophy of Physics helps less than I wish it might with the question, insofar as Philosophers are often either too close to the Physics to have a different perspective or too engaged with the History or Sociology of Physicists. On the other hand, identifying what exactly “what works” means is, I suppose, a hard problem.
I think your issues with the multiple not-quite-right and sometimes obviously-not-right presentations of MWI are pretty sensible, but that every interpretation can manage only an OK story for QM is perhaps exactly the problem.
And the Winner is… Many-Worlds! (the many-worlds interpretation wins outright given the current state of evidence):
http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/
Here is more, Eliezer contends many-worlds is obviously correct:
http://bloggingheads.tv/diavlogs/21857?in=29:28&out=38:11
Could you perhaps elaborate on just how this is a distinction with a difference? If all you are saying is that the different “branches” are essentially independent due to decoherence then one could hardly disagree but I sense you are saying more than this.
“The concrete predictions of quantum physics are tested to something like 14 decimal places”. That would be QED, I think, in a very specific regime of experiments.
Well, yeah, because QED corrections become important well before that level. If you’re going to exclude QED predictions from consideration, then you’re limited to something around a tenth or a hundredth of a percent.
People working with 2-dimensional Hilbert space models don’t get that kind of accuracy, and nor do people working with QCD models. What’s the standard that you aim for in AMO?
It depends on the AMO experiment. If you’re doing something like spectroscopy of the Lamb shift, you need to be at the ppm level at least, ppb or better if you want to compete with the single-electron measurements of g-2. Atomic clock type experiments are looking for accuracy at the part-in-10^16 level, though you could argue that they’re not really comparing to theory, in that I doubt anyone has done an ab initio calculation of the Cs ground-state splitting to that level; rather, people use the measurements as an input to theory.
If you’re doing a proof-of-principle BEC experiment, you can get away with accuracy of a few percent. That’s the level my own work has been at, because I’m not temperamentally suited to chasing systematics at the ppb level.
Well, as usual I can’t resist adding my two-cents when it comes to quantum foundational topics.
“I’m agnostic about quantum interpretations, mostly because as far as we know, they’re all meta-theories, not proper scientific theories. There is no experimental test known that clearly favors one interpretation over another, so which one you like is ultimately a question of taste. They’re kind of fun to talk about, but absent a way to distinguish between them, they’re not more than that.”
You know, I couldn’t disagree more about this. In fact, I find it truly bizarre that quantum theory is pretty much the ONLY scientific theory where people do not think the interpretation of the theory is an integral part of the theory itself. At least, I can’t think of any other examples.
Whilst it is true that an interpretation must reproduce the confirmed predictions of QM, they can differ quite a bit outside of that. The obvious example is spontaneous collapse theories, but there is also nonequilibrium Bohmian mechanics. Some may argue that these are different theories rather than different interpretations, but I think the dividing line between different theories and interpretations is rather blurry. It is not guaranteed that the interpretations will all agree when we are outside the realms of established physics, e.g. in quantum gravity.
The other thing that I think interpretations do for you is that they give different intuitions for how to proceed theoretically on certain problems. For example, the approach one takes to the emergence of classicality or to quantum chaos depends heavily on whether you view the state vector as an epistemic state (state of knowledge) or an ontic state (state of reality). This is one of the key issues that interpretations differ on.
In other words, interpretations CAN make a very real difference to how one does physics, and on controversial issues they probably SHOULD make a difference.
“Another way of saying that would be to say that I subscribe to the “Shur Up and Calculate” interpretation”
Especially if your calculations involve a lot of representation theory (http://en.wikipedia.org/wiki/Schur%27s_lemma).
“If you think about what the MWI actually says, particularly in the modern versions involving decoherence, it’s pretty elegant and not at all unreasonable.”
I actually agree with this, so long as one does not confuse “not at all unreasonable” with “definitely right”, which some experts are wont to do.
“A lot of the arguments against it are based on confused misreadings of what it’s about, and end up constructing elaborate refutations of things that are not actually part of the theory. Sadly, a lot of the pro-MWI stuff out there is also pretty bad, putting forth some pretty outlandish claims, or offering hand-waving explanations of the theory that make it more confusing than it needs to be. Some of this material is taken way more seriously than it ought to be, in my opinion.”
In my opinion, the reason for this is that there are a vastly larger number of physicists who think they are qualified to pontificate on interpretations than there are physicists who have seriously studied the existing literature and seriously thought about the subject. This could be remedied by actually teaching the subject properly in physics departments. To do so, they would have to hire physicists who are capable of doing so, i.e. all physics departments should offer me a faculty position immediately!
“These are not parallel universes in the Star Trek sense, with entirely separate near-duplicate copies of every material object, but rather superpositions of states of a single set of material objects. Far too much is made of the “parallel universe” angle.”
I disagree strongly with this. MWI reasoning implies that the other universes must have EXACTLY the same status as our universe, since there is nothing in the physical equations to distinguish them, and MWI is all about taking those equations as a deadly serious literal description of reality. Thus, if MWI is correct, there are other people out there reading other versions of your blog post with just as much of a claim to existence as we have.
If your objection is that these universes are not of the “everything that is logically possible happens in one of them” type, or that you will not be able to meet the evil anti-rationalist version of Chad who sells homeopathic cures based on the fluctuations of the quantum energy field, then I agree, but it is inconsistent to argue that they have a different status from the world that we actually see around us.
“Most of the objections to the theory are fundamentally just aesthetic objections”
I don’t think the best objections are of this sort, but it is true that the best objections have not been well-articulated in print.
For my own part, I have a few problems with the basic assumptions that lead to the interpretations:
1. MWI advocates assert that the best interpretation should be the simplest possible reading of the equations of the theory. I don’t accept that it should necessarily be simple. Rather, I would prefer that it was correct.
2. MWI advocates assert that they are simply taking the equations of the theory at face value. There are a couple of problems with this. The first is that it is based on one particular formalism amongst many, i.e. the Schroedinger equation in Hilbert space rather than path integrals, Wigner functions, etc. Who is to say which formalism gives the best intuition as to the nature of reality? In some of them, the state vector does not appear as a fundamental entity, so why should it necessarily be used to determine the nature of reality? Secondly, the MWI reasoning is biased towards Schroedinger-type reasoning. In a Heisenberg-inspired approach, the state vector is a secondary object, arising from a modification of the nature of the dynamical variables. Taking these new dynamical variables to be the arbiters of reality would have an equal claim to be “reading the interpretation” from the equations, but taking that seriously would lead to a quantum-logical interpretation instead of the MWI. In other words, you can argue for quantum logic in almost exactly the same way as people argue for MWI, which is why I believe you can’t ever read the interpretation off the equations in an unambiguous way.
3. I find that an epistemic interpretation of the quantum state has more explanatory power in a large number of areas of physics, e.g. quantum information, quantum chaos, quantum measurement theory. In MWI, the state vector is ontic, so the explanatory power of the epistemic approach is puzzling and seems to require explanation, or at least it needs to be admitted that it is a huge coincidence.
“Sadly, though, most of the popular treatments of it are pretty bad, and even some of the things you’ll find in journals are dodgy, so it’s hard to make concrete recommendations (other than my own book, of course…).”
From the scientific literature side of things, I would say that David Wallace’s papers on the subject are required reading. If you are bored of reading then you can watch him talk about it instead here (http://pirsa.org/index.php?p=speaker&name=David_Wallace).
The popular accounts all suck.
My comment has been held up by the spam filter, so I’ll just say that everyone except me is definitely and absolutely wrong about everything, but you’ll have to wait to find out why.
I’m agnostic about quantum interpretations, mostly because as far as we know, they’re all meta-theories, not proper scientific theories. There is no experimental test known that clearly favors one interpretation over another, so which one you like is ultimately a question of taste.
I don’t think this is quite right, is it? Some of the arguments really are purely interpretational, but one question is physical: whether quantum evolution is truly unitary (as in MWI or any approach that views decoherence as fundamentally important in establishing a classical-looking world) or not (as in approaches that view “wave-function collapse” as a real physical process, not just a convenient approximation). I’m very much a shut-up-and-calculate sort of person, but I very very strongly favor the former (QM is really unitary, there is no such thing as “true” wavefunction collapse) point of view and think it is the only reasonable way to understand the theory. My impression is that most people who’ve given the issue much thought think evolution is really unitary and decoherence is crucial, but I not infrequently talk to people who have a very old-fashioned textbook understanding of “wavefunction collapse” as a real physical process.
What are your current thoughts on de Broglie-Bohm Pilot Wave theory, Chad? I’m sure you give your opinion elsewhere on your blog, but we all have the right to change our minds, so I’m interested in your current views.
Also, would you like to make a bet on how soon Neil Bates will join this discussion? I suppose we should establish an over/under first.
At the end of the day, my own feeling is that time spent on quantum interpretation is time better spent elsewhere (for example … laser cooling! … or the quantum Hall effect). QI is an important subject, no question, but it feels like focusing too much on a single tree while a whole forest beckons to be explored.
When or if we have a viable theory of Quantum Gravity, I believe that theory will explain QI. QI is a “why” question, and we can’t answer it until we have better data and the better theories such data spawn, that is to say more “how” answers.
Btw, which is the best popular book out there on MWI? Something by David Deutsch, the Father of Qubit Theory? Seth Lloyd is also into quantum computing, and I believe he disagrees.
Finally, congrats on taking the middle road, Chad. That’s darn refreshing for a change in the increasingly polarized extremist community, and I look forward to exploring your links. Before reading this page, I was dead set against MWI. Now I’m wondering …
Well, as usual I can’t resist adding my two-cents when it comes to quantum foundational topics.
Hey, I appreciate having commentary from people with actual knowledge, as opposed to my half-informed blathering…
Whilst it is true that an interpretation must reproduce the confirmed predictions of QM, they can differ quite a bit outside of that. The obvious example is spontaneous collapse theories, but there is also nonequilibrium Bohmian mechanics. Some may argue that these are different theories rather than different interpretations, but I think the dividing line between different theories and interpretations is rather blurry. It is not guaranteed that the interpretations will all agree when we are outside the realms of established physics, e.g. in quantum gravity.
I agree that the various interpretations are radically different in how they talk about what’s going on, but unless that leads to testable predictions, I have a hard time getting invested in it. I agree that the language people use to talk about what’s going on does have an effect on the problems they choose to work on, and how they choose to approach them, but in the end, everybody seems to end up working on the same handful of practically relevant problems.
The “no experimental test known” in my original text is a hedge against the possibility of future divergence of predictions. I suppose you could argue that there’s already one example of a divergent prediction, namely Penrose’s notion that quantum gravity automatically collapses the wavefunction of anything bigger than a Planck mass, but that’s still a good ways away from being testable.
Me: “These are not parallel universes in the Star Trek sense, with entirely separate near-duplicate copies of every material object, but rather superpositions of states of a single set of material objects. Far too much is made of the “parallel universe” angle.”
Matt: I disagree strongly with this. MWI reasoning implies that the other universes must have EXACTLY the same status as our universe, since there is nothing in the physical equations to distinguish them, and MWI is all about taking those equations as a deadly serious literal description of reality. Thus, if MWI is correct, there are other people out there reading other versions of your blog post with just as much of a claim to existence as we have.
If your objection is that these universes are not of the “everything that is logically possible happens in one of them” type, or that you will not be able to meet the evil anti-rationalist version of Chad who sells homeopathic cures based on the fluctuations of the quantum energy field, then I agree, but it is inconsistent to argue that they have a different status from the world that we actually see around us.
I’m arguing against something simpler and stupider, I think, more in the “where does all the extra matter come from?” sort of line. Some explanations of the theory give the strong impression that every time an electron spin is detected an entirely new universe comes into existence, with a whole new electron, new measuring apparatus and experimenters, etc. That is, where you originally had one universe’s worth of stuff, you now have two. Which leads to nonsense like people from one universe moving into another, including extreme cases like the “carbon from a different universe has a different electronic structure” foolishness from Stephenson’s otherwise enjoyable Anathem, among others.
Any one of the wavefunction branches in a MWI theory behaves as its own self-contained universe, but ultimately, they’re all talking about the same set of stuff. We’re not really doubling the mass-energy of the multiverse every time we measure an electron spin.
From the scientific literature side of things, I would say that David Wallace’s papers on the subject are required reading. If you are bored of reading then you can watch him talk about it instead here (http://pirsa.org/index.php?p=speaker&name=David_Wallace).
The popular accounts all suck.
You forgot to qualify that by excepting my book… See if I fish your next comment out of the spam filter, you ungrateful lout…
I’ll look up Wallace’s stuff, though at the moment, I’m reading up on some other branches of physics. Do you have any recommendation for a good basic intro to Bohmian mechanics? I keep getting tripped up by the fact that I know very little about that particular approach, and it would be nice to have a clearer idea of what’s going on there.
Since a proposed experiment of mine (introduce decoherence into one leg of a Mach-Zehnder interferometer (MZI), then recombine the BS2 outputs at a third beamsplitter, BS3; see link) about decoherence caused much (IMHO unwarranted) controversy here, I want to review just what I really claimed, the implications of it, and the arguments about it.
1. OK, I slapped up something at first with inadequate editing and textual/graphical support, and made it too much of a bait for reaction. (Well, that did shoot me up top in searches.) My apologies on all counts, but let’s move on.
2. I was defending traditional QM (wave functions stay superposed and spread out until the final measurement event, no matter what happens in between) against the radical decoherence interpretation, not challenging traditional QM. The radical DI as defended by W. Zurek et al., as best I can parse it, implies that decoherence actually generates effective “mixtures” somehow and somewhere in between initiation and final measurement. IOW, decoherence accounts for why we don’t see macroscopic superpositions – and even somehow solves, or seems to solve, the measurement problem. No, it doesn’t.
I also appreciate that Chad really isn’t (and maybe never was) one of them, but his MZI explanation (in his linked post) of how it was supposed to “work” fit right into my concept of how to test it experimentally. Of course, I mean test the idea that decoherence can lead to mixtures. Decoherence certainly happens and has consequences. The debate is over just what it leads to. Specifically, does it just lead to a messier superposition pattern (as I say), or does it really make for mixtures and account for our not having macro superpositions?
3. The first confusion, about unequal reflectivities in BS1, didn’t help, but the point finally got across. We all ended up agreeing that I did not make a math mistake after all, and that my claimed results would occur under orthodox QM. However, Chad didn’t agree that they made any significant point. Well, again: you can verify that if decoherence produces a “mixture” (or FAPP an “effective” one) of photons at BS2 (each one out one path or the other, but not a superposition from both channels), then the final statistics at BS3 simply are not the same. I said the results would follow traditional QM. That means modeling each photon as a superposition until it reaches the final detectors past BS3. Please, work it out. Different results distinguish what otherwise would presumably just be “interpretations” with no empirical consequences. (Analogy: Bell explaining how entanglement would produce certain results otherwise not possible, which surprised some who figured there was no way to tell.) So my real problem was getting people to appreciate that decoherence doesn’t initiate selection of a state, like “collapse”, in advance of the final measurement.
4. There are other things wrong with DI/MWI anyway.
a. Chad (and I figure by now, you can start talking to me?): as for your question about why there’s a problem in MWI of finding correct statistics in one world, I think it boils down to how many splits you’d need to show fractional chances. It can’t be 50/50 into “two worlds” for an 11% chance of A and 89% chance of B, since “my” chance of hitting either world is 50/50 (?) So do you have an infinite number of splits, with Hilbert-hotel-style issues of getting proportionalities out of comparing infinite sets? Or does the universe pick “one million worlds” and hope we don’t check more carefully? See?
b. I’ve brought up that if “measurement” isn’t special, then wouldn’t the first BS in an MZI split a photon into two (or more) “worlds”? But that would prevent the later interference. The MWI-ers don’t make clear just what happens at any of these junctures anyway, and that should be a real problem.
c. The case for DI is a classic circular argument. Proponents take the density matrix, in which statistics are already present and taken “for granted”, then argue that decoherence creates statistics like those of mixtures. Well sure, once you “already” have something to make statistics with in the first place! Otherwise, decoherence would just make for a messy wavefunction waiting for “something” to collapse it into one of the available states. To me, this is incredible and comes across like an SNL parody of bad analytical philosophers (if they ever did one). See for example N. P. Landsman.
I hope that helps, and invite people to take another, thoughtful look.
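For anyone who wants to take up the "please, work it out" invitation above, here is a minimal density-matrix sketch in numpy. It does not reproduce the exact three-beamsplitter layout proposed in the comment (the 50/50 beamsplitter convention and the dephasing model are my own simplifications); it just shows the bookkeeping that distinguishes a coherent superposition, a partially dephased state, and a proper path mixture at a recombining beamsplitter.

```python
import numpy as np

BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beamsplitter (one common real convention)

def dephase(rho, gamma):
    """Decoherence in one arm: path coherences decay by exp(-gamma)."""
    rho = rho.copy()
    rho[0, 1] *= np.exp(-gamma)
    rho[1, 0] *= np.exp(-gamma)
    return rho

def detect(rho, phi):
    """Probability at output port 0 after a phase phi in one arm
    and recombination at a 50/50 beamsplitter."""
    phase = np.diag([np.exp(1j * phi), 1.0])
    U = BS @ phase
    return (U @ rho @ U.conj().T)[0, 0].real

# Photon enters port 0 of BS1 and is split over the two arms.
rho_in = np.array([[1, 0], [0, 0]], dtype=complex)
rho_arms = BS @ rho_in @ BS.conj().T        # coherent superposition of the arms

# A proper mixture: each photon definitely in one arm or the other.
rho_mix = np.diag(np.diag(rho_arms))

phi = np.pi / 3
print(detect(rho_arms, phi))                # (1 + cos(phi))/2 = 0.75: interference fringes
print(detect(rho_mix, phi))                 # 0.5: a mixture shows no fringes
print(detect(dephase(rho_arms, 1.0), phi))  # (1 + exp(-1)*cos(phi))/2: reduced visibility
print(np.allclose(dephase(rho_arms, 60.0), rho_mix))  # True
```

Within this toy model, a coherent superposition and a proper mixture do give different statistics at the recombining beamsplitter, while a fully dephased superposition has a density matrix identical to the mixture (the last line), so the interesting regime for any such test is partial decoherence, where fringe visibility shrinks but does not vanish.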
“Do you have any recommendation for a good basic intro to Bohmian mechanics?”
Making recommendations is a tough call. As soon as Rob Spekkens and John Sipe get around to publishing the textbook on quantum foundations and interpretations that they have been writing for years it will get much easier. Personally, I had the good fortune to be sitting in Perimeter Institute for a few years at a time when almost every expert in the foundations of QM passed through, so I learned more by osmosis than by reading.
On that note, you could do worse than watching Shelly Goldstein’s lectures from the 2005 interpretations course:
http://pirsa.org/05020002/
http://pirsa.org/05020005/
They are, however, high on rhetoric and low on equations. If you don’t have the tolerance for that then you could download any one of the umpteen papers by Goldstein and collaborators that provide introductions to the theory (see http://www.mathematik.uni-muenchen.de/~bohmmech/BohmHome/bmstartE.htm for starters).
There is also a book if you want a more in depth treatment (http://www.springer.com/physics/quantum+physics/book/978-3-540-89343-1).
These resources all take a particular approach to Bohmian mechanics that is closer to what de Broglie originally proposed than to what you will find in Bohm’s papers. Personally, I think this approach is the best motivated and easiest to understand. If you look at other literature you will see things like “quantum potentials” and things that look like Hamilton-Jacobi equations. If you want to understand this stuff as well then I’d recommend starting with Bohm’s original two papers on the subject, but it’s not really necessary if you just want a conceptual overview.
I should also mention that I side with Valentini on the issue of quantum equilibrium, so I’d recommend looking at a couple of his papers as well, but that’s a bit beyond learning the basics.
“You forgot to qualify that by excepting my book…”
I do like your book, but I still think that there is no popular book that covers the current state of play of all the major interpretations in an accurate and unbiased way. This is the sense in which I meant that all popular accounts suck. Your book doesn’t suck, but it is not the best possible introduction to interpretations. It just has different aims.
I find it interesting to see all the debate about “interpretations”. Shut up and measure! Does QM not explicitly tell us that all we can know about nature is what we measure, and that reality is not a thing out there for us to know independent of measurement (and humans are NOT special here)? I don’t need MWI to have decoherence. These arguments can point in new directions, but let’s not lose the lesson that Bell taught us! It’s either non-local, or there is no “reality” until we measure. Relativity seems to say non-locality is out, so…..
Now Bell found a loophole in von Neumann’s proof, and perhaps there is one in Bell’s. But stop looking in the 32nd decimal place for some small effect. For chrissakes, by then quantum gravity must be taken into account, as well as other things.
QM says that if you start here, the probability of getting to another place in state space is given by the usual rule. Intermediate measurements/interactions (same thing, really) restrict the number of intermediate paths.
The lamest thing I’ve read was a piece in Sci. Am. on counterfactual stuff, written by a philosopher, and a writer who writes novels with scientific themes. Not even wrong IMHO.
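The “usual rule” mentioned in the comment above is the Born rule, and the effect of an intermediate measurement on the available paths can be shown in a couple of lines: without the measurement you add the path amplitudes and then square; with it you square each amplitude and then add. A toy two-path example (the amplitudes are made-up numbers, chosen only so that the probabilities come out normalized):

```python
import numpy as np

# Two "paths" from an initial state to a final detector, with complex
# amplitudes a1 and a2 (hypothetical values, for illustration only).
a1 = 0.6 * np.exp(1j * 0.0)
a2 = 0.8 * np.exp(1j * np.pi)  # relative phase pi between the paths

# No intermediate measurement: amplitudes add, THEN square (interference).
p_coherent = abs(a1 + a2) ** 2

# Intermediate which-path measurement: square each amplitude first,
# then add the probabilities (interference is destroyed).
p_measured = abs(a1) ** 2 + abs(a2) ** 2

print(p_coherent)  # 0.04 -- strong destructive interference
print(p_measured)  # 1.0  -- no interference, probabilities just add
```

The gap between the two numbers is the whole content of “intermediate measurements restrict the paths”: measuring which path was taken replaces a coherent sum of amplitudes with an incoherent sum of probabilities.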
MWI is an ontological interpretation. To put it better: according to MWI, the wavefunction is something ‘real’, physical. Now it would be interesting to know how MWI can explain the Renninger negative-measurement experiment: http://www.npl.washington.edu/TI/TI_40.html#4.1
“Does QM not explicitly tell us that all we can know about nature is what we measure, and that reality is not a thing out there for us to know independent of measurement (and humans are NOT special here)?”
No it does not. QM tells us the probabilities for outcomes of any experiment we do. That is all. It doesn’t say anything one way or the other about whether there is a deeper level of reality. The rest is just a result of Copenhagen brainwashing.
“These arguments can point in new directions, but let’s not lose the lesson that Bell taught us! It’s either non-local, or there is no “reality” until we measure. Relativity seems to say non-locality is out, so…..”
If you think that is the lesson that Bell taught us then you have clearly never read any of Bell’s papers. He argued that the measurement problem was a very real physical problem that needs to be solved and that orthodox quantum theory is ambiguous (see his ‘Against Measurement’ paper). After his work on Bell inequalities, he became a strong proponent of Bohmian mechanics and spontaneous collapse theories.
I’m not saying I agree with Bell’s take on things, but I do think that being a scientist and also strongly anti-realist is a bit paradoxical. What are our scientific theories intended to describe if not reality?
My preferred interpretation of QM is the statistical one (also called the ensemble interpretation): it agrees with all experiments while not making any additional unsupported claims.
MWI is by far the worst abomination of them all. It only pretends to solve the problems; in reality it just obscures them with a ton of ill-defined concepts and hand-waving.
Matt – QM and SR have taught us what we CAN know, and what we can’t. THAT is at least one of the lessons of Bell. Collapse of the wavefunction only occurs for idealized measurements; there are other types of measurements too. The density matrix is not a real thing; it is our description of what we do know. The wavefunction can’t be real, as it only exists for closed systems.
I am OK with the seeming result that reality is a slippery beast that does not exist independent of our interaction with it, nor do I think that my consciousness is special. The moon IS there when you close your eyes; the sun is scattering photons off it….
It may very well be that quantum gravity or other mechanisms cause decoherence/localization/collapse. That is why I would say we should at least know about the various ways of thinking about it.
I don’t claim the measurement problem is solved, but in the context of information theory, and realizing that measurement is just another interaction, it’s not as nuts as some folks make it out to be.
Hilbert space is a big place.
Thanks for the article, Chad. I’m certainly not a fan of the MWI, but I do think some clarification is needed in this area and this is very useful in that respect.
You say: “These are not parallel universes in the Star Trek sense, with entirely separate near-duplicate copies of every material object, but rather superpositions of states of a single set of material objects. Far too much is made of the “parallel universe” angle.” I understand where you’re coming from when you say that, in that the modern MWI is more usually described in terms of a grand superposition wavefunction of the universe, and drags in ideas from decoherence to attempt to explain why the different branches do not interfere with each other. However, if we have different branches – different possible outcomes – which can never interfere with each other, then what we have here is, to all intents and purposes, a completely different, parallel universe. Just like Matt Leifer said in his comment, “MWI reasoning implies that the other universes must have EXACTLY the same status as our universe”. It is still the unavoidable truth that the MWI leads to precisely a “Star Trek”-type multiverse (the Wikipedia article on the MWI is quite clear on this as well). We are firmly into Star Trek country – the land of the multiverse.
I suspect the wavefunction description has been favoured recently because of the bad press associated with multiverse theories, but some MWI proponents (such as David Deutsch in his excellent book “The Fabric of Reality”) are honest enough to be quite clear that MWI = parallel universes – every time a measurement is made, the universe effectively splits. The mechanism of this split is essentially irrelevant – as that Wikipedia article says, “Provided the theory is linear with respect to the wavefunction, the exact form of the quantum dynamics modelled, be it the non-relativistic Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, does not alter the validity of MWI since MWI is a metatheory applicable to all linear quantum theories.” So really I don’t care whether the particular version of the MWI claims to invoke decoherence with the environment or any other particular mechanism: the MWI = the MWI = the MWI = parallel universes. I’m totally with Matt Leifer on this one.
Your response to Matt’s comment – “Where does all the extra matter come from?” – is very interesting, as it could be viewed as a criticism of the MWI I have never heard before: where does all the matter come from indeed? However, this is not a problem if we are dealing with completely disconnected parallel universes – there is no reason to suggest that conservation laws need apply across parallel universes. So I’m kind of defending the MWI here against that particular criticism!
I make a lot of varying criticisms of the MWI on my page on quantum decoherence (thanks for the link to my site in your article), my main criticism being that in experiments in decoherence we can now view the interference terms as a reality, for example, an electric current going in two directions at once, or an atom being in two places at once, as long as they are isolated from the environment. If the interference terms had really escaped to a parallel universe then we should never be able to observe them both as physical reality in this universe. I think the MWI is outdated in this respect.
Actually, now I’ve had a chance to think about this, I think I’m probably wrong in some of what I just posted. It’s certainly very useful to think of the MWI in terms of objects in superposition states, rather than “universes splitting”, as you suggest. In which case it would be possible to find atoms being in two places at once, etc. So I’ll have to modify my webpage over that point.
My only criticism of the MWI now is that it does not consider the obvious reason why interference states disappear – the random phases of the interference components cancelling out. I see you considered this well in your earlier article. This cancellation can reduce the superposition to a single state (as shown in the animated JavaScript density matrix example at the end of my article), though I see you suggest that the superposition remains. I would disagree – the interference components genuinely cancel before appearing in physical reality.
So I still don’t rate the MWI. But thanks for a couple of very useful explanatory articles.
“My only criticism of the MWI now is that it does not consider the obvious reason why interference states disappear – the random phases of the interference components cancelling out. I see you considered this well in your earlier article. This cancellation can reduce the superposition to a single state (as shown in the animated JavaScript density matrix example at the end of my article), though I see you suggest that the superposition remains. I would disagree – the interference components genuinely cancel before appearing in physical reality.”
I’m in an airport at the moment, so I haven’t read your article as carefully as I might in more comfortable circumstances, but I don’t think we actually disagree on this. On a macroscopic level, where you need to average over many interactions, the off-diagonal terms of the density matrix do disappear, and you see a classical-looking distribution.
But that explicitly involves averaging over many states. For an individual wavefunction describing a single realization of the experiment, there is no averaging. Which means that the different branches all interfere with one another to produce a single result that, in some sense, depends on all of the interactions at once. If you could somehow keep track of the interactions for many repetitions of the experiment, you could reconstruct the relevant interference effects, but this isn’t remotely possible in any real experiment. The density matrix is a sort of bookkeeping shortcut to deal with the fact that we can’t really keep track of everything, and that we will necessarily be averaging over many slightly different realizations of the experiment.
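The averaging point can be illustrated with a toy calculation (my own sketch, not anything from a specific experiment: a two-state system in an equal superposition, with a random environmental phase assigned to each realization). Each single realization has a fully coherent density matrix; only the average over many realizations has vanishing off-diagonal terms:

```python
import numpy as np

rng = np.random.default_rng(0)

def pure_density(phi):
    """Density matrix |psi><psi| for psi = (|0> + e^{i phi} |1>) / sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

# A single realization keeps full coherence: |rho_01| = 0.5 exactly.
single = pure_density(rng.uniform(0, 2 * np.pi))

# Averaging over many realizations with random environmental phases
# (the bookkeeping step) wipes out the off-diagonal interference terms.
avg = np.mean([pure_density(p) for p in rng.uniform(0, 2 * np.pi, 10_000)], axis=0)

print(abs(single[0, 1]))  # 0.5: full coherence in one run
print(abs(avg[0, 1]))     # close to 0: coherence gone in the averaged description
```

Nothing has “cancelled” in any single run; the zeros appear only in the averaged, density-matrix description.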
If you take the core idea of Many-Worlds at face value– that is, that the wavefunction always and everywhere evolves according to the Schrodinger equation– this sort of undetected interference must be taking place all the time. The business of “splitting off” parallel universes is, in some sense, a bookkeeping shortcut– all those different branches of the wavefunction containing different measurement results and the observers who perceive them are really part of a single wavefunction, and finding the outcome of any single measurement must involve the interaction of all those terms. The random and fluctuating nature of the interactions with the environment, though, makes it impossible to detect any influence of those other pieces.
At least, that’s how I look at it these days. This is, of course, my own personal spin on Many-Worlds, but there are at least as many versions of the interpretation as there are physicists who have thought about it, so I figure that’s ok.
Hi Chad, I think I actually agree with everything you say (not entirely certain of your analysis in your third paragraph, but I don’t think I’d disagree). I certainly agree with the first couple of paragraphs of what you say – that the density matrix is a sort of “bookkeeping shortcut” because it is effectively impossible to keep track of all the billions of individual entanglements. Yes, as a result decoherence is “effectively irreversible” – though not technically impossible (this is very similar to thermodynamics – entropy could theoretically decrease, it’s just very unlikely).
I actually think reality IS a form of “averaging”: by averaging over billions of entanglements, a form of “blurry” reality emerges at the macroscopic level. It appears like everything is very real, with clear-cut objects, at our macroscopic level. It’s only when we look down to the microscopic level that we find that things are not how they seem. So I think all of your “Many Worlds” blur together and average out. And I think we seem to agree about this general idea. Like you, though, I’m not sure if this is what is meant by MWI nowadays.
Andrew Thomas, you mention that you believe reality is a form of “averaging” out “Many Worlds”.
Could you elaborate on what you meant by this?
Would this mean the other worlds do exist? Do they cancel each other out by “averaging out”, or something else?
I know David Deutsch is all about the other branches being real; in fact I’ve heard he lives his life according to how it will affect his doppelgangers…
Is this the same MWI you guys believe in?
Hi Peter, well, we know that objects “before we observe them” can apparently appear in more than one place at once (for example, a photon apparently going through two slits in the double-slit experiment), so we can describe “reality before observation” by a wavefunction – a mathematical function which does not actually tell us precisely where the photon is, but just tells us the probability of where we will find the photon. So if we consider the wavefunction as describing “reality before observation”, then we have to conclude that an object is in a strange superposition state of being in more than one place at once before we observe it.
However, we of course do not find objects in more than one place at once in our everyday experience. So what makes the multi-valued wavefunction apparently collapse to a single point? Well, the latest theory of “decoherence” says that the apparent collapse of the wavefunction to a single point is only an illusion. What happens is that the various components of the wavefunction interact with the random, complicated components of the environment (for example, when the photon hits the screen), and as this random environment is not in a coherent phase with those “interference” components, they eventually cancel out (or average out) to zero. This is all described on my page on quantum decoherence.
Hmmm, I’m still not 100% sure I fathom what you are trying to make me understand =P
So that means the other worlds “cancel out” due to the decoherence and we are left with one “real eigenstate”?
When they cancel out/average out, does that mean disappear?
So we are left with 1 “real world” due to decoherence, not collapse?
Yet the other “parts” of the wavefunction don’t exist anymore because they cancelled out?
No answer?
I’ve read your whole site (before commenting here), and nowhere are my questions answered on that website.
Peter, the decoherence enthusiasts have no coherent (so to speak) answers to your challenges and questions, as I and others have explained elsewhere. It (in general, aside from the version Andrew promotes) is a misleading circular argument that proceeds like a caricature of Wittgenstein. The very existence of the statistical results is what needs explaining, as per the collapse problem. But DI enthusiasts take the statistics for granted and just say: statistics from incoherent waves are like classical statistics, so decoherence makes the situation classical (in some sense they vaguely intimate). Not only that, they make the risible mistake of combining a series of experimental runs, as if the causal issue in a single case (despite pleas that we’re necessarily dealing with “ensembles”) were not still a problem for the theoretical model. A given instance can be grouped by choice with various other similar instances – grouping is not, and cannot be, an inherent feature of a given event.
Without collapse, however, we’d have all components of the superposition and no statistics at all. Given either coherent or incoherent waves, no statistics would occur without some extra intervention or feature of reality.
1. Coherent waves + collapse = “quantum statistics” which is just the patterns corresponding to squared amplitudes of the wave functions.
Coherent waves, but no collapse = continued Schroedinger evolution with no end predicted by the model.
2. Incoherent waves + collapse = “classical statistics” and the creation of mixtures (sometimes one thing, sometimes another – but not both at the same time, as in a superposition). But that’s only because after decoherence, the amplitudes change around from instance to instance. (It eliminates the third term in the intensity: (A + B)^2 = A^2 + B^2 + 2AB cos θ.)
Incoherent waves, but no collapse = endlessly continued Schroedinger evolution again, just a messier version. And no, the different possibilities wouldn’t cancel out. Indeed, entire waveforms can’t cancel out on net, or we could use that to, e.g., violate conservation of energy. Interference just redistributes the amplitudes.
Without collapse – however disturbing it is and however much temptation to sweep under the rug – decoherence would have to just result in changing patterns of wave amplitude that stayed waves, never in “hits” that make statistical patterns. DI is a fallacy.
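The coherent/incoherent distinction in the cases above comes down to the cross term in the standard intensity identity (a sketch of the textbook arithmetic; A and B are arbitrary illustrative amplitudes, not values from any experiment discussed here). With a fixed relative phase the cross term survives; with a randomly fluctuating phase it averages to zero, leaving the “classical” sum of intensities, but the amplitudes themselves never vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = 1.0, 0.7

# Coherent case: fixed relative phase theta, so the cross term survives.
theta = 0.0
coherent = A**2 + B**2 + 2 * A * B * np.cos(theta)

# Incoherent case: theta fluctuates randomly from run to run; the cross term
# 2AB cos(theta) averages to zero, leaving the "classical" sum A^2 + B^2.
thetas = rng.uniform(0, 2 * np.pi, 100_000)
incoherent = np.mean(A**2 + B**2 + 2 * A * B * np.cos(thetas))

print(coherent)    # 2.89 = (A + B)^2
print(incoherent)  # close to 1.49 = A^2 + B^2
```

Note that the averaging only redistributes intensity between runs; it never turns the wave amplitudes themselves into zero, which is the point of case 2 above.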
At my link, read about an experiment that would distinguish whether mixtures were literally produced by decoherence. BTW the OP argued against this proposal, under much confusion, but is not a DI enthusiast.
“When they cancel out/average out, does that mean disappear?”
“So we are left with 1 “real world” due to decoherence, not collapse?”
“Yet the other “parts” of the wavefunction don’t exist anymore because they cancelled out?”
I was out of town last week, and have been catching up this week, hence the slow reply.
In the many-worlds picture, there is no “cancellation” in the sense that you’re thinking about– the wavefunction continues to contain pieces corresponding to each of the possible measurement outcomes. We only see one of the possible outcomes because we are part of that wavefunction– the piece of the wavefunction corresponding to a coin coming up “heads” includes a sub-part describing an observer who saw the coin come up heads, and the corresponding piece where the coin came up “tails” has an observer who saw tails, and so on.
We do not see any influence of those other universes because their influence is smeared out by the influence of random and fluctuating interactions with a larger environment. If we could keep track of all possible interactions between the coin and the rest of the universe, we would be able to map out an interference pattern showing the influence of other branches of the wavefunction. As this would require keeping track of every photon and air molecule in the path of a tossed coin, though, we can never realistically expect to see this happen. People have studied this process in small systems (single atoms, single ions, etc.), though, and it all works out mathematically.
The most compact description I have of the whole shebang is in this old post, which is also linked above.
Chad, as I noted before (and really, you needn’t perpetually refuse to refer or reply to me – let’s be grownups; if I’m killfiled, I guess you’ll never see this anyway, but other commenters might find it interesting): if the separate components of the wavefunction already included the separate observations at the end, then they wouldn’t interfere to start with. So why not “separate” at the first beamsplitter of an MZ interferometer, and then never show apparent interference to any observer keeping track later? There would be world 1 = takes bottom path, and world 2 = takes top path, all before reaching BS2. No interference, even with no decoherence.
I don’t see any reason why the “influence” of the other paths would be taken away by destroying interference patterns. Those patterns are something built up over many iterations, and they still depend on “something” to isolate and contract the wavefunction. No matter how disordered the wave pattern, both “alive” and “dead” should just be there together (that is what the model predicts), regardless of how well we could interpret statistics after the fact. It’s a two-stage problem, and no amount of mixing it together in misleading procedures like the density matrix will make that go away. That’s also what Roger Penrose thinks, FWIW.
Thanks to both of you, but NeilB, I don’t buy collapse interpretations…
Chad Orzel:
It seems like Andrew Thomas is saying that the “other worlds” aren’t like ours, like they are small ripples, while our world is the huge wave where they originated from?
What about quantum darwinism, it seems to state there is really only one real world?
I would be very grateful if Andrew Thomas could explain to me what he means by that analogy…
Hi Chad and Peter,
“In the many-worlds picture, there is no “cancellation” in the sense that you’re thinking about – the wavefunction continues to contain pieces corresponding to each of the possible measurement outcomes.”
Yes, that’s my understanding as well. However, I think the principle behind decoherence as proposed by Zeh was that there IS cancellation, effectively because of the random nature of the environment.
The Many Worlds scenario seems to deny the possibility of destructive interference, which puzzles me. Surely if we have interference components with random phases, then if we average over time the interference components will indeed reduce to zero. The process is described very well from page 3 onwards of this paper by John Boccio, especially page 6:
Some thoughts on the collapse postulate
and also here:
Decoherence
As I have suggested before, Many Worlds was suggested in the 1950s when this process was not considered (I think Zeh first proposed it in the 1970s). It just seems to make sense to me. One macroscopic reality, but a blurring at the microscopic level which we do not ordinarily detect.
I think Chad’s earlier article was very close to what I am suggesting, though Chad would say that all the interference components of a photon are always preserved, while I would say that they can be cancelled, which to my mind is what the mathematics of QM suggests.
It’s not a case of the “mass disappearing” – that the superposition components represent “mass” and miraculously disappear somehow. I think we have to open our minds about that and realise that quantum reality is so peculiar that the wavefunction can only be described mathematically – we should not consider the interference components to in any way represent “hard reality” before the particle is observed. We have to consider “reality before observation” (i.e., the wavefunction) in purely mathematical terms. And that then allows for the possibility of cancellation of those terms.
We have to follow what the mathematics suggests, and give up intuitive notions of “superpositions representing reality which are always preserved”. That’s my take on it.
Just one more point: in Chad’s earlier article he stresses that “The photons always behave like waves”, but it is in fact the case that the photons are NEVER directly observed as waves in physical reality – they are only ever directly observed as points (e.g., when we detect them hitting a screen). If we perform an experiment like the double-slit experiment, Young’s interference experiment, then we see an interference pattern (of points) and we INFER that the photons are waves, but they are never directly observed as waves. So the question of “Where do the wavy superposition parts of the photons go?”, which seems to be at the centre of Many Worlds, is really not a problem, as those wave elements are never directly observed anyway – they are only inferred. Photons only ever appear as points.
Until an object is observed, it must be only considered in its mathematical form (due to the weirdness of QM). Considering the wavy superposition parts of a photon to be real before observation is a huge mistake of Many Worlds.
Hello Andrew, and I was hoping you’d at least notice me …;-/
In any case: as I said and as is well known, interference can’t reduce any components to zero. They will just be redistributed. The MWI concept is based on an ill-defined notion that somehow, components of the WF are effectively separated from each other during measurement. No clear reason why is given AFAICT, it just “has to be” because we don’t *find* the superpositions and some people just can’t stand that the WF model doesn’t make realistic sense. It doesn’t make sense because we can’t logically account for what happens to the WF during an interaction that we actually observe.
As I and others note, MWI has many problems (over and above even explaining what supporters are even trying to say), such as why does the separation conveniently wait until a measurement occurs: if separation started right away (like at the first BS in a MZI), then we wouldn’t even get the interference in the final results.
Decoherence doesn’t help explain the *existence* of statistical results either. It just scrambles phases around, which are still – as Chad admitted earlier – quantum entities with actual amplitudes. So we end up with messier wave functions and more complex and varying amplitude distributions, still waiting for some inexplicable “something” to turn them into specific hits, localized events: the basis for the “statistics” we’re trying to explain.
Note that I proposed an experiment to show differential results if decoherence caused a mixture (as some DI enthusiasts propose) to come out of BS2 instead of a true superposition. I admit to poor original presentation at my blog, but the proposal was misunderstood and misrepresented here (even regarding the factual issues, though the latter were later admitted to be correct by the OP in the criticizing thread). You can verify that a mixture output from BS2 would produce different results when recombined at BS3 than would a superposition from BS2. You read and commented, but it’s been clarified, so you might want to revisit that.
BTW, per your question there: differential reflection at BS1 (like 70/30, not counting some absorption) doesn’t provide distinct which-way info (it just changes the statistics) or keep the system from being a true superposition. In that case it changes the amplitudes for the lower and upper states to sqrt(0.7)|1>, sqrt(0.3)|2>, etc. This still causes interference at BS2, just not the same as with 50/50 (e.g., you can’t null out one of the channels). If decoherence really scrambled the photons into being discrete outputs from the BS2 channels, then the final pattern at BS3 would not be the same as if the output followed superposition rules all the way through until reaching the detectors outside BS3. That’s the essence of the proposal.
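The 70/30 point can be checked with a toy Mach-Zehnder calculation (my own sketch, not the actual proposed experiment: lossless beamsplitters, a phase shifter in one arm, and BS3 ignored). A superposition through an unbalanced BS1 still interferes at BS2 – the output varies with the arm phase but can never be fully nulled – while a genuine mixture after BS1 gives a phase-independent 50/50 output:

```python
import numpy as np

def bs(T):
    """Lossless beamsplitter unitary with transmission T, reflection 1 - T."""
    t, r = np.sqrt(T), np.sqrt(1 - T)
    return np.array([[t, 1j * r], [1j * r, t]])

def phase(phi):
    """Phase shifter phi on the upper arm."""
    return np.diag([1.0, np.exp(1j * phi)])

def superposition_prob(phi, T1=0.7):
    """P(detector 0) if the photon stays a superposition through BS1 and BS2."""
    psi = bs(0.5) @ phase(phi) @ bs(T1) @ np.array([1.0, 0.0])
    return abs(psi[0]) ** 2

def mixture_prob(T1=0.7):
    """P(detector 0) if BS1 really produced a mixture: 70% one path, 30% the other."""
    p = 0.0
    for amp_in, weight in ((np.array([1.0, 0.0]), T1), (np.array([0.0, 1.0]), 1 - T1)):
        out = bs(0.5) @ amp_in
        p += weight * abs(out[0]) ** 2
    return p

phis = np.linspace(0, 2 * np.pi, 5)
print([round(superposition_prob(p), 3) for p in phis])  # varies with phi, never exactly 0 or 1
print(mixture_prob())  # ~0.5, independent of phi
```

With T1 = 0.5 the same function reproduces the full-visibility case, where one output port can be nulled completely; that is the “just not the same as with 50/50” distinction.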
Andrew, also:
Considering the wavy superposition parts of a photon to be real before observation is a huge mistake of Many Worlds.
Sorry, there’s no good alternative. If you don’t consider the wavy parts “real”, then what is causing the interference later? If the WF isn’t real, then what is? Bohm attempted to guide “real particles” with pilot waves, but I and many others think that’s a clunky mess; also, it doesn’t work well for photons anyway (just what kind of localized entity could they be? What about their coherence length, etc.?)
Hi Neil, really sorry, I’m not ignoring you! I just don’t come here very often.
You say, “as is well known, interference can’t reduce any components to zero. They will just be redistributed.” Well, I’d disagree with this. If the components are described mathematically (which they are under QM) then they can absolutely be reduced to zero. Like I said earlier, this is why we have to be so careful when we’re talking about “reality before observation”. You talk about “quantum entities with actual amplitudes” – which I’m not sure has any actual meaning! How can there be an entity before it is ever observed? We get into philosophical territory. Is there really any such thing as reality before observation? If so, then according to the peculiar complex-space nature of the wavefunction, it is unlike any commonsense reality. So we’ve really got to be very careful here.
Then you say “If you don’t consider the wavy parts “real”, then what is causing interference later?” and you say there is no alternative. Well, there clearly are alternatives. As we do not directly observe the wavefunction (which you assume is causing the interference pattern) maybe it does not exist at all – maybe photons just automatically hit the screen in an interference pattern because that just happens to be the way photons behave. That’s my point – we have no direct observation of the wavefunction at all – we only see points on the screen. So, again, we have to be really careful when we are talking about “reality before observation”.
Before those photons hit the screen, it is certainly philosophically very difficult to call them “real” at all – all we have is a form of QM mathematical reality which is very different from our everyday reality.
Andrew Thomas,
I think I get what you are getting at, but I want to clarify:
You are saying that before we observe something it doesn’t really have a “reality” to it?
Back to the old “if a tree falls and no one is around to observe it, does it make a sound? Or even fall at all?” kind of question.
I seriously doubt you believe reality doesn’t exist before we observe it, right?
Do you believe the position of an electron is indefinite until we observe it?
Isn’t that a collapse interpretation?
Well, the question of whether something exists before observation is a very philosophical question.
The point I’m really making is that the Many Worlds interpretation seems to assume that there is a fairly hard reality before observation, because it tries to explain that away by saying each branch of that hard reality forms a separate universe AFTER observation. This seems very dubious when, as I say, the very concept of “reality before observation” is very tied up in philosophy, and there might be no such thing as reality before observation. In which case all we are left with is one single reality after observation – and no place for Many Worlds.
As to your point about the position of the electron before observation – again, that is all tied up in the philosophy of reality – something Many Worlds is happy to ignore.
Andrew, if there’s no reality before observation then what decides what ends up being observed? I’m still assuming you mean no definite “character” BO, rather than “nothing at all” which implies the world is a fantasy or like the Matrix – a simulation (even then, something is arranging what we’ll experience.) Actually I have some sympathy for such a view, in that we don’t have to believe the world is easily and definitely representable outside of our perceptions. Oh, something is going on to produce results but it may not be intelligible, like Copenhagenists say.
However, if there’s no definite reality BO then there’s no need to solve the collapse problem either – so decoherence is pointless as a “solution” to what is then not a problem. All that decoherence would do, is to be a more disorderly pattern in that “whatever it is” before observation and would not explain why there’s a definite result and not a superposition. The DI is worthless either way, whether you’re a realist (in which case it is fallacious and doesn’t answer the question) or if someone (like you) is not a realist – in which case DI is superfluous and misapplied.
Andrew: you are not an anti-realist? I hope you are not a solipsist 😛
Are you arguing for a collapse interpretation?
That “the lack of hard reality” is “reality in superposition” until we observe one outcome?
Isn’t that just copenhagen?
You have written quite extensively on decoherence, so I’m very confused when you say “there may not be a reality before we observe reality (oxymoron?), then we end up with one real world – somehow.”
Please clear this up; I’m guessing I am just misunderstanding you.
Hi Neil, whether or not there is reality before observation is really a job for the philosophers! I don’t really want to get involved in that too deeply. But if there is reality before observation (I suppose there probably is) then it must be of a very different form to the reality we are used to (because the wavefunction is described using complex space – it’s really just maths).
I’m just really making the point that often the reality of the wavefunction is quoted as if it is orthodox physics, whereas the complete opposite is true – it’s quite controversial to assign any form of hard reality to the wavefunction – it is generally thought of as a purely mathematical tool. So this Many Worlds approach of treating the wavefunction as a “real” object – physical reality like a car or a chair (albeit in a superposition state) – is really not at all orthodox.
You say, “if there’s no definite reality BO then there’s no need to solve the collapse problem either – so decoherence is pointless as a ‘solution’ to what is then not a problem.” Yes, you’re absolutely correct – if there’s no reality before observation then there’s no collapse of a non-existent wavefunction, no Many Worlds, and none of those interpretations is valid. The wavefunction is then truly just a purely mathematical tool for describing the probability of finding a particle.
Hi Peter, yes, it’s basically what the Copenhagen interpretation says. Somehow the act of observation “brings reality into existence”. I’m not saying I agree with that, but that’s basically Copenhagen (though I think Copenhagen says you shouldn’t ask questions about “reality before observation” because that is unscientific – the question is invalid; you should only ask questions about things we can observe and measure).
Thanks Andrew,
However, MWI doesn’t postulate any collapse, yet we clearly see a real world.
So how do you suppose this mathematical wavefunction becomes the real world that you and I are in, and consist of, without invoking magical collapse?
Andrew, pardon some delay. I see you have a point about the general “philosophical” issues; however, I proposed an experiment (in my name link) that would actually have one result if decoherence produced a mixture at a certain stage of interaction, and another result if it did not. That’s an empirical distinction. I know there was plenty of argument about it, but in the end all agreed the results would be as I said – people just wouldn’t agree on their “implications.”
Hi Peter, well, yes, you’re right. But there’s nothing that “magical” about what you call “collapse of the wavefunction”. If there’s no “real” wavefunction, then there’s no real object which has to “collapse”. This amazing wavefunction would just not be a real thing at all – it just becomes a mathematical tool which gives us the probability of a certain outcome occurring when we take a measurement. Nothing to “collapse”. Just one reality, and a probability of it taking a certain form. Yeah, OK, if we consider MWI then all realities exist, but you’re still left with the conundrum of why certain universes are more prolific – why they are the most probable outcomes. MWI doesn’t really explain the probability conundrum.
I agree that MWI has problems with probability.
However, now you are advocating an epistemological approach to all of QM.
How would that explain the double slit experiment, delayed choice etc?
Hi Peter, it’s not that I’m necessarily advocating that approach, I’m just making the point that it is controversial to give the wavefunction any form of “reality”. It’s important to make that clear as people talk about the reality of the wavefunction as though it was done and dusted – far from it.
As to the double-slit experiment, people look at the interference pattern obtained in that experiment and say “The photon went through both slits”, whereas of course they are only INFERRING that that is the case – it has never been directly observed. All they have is points on a screen.
If you’re looking for an explanation, well, there are effectively an infinite number of interpretations as to why we find the photon points in that pattern, and each interpretation is valid. MWI is just one of them. You can take your pick. But for ALL of those interpretations, we can treat the wavefunction as a purely mathematical tool for finding the probability distribution of the photon points. We don’t have to say that the wavefunction is a real object. It can just be a useful mathematical guide.
But to say that the wavefunction is a real thing (and not just maths), and we have a superposition of reality, is controversial.
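To make the “wavefunction as a purely mathematical tool” point concrete, here’s a minimal sketch of the two-slit calculation, with arbitrary illustrative values for the wavelength, slit separation, and screen distance. Adding the complex amplitudes and squaring (the Born rule) reproduces the interference fringes; adding the probabilities from each slit separately does not. Either way, nothing here requires the amplitudes to be “real” objects – they are used only to compute the probability distribution of the photon points.

```python
import numpy as np

# Two-slit interference treated as "just maths": the wavefunction is used
# only to compute a probability distribution for where photon points land.
# All parameters below are arbitrary illustrative values.
wavelength = 500e-9   # 500 nm light
d = 10e-6             # slit separation (m)
L = 1.0               # distance to screen (m)
x = np.linspace(-0.1, 0.1, 2001)  # positions across the screen (m)

# Path lengths from each slit to each screen point
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
k = 2 * np.pi / wavelength

# Complex amplitudes from each slit (unit magnitude, phase from path length)
psi1 = np.exp(1j * k * r1)
psi2 = np.exp(1j * k * r2)

# Born rule: add amplitudes, then square -> interference fringes
p_both = np.abs(psi1 + psi2) ** 2

# Add probabilities instead of amplitudes -> no fringes, flat distribution
p_mixed = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(p_both.max(), p_both.min(), p_mixed.max())
```

The fringe maxima reach four times the single-slit intensity and the minima go to zero, while the summed-probabilities curve sits flat at twice the single-slit value – the standard signature that amplitudes, not probabilities, were added.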
I definitely agree with you about that.
However, if we do assume an ontological existence for the wavefunction, does that automatically translate into MWI when we involve decoherence?
This is my main question, and has been all along.
Or is there a way for an ontological wavefunction to produce only one real world due to decoherence?
Can I ask a dumb question? How does MWI describe alpha-particle decay from a collection of radioactive atoms, due to quantum tunnelling? We see a random exponential decay with a decay time tau. How is the probabilistic nature of this explained by MWI?
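For reference, the decay statistics the question describes can be stated without any interpretation at all: each nucleus has a constant decay rate 1/tau, so its survival probability at time t is exp(-t/tau), and a large sample shows the familiar exponential decay curve. A quick sketch of that (tau and the sample size are arbitrary illustrative values) – the interpretational question is what, if anything, underlies these probabilities:

```python
import random
import math

# Random exponential decay: each nucleus decays independently, with
# survival probability exp(-t/tau). tau and n0 are illustrative values.
random.seed(0)
tau = 2.0          # decay time (arbitrary units)
n0 = 100_000       # initial number of radioactive nuclei

# Sample a decay time for each nucleus from the exponential distribution
decay_times = [random.expovariate(1.0 / tau) for _ in range(n0)]

# The fraction still undecayed at time t tracks exp(-t/tau)
for t in (0.0, tau, 2 * tau):
    surviving = sum(1 for td in decay_times if td > t) / n0
    print(f"t = {t:4.1f}: surviving fraction {surviving:.4f}, "
          f"predicted {math.exp(-t / tau):.4f}")
```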