When we left our story, we were stuck in the unfortunate position of living somewhere in a multiverse without any a priori way to figure out where we live. What might we do?
One thing we can do is let the dreaded anthropic principle rear its head. At its most basic, the anthropic principle is the statement that we exist. This is data, and we can draw conclusions from it. The most famous examples are Hoyle’s prediction of a particular nuclear resonance, based on the need for enough carbon in the universe for us to exist, and Weinberg’s bound on the cosmological constant, based on the existence of galaxies. Using our existence as data is completely uncontroversial. What is controversial is the underlying philosophy. Put simply, the existence of a multiverse may represent the end of one of the dreams of physics: that we can compute every physical quantity from first principles (as the solution to some mathematical equation, for example). If there is a multiverse and a distribution of parameters, then some things are just random and have no fundamental explanation.
In this context, one can take the anthropic principle to be an inversion of the usual data/inference model of science. Instead of treating our existence as data from which we draw inferences, we consider the need for observers as a theoretical basis for explaining data we have already collected.
The primary tendency is to use this to explain the many “coincidences” that occur in nature. Take the classic example of the orbit of the Earth. Vary it too much, and you find that life (as we know it) could not exist. You could argue that it is an incredible coincidence that we happen to live in a zone so remarkably amenable to life, but, of course, there are plenty of planets in the universe, and life only develops on those that can sustain it. There’s no point in trying to predict the radius of Earth’s orbit from first principles; we live here because we can live here.
The danger is that, if we can’t actually observe the other universes, we may give up on understanding coincidences that actually have fundamental explanations. For example, as I understand it, the existence of Hoyle’s resonance follows from some details of nuclear physics. Would those details have been found if everyone had instead ascribed the resonance to some particular facet of the multiverse? Invoking the anthropic principle in this manner represents the end of science. It could very well be correct, but how does one know when one should close a door?
Beyond this problematic sort of “retrodiction”, many people would like the anthropic principle to lead towards some semblance of predictivity. The key to this is the principle of mediocrity, sometimes called the Copernican principle. This goes beyond the anthropic principle (which is essentially tautological) and deep into the philosophical swamp. What it states is that we are not special. Given some broad class of, well, things of which we are a member, we should be an “average” member of that class. Needless to say, it’s quite hard to turn this into a precise statement.
Still, we do similar things every day. Let’s say that there is a drug that has side effects in 5% of users. What is the probability that, if you were to take it, you would experience side effects? To answer this question, one needs to delve into what one means by probability. In the philosophy known as “frequentism”, one does (or imagines doing) an experiment many times, and the probability of an outcome is the fraction of trials in which it occurs. Applied to this situation, that definition says the probability is either 100% or 0%, and we don’t know which. There’s only one of you, after all, and you’ll either have side effects or you won’t.
A lot of people find this situation unsatisfactory. It seems reasonable that one could assign a probability of 5% to your experiencing side effects. To get to that conclusion, you can drop the frequentist philosophy of probability and replace it with the idea that probability expresses the degree of one’s belief in a proposition. Or, to make it more grounded, probability represents the odds one would take to bet on the proposition. This is called Bayesian probability. You begin with a “prior probability”, in this case 5%, obtained by assuming you are a generic person, and as you learn more information, a rule called Bayes’s theorem tells you how to adjust the odds.
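To see how the updating works, here is a toy version of the drug example (the 80% and 10% test accuracies below are made-up numbers, purely for illustration). Suppose a blood test comes back positive in 80% of people who will have side effects, but also in 10% of people who won’t. If your test comes back positive, Bayes’s theorem updates your 5% prior to

$$
P(\text{effects}\mid +) = \frac{P(+\mid \text{effects})\,P(\text{effects})}{P(+\mid \text{effects})\,P(\text{effects}) + P(+\mid \text{no effects})\,P(\text{no effects})} = \frac{0.8 \times 0.05}{0.8 \times 0.05 + 0.1 \times 0.95} \approx 0.30.
$$

A single piece of evidence moves you from 5% to about 30%; you are no longer a generic person.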
So, at last we’ve arrived at a situation where we might be able to assign some probabilities. I find this extremely odd, myself. While we might not be able to actually predict anything, we find ourselves able to take bets. Has anyone called Vegas?
I’m not much of a gambler myself, but before you dismiss this entire situation as completely crazy and unscientific, note that quantum mechanics rears its ugly head again. As I’m sure most of the readers of this blog are aware, quantum mechanics does not give exact predictions; instead, it gives a probability distribution of outcomes. Most of the time, this isn’t a problem, as experiments are repeatable and we can examine the distribution of the results. Still, there are some philosophical issues. We’re not going to last forever, so we can’t do a given experiment an infinite number of times, and in any finite number of experiments, there is no guarantee that the resulting distribution will look like the one predicted by theory. So, can we ever say that anything is falsified?
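To make that finite-sample worry concrete, here is a minimal simulation sketch (the 50/50 “theoretical” prediction and the trial counts are arbitrary choices for illustration, not anything from a real experiment):

```python
import random

# Toy "quantum" experiment: theory predicts outcome A with probability 0.5.
P_THEORY = 0.5

random.seed(1)  # fixed seed so the run is reproducible

for n_trials in (10, 100, 1000, 10000):
    hits = sum(random.random() < P_THEORY for _ in range(n_trials))
    print(f"{n_trials:6d} trials: observed frequency {hits / n_trials:.3f} "
          f"vs. predicted {P_THEORY}")

# The observed frequency fluctuates around 0.5, typically by ~1/sqrt(N),
# but no finite run is guaranteed to reproduce the predicted distribution.
```

Each run hovers near the prediction without ever being obliged to match it, which is exactly the worry about falsification above.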
I’ll leave that one for the philosophers of science because we’ve got an even bigger problem. In the previous situation, we could at least repeat our experiment. The theory of inflation, on the other hand, is a theory of the origin of the universe, and so far we’ve only been able to do that experiment once. What inflation tells us is that the fluctuations in the cosmic microwave background radiation are, in fact, quantum mechanical in nature. The rapid expansion of inflation takes quantum fluctuations and blows them up so that they are spread across the sky. One can predict the spectrum of those fluctuations from the theory, compare it to what we observe, and find remarkable agreement. But the fluctuations are truly quantum mechanical in nature, and thus we can only really speak of various probabilities. This uncertainty is called “cosmic variance” (perhaps you’ve heard of it?), and it is more than just an academic question. While the agreement of theory with experiment is remarkably good in general, there are two data points that don’t fit the model very well. Are they the result of cosmic variance? Are they indicative of some aspect of the physics we don’t understand? How much would you bet on it?
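To put a number on cosmic variance (this is the standard textbook estimate, not something derived in this post): each multipole $\ell$ of the CMB angular power spectrum has only $2\ell + 1$ independent modes on our single sky, so even a noise-free, full-sky measurement of $C_\ell$ carries an irreducible fractional uncertainty

$$
\frac{\Delta C_\ell}{C_\ell} = \sqrt{\frac{2}{2\ell + 1}},
$$

which is about 63% for the quadrupole ($\ell = 2$) and shrinks only slowly as $\ell$ grows. At low multipoles, there is simply no way to average the quantum randomness away.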
At this point, my head starts to hurt, but even if we accept this Bayesian principle of mediocrity, we’re still not out of the woods. There are ugly implementation details and confounding paradoxes that come out of this. These will be the subject of the final post in this series.
The posts in this series are:
The Multiverse: An Apology
The Lay of the Landscape
Twisty Little Universes, All Alike
Alone in the Multiverse