Buried beneath some unseemly but justified squee-ing, Scalzi links to an article about “counterfactual computation,” an experiment in which Paul Kwiat's group at Illinois managed to find the results of a quantum computation without running the computer at all. Really, there's not much to say to that other than “Whoa.”
The article describing the experiment is slated to be published in Nature, so I don't have access to it yet, but I'll try to put together an explanation when I get a copy. The experiment involves a phenomenon known as the “Quantum Zeno Effect,” though, which deserves a post of its own, below the fold.
The “Quantum Zeno Effect” takes its name from Zeno’s paradoxes, which argue that motion ought to be impossible, because to cover any given distance requires you to first move half the distance, then half the remaining distance, then half again, and so on ad infinitum. Each of those distances should require a finite time to move, and there are infinitely many steps, so you should never get anywhere.
The paradox fails, of course (the steps shrink geometrically, so the whole infinite sequence takes only a finite time), but in the quantum world, it can be made surprisingly real, thanks to the nature of quantum measurement. Consider the case of an atom placed into an excited state with a finite lifetime. After some period of time, say one second, there is a 50% probability that the atom has spontaneously decayed to the ground state.
If you do a measurement that determines the state of the atom, you have a 50% chance of finding it in the excited state, and a 50% chance of finding it in the ground state. “Big deal,” you say, but here’s the key: after you make that measurement, the atom is 100% in whichever state you measured. A second measurement a short time later is guaranteed to find the same result as the first.
So, imagine a different experiment– rather than waiting until the results are 50/50, make the measurement a much shorter time after the excitation– a tenth of a second, say. The probability that the atom has already decayed is really, really small– 0.002%– so you're really likely to find it in the excited state, after which the atom is entirely in the excited state again, and the decay clock starts over. Then measure it again, and again, and again, waiting a tenth of a second each time. After ten measurements, you're one second past the original excitation, and the probability of finding the atom in the excited state is almost 100% (99.98%, give or take). If you keep making measurements at short intervals, you can keep the atom in the excited state basically forever.
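If you want to check the compounding yourself, here's a quick sketch in Python; the per-interval decay probability is just a parameter, so you can plug in your favorite number:

def survival(p, n):
    # Each measurement that finds the atom excited resets it completely to
    # the excited state, so the chance of surviving n intervals is (1-p)**n.
    return (1.0 - p) ** n

print(survival(0.00002, 10))    # ten intervals at 0.002% each: ~0.9998
print(survival(0.00002, 1000))  # keep it up for a hundred seconds: ~0.98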
The cool thing is, you can do this sort of thing with passive measurements. You don’t have to bounce a photon off the atom to prove that it’s in the excited state– instead, you can send in a photon that will only be absorbed by a ground-state atom, and see what happens. If it isn’t absorbed (and it most likely won’t be), that’s just as effective at keeping the atom in the excited state as if you’d done something more active to detect the excited-state atom.
If you’re really clever about it (and Paul Kwiat is a really clever guy), you can use this basic idea to make lots of different measurements. On their web site, there’s a very nice explanation (there are also some nifty little movies of the interferometer technique they use) of a way to use the quantum Zeno effect to detect the presence of a photo-sensitive bomb without hitting it with a photon and blowing yourself up. And really, if you can do that, computing without running a computer should come as no surprise…
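Here's a toy version of the Zeno bomb-test logic, in Python (this is just the standard polarization-rotation picture of the scheme, not the details of their actual interferometer): rotate the photon's polarization by a small angle on each of N cycles, and let the bomb act as a measurement that keeps projecting the polarization back to where it started. The chance of making it through all N cycles without an explosion approaches one as N grows:

from math import cos, pi

# Toy model of Zeno-style interaction-free detection: each of N cycles
# rotates the photon polarization by pi/(2N).  A photosensitive bomb in
# one path acts as a measurement every cycle, projecting the polarization
# back with probability cos(pi/(2N))**2 (and absorbing the photon, i.e.
# exploding, otherwise).
for N in (1, 10, 100, 1000):
    p_safe = cos(pi / (2 * N)) ** (2 * N)  # survive all N measurements
    print(N, round(p_safe, 4))
# N=1 gives 0.0 (certain explosion); N=1000 gives ~0.9975.

With no bomb in the way, the small rotations add up to a full 90 degrees instead, so the photon exits with the orthogonal polarization: the two cases are cleanly distinguishable, and you've detected the bomb without ever hitting it with a photon.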
“By placing our photon in a quantum superposition of running and not running the search algorithm…”
That’s so brilliant, I may cry.
Off topic, but I looked at the forums at physorg.com where that article was, and wow, what a mishmash of gobbledegook answers. Many of the quantum physics answers made no sense and, as far as I could tell, bore almost no relationship to the standard model (not that I am an expert, but I try to keep up very slightly; I couldn't calculate the propagation of a photon anymore, but…). I don't know about this site, but if this is what interested people get as answers when they go there, I can see problems.
I’m not sure why an event with a 50/50 chance of happening in half a second would have only a 0.2% chance of happening in a tenth of a second? Shouldn’t it be more like 6.7%?
John Dowling’s summary at Nature is decent if you can get there. But then again, if you can get there, you might as well read the actual article.
I think I chose the wrong field. Physics sounds way more spiffy….
I’m not sure why an event with a 50/50 chance of happening in half a second would have only a 0.2% chance of happening in a tenth of a second? Shouldn’t it be more like 6.7%?
Because I can’t do algebra. I can’t even really handle percentages.
A somewhat more serious answer: I changed my mind about what I wanted to use in terms of concrete numbers, but I had already put stuff into the calculator, and didn’t change all the numbers.
This is what I get for trying to do quick blog posts on serious subjects at 10 pm when my stomach is all fucked up. I’ll re-edit it later so the numbers make sense. First, I need to go run a lab that I’m 90% sure is going to be a disaster…
Too true
Maybe if you just repeatedly observe the class, they will never actually collapse into the debacle state. Of course, they will probably also never get anything done….
My head hurts. Not just from the implications of the result, but from the logic behind it. It's completely sound and intuitive at every step, yet it adds up to something completely counterintuitive.
Wowsers.
I have a BA in physics, but reading how they did this caused me to make this face
http://cristal.inria.fr/~harley/ecdl3/pics/butthead.gif
“Uh….what?”
OK… I understand the effect, but your example leaves me a little confused about how to resolve things.
Consider a classical nuclear physics experiment. I have a radioactive gamma source, which is surrounded by 4pi, 100% efficient gamma detectors. When an atom in the source decays, I detect it. When an atom in the source does NOT decay, I know that too.
Question 1: What happens when the source is a single (excited) atom? Can it ever decay? I am constantly making the measurement that it is in the excited state!
Question 2: Is this different from an ensemble of excited atoms?
The easy way out seems to be (1) no, and (2) yes: the ensemble of atoms is such that the total number of excited atoms is known, but they are all ‘schroedinger’s cats’: we don’t know which is which. Thus the decay curve is given by the ensemble.
But this seems wrong to me… we do individual-particle work all the time in HEP.
Can anyone explain?
—Nathaniel
Nathaniel:
I think your instinct is right: your thought experiment is wrong. 🙂
Beware of experiments that are designed around detectors with 100% efficiency! There is no such thing – and I’m not merely quibbling; this is part of the entire point of quantum mechanics. If such detectors existed, you would be correct, and your atom would never be able to decay. However, in the real universe, experiments have noise: because the detector is not at absolute zero temperature, a fluctuation eventually sets it off; or cosmic rays come along and set off the detector; or the atom drifts out of focus and gets lost somewhere; or the power goes out; or — oops — there was never really an excited atom in there, because the procedure that you used to bring that excited atom into your apparatus is also subject to noise; or the computer glitches and records an event that didn’t actually occur; or (paging Philip K. Dick) your *mind* glitches and causes you to *believe* that you saw the decay happen, but are you sure it really did?
The rest of the universe is out there. It’s coupled to your experiment, and you can’t shut it out forever. That’s my personal interpretation of the phenomenon called “decoherence”, which says (more or less) that Schroedinger’s cat is in a superposition of the alive-state and the dead-state, but not for very long, because sooner or later the box will either meow or start to smell, whereupon the secret will be out and the universe will be committed to one state or the other. “Sooner or later” is generally measured in fractions of a second, because the universe is a very efficient spy, and it has many ways of finding out what’s going on in the box.
The correct question to ask is: if adding detectors to the gamma-decay experiment can’t prevent the decay altogether, can they make it take longer? The answer is no, because the nucleus of your atom doesn’t notice the presence of those detectors several feet away. The decay rate of the atom is controlled by pedantic factors like “what forces act between the particles in the nucleus, and how do they depend on distance?” The detectors contribute nothing to this problem, for they are so far away that their gravitational and electromagnetic pull is pretty darned insignificant.
The Quantum Zeno Effect is different. The excited atom is not merely being “watched” — it is being probed with photons, which come close enough to interact with it. (If they didn’t, they wouldn’t tell us anything or have any effect whatsoever.) Then those photons periodically interact with the rest of the universe — well, with the detector and the experimenter, anyway, but the difference is moot: a few extra zeros at the end of an inconceivably large number. Such an experiment can no longer be approximated as “a scientist watches an atom, waiting for decay”; it’s now “a scientist periodically observes a system, which consists of an atom and a bunch of interacting photons, and looks for decay”. To put this more poetically: the experimenter is not watching the atom from several feet away; rather, he or she is constantly touching the atom with a beam of photons and feeling for the moment when it decays.
This is a profoundly nifty experiment, because as you gradually add more and more significant interactions between the atom and a much larger system (the photons, and through them, the rest of the universe), you begin to see the gradual emergence of classical behavior. Classical mechanics is what quantum mechanics looks like when there are so many interacting particles that all the really weird states (e.g. the states in which the cat is in a superposition of the meowing-state and the sleeping-state) are so statistically unlikely that they are never seen; all we ever see is a superposition of the uncountably-vast number of really boring, essentially indistinguishable states in which the cat is only doing one thing at a time, obeying Newton’s laws like a good little citizen.
According to classical mechanics, excited atoms shouldn’t ever emit a photon: to do so would require a miraculous “quantum leap”. Because the Quantum Zeno experiment is not 100% efficient (see above) we can’t reach such a pseudo-classical situation, in which the excited state would last forever. But we can get a lot closer.
(That’s about as far out on the limb as I dare to crawl without actually reading the Nature paper… 🙂
Nathaniel: Consider a classical nuclear physics experiment. I have a radioactive gamma source, which is surrounded by 4pi, 100% efficient gamma detectors. When an atom in the source decays, I detect it. When an atom in the source does NOT decay, I know that too.
Question 1: What happens when the source is a single (excited) atom? Can it ever decay? I am constantly making the measurement that it is in the excited state!
The measurements that people make for real quantum Zeno experiments are more active than that– they scatter photons off the atom (or not), which is a more direct probe of the state.
My instinct is that the perfect-detector experiment isn't “active” enough, but I'm not sure how to really quantify that. I don't think simple distance from the atom is enough, as people have done experiments demonstrating a measurable increase in an atomic lifetime by putting an atom in a cavity whose length is chosen so that the emission wavelength is not a standing-wave mode of the cavity. You can argue that this is changing the mode spectrum of the vacuum state at the position of the atom, which the perfect-detector case doesn't do, but I'm not sure.
Note that negative or “Renninger” measurements can affect the wave function, since the wave readjusts to reflect not having been detected at a certain point. That means (as per quantum “seeing in the dark”) that putting an obstacle in the path of a split-beam interferometer will change the interference pattern (blocking one path for example allows hits on a detector that otherwise would be in the interference “dark” zone.)
In 2000 I put out a quantum measurement paradox on the refereed discussion group sci.physics.research that (now improved) postulated the following: what if we sent a polarized photon many times through half-wave plates? The degree of accumulated angular momentum would show how circularly polarized the photon wave was (as in fully circular, elliptical, linear, etc., based on relative proportions of RH and LH basis states), and not just give answers to yes/no tests, as usually believed possible. (This is what Y. Aharonov et al. call a “weak measurement.”)
It got enough comment and play that now (last I checked) typing “quantum measurement paradox” into Google brings it up first.
Hi,
I'm not normally a Blogger (I also don't have a cell phone, if you can believe that).
However, given the plethora of commentary on our article (and on articles *about* our article), I thought a few words might be useful. I'll try to keep it short [and fail ☹]
1. There has been quite a bit of confusion over what we've actually done (due in large part to reporters that won't let us read their copy before it goes to print), not to mention *how* we did it. For the record—
a. we experimentally implemented a proposal made several years ago, showing how one could sometimes get information about the answer to a quantum computation without the computer running. Specifically, we set up an optical implementation of Grover's search algorithm (a toy sketch of the bare algorithm appears at the end of point 1), and showed how, ~25% of the time, we could *exclude* one of the answers.
Some further remarks:
– our implementation of the algorithm is not scalable, which means that although we could pretty easily search a database with 5 or 6 elements, one with 100 elements would be unmanageable.
– however, the techniques we discuss could equally well be applied to approaches that *are* scalable.
b. by using the Q. Zeno effect, you can push the success probability to 100%, i.e., you can exclude one of the elements as the answer. However, if the element you are trying to exclude actually *is* the answer, then the computer *does* run.
-The Q. Zeno effect itself is quite interesting. If you want to know more about it, you can check out this tutorial:
http://www.physics.uiuc.edu/People/Faculty/profiles/Kwiat/Interaction-Free-Measurements.htm
There’s also a puppy-friendly explanation here:
http://cosmicvariance.com/2006/02/27/quantum-interrogation/
c. unless you get really lucky, and the actual answer is the last one (i.e., you've excluded all the others without the computer running, and so you know the right answer is the last element, without the computer running), the technique in b doesn't really help too much, since if you happen to check if the answer wasn't a particular database element, and it really was, then the computer does run.
d. By putting the Zeno effect inside of another Zeno effect, you can work it so that even if you are looking to exclude a particular database element, and the answer *is* that element, then the computer doesn't run (but you still get the answer). Therefore, you can now check each of the elements one by one, to find the answer without the computer running. This was the first main theoretical point of the paper. Contrary to some popular press descriptions, we did not implement this experimentally (nor do we intend to, as it's likely to be inconveniently difficult).
e. If you had to use the method of d. to check each database element one-by-one, then you'd lose the entire speedup advantage of a quantum computer. Therefore, we also proposed a scheme whereby the right answer can be determined bit-by-bit (i.e., what's the value of the first bit, what's the value of the second bit, etc.). This is obviously much faster, and recovers the quantum speedup advantage.
f. Finally, we proposed a slightly modified scheme that also seems to have some resilience to errors.
Taken in its present form, the methods are too cumbersome to be much good for quantum error correction. However, it is our hope this article will stimulate some very bright theorists to see if some of the underlying concepts can be used to improve current ideas about quantum error correction.
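If you'd like to see the bare algorithm itself, stripped of all the optics, here is a toy numpy sketch of the textbook Grover search on a four-element database; at this size the oracle and the “inversion about the mean” step are just small matrices, and a single iteration already finds the marked item:

import numpy as np

# Toy Grover search on a 4-element database (2 qubits); marked item = 2.
N, marked = 4, 2

state = np.ones(N) / np.sqrt(N)             # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                 # oracle flips the marked item's sign

s = np.ones(N) / np.sqrt(N)
diffusion = 2 * np.outer(s, s) - np.eye(N)  # inversion about the mean

state = diffusion @ (oracle @ state)        # one iteration suffices for N = 4
print(np.abs(state) ** 2)                   # [0. 0. 1. 0.] -- item 2 found

The counterfactual trick is then to put the photon in a superposition of running and not running this sort of search, which is the part a toy sketch like this can't capture.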
2. There have been a number of questions and criticisms, either about the article, or the articles about the article. Here are my thoughts on that:
a. I guess I should disagree that our article is poorly written (no big surprise there;-), though I agree 10000% that it is not at all easy to read. There are (at least) two reasons for this:
– there is a tight length constraint for Nature, and so many more detailed explanations had to be severely shortened, put in the supplementary information, or left out entirely. Even so, the paper took over a year just to write, so that at least it was accurate, and contained all the essential information. For example, we were not allowed to include any kind of detailed explanation of how Grover's algorithm works. [If you want more info on this, feel free to check out: P. G. Kwiat, J. R. Mitchell, P. D. D. Schwindt, and A. G. White, “Grover's search algorithm: An optical approach,” J. Mod. Opt. 47, 257 (2000), which is available here:
http://www.physics.uiuc.edu/Research/QI/Photonics/publications.html
– the concepts themselves are, in my opinion, not easy to explain. The basic scheme that we experimentally implemented is easy enough. And even the Zeno effect is not so bad (see the tutorial above). But once it becomes chained, the description just gets hard. (I am pointing this out because I would reserve the criticism “poorly written” for something which *could* be easily [and correctly!] explained, but wasn't.)
b. I agree that some of the popular press descriptions left something to be desired, and often used very misleading wording (e.g., “quantum computer answers question before it's asked” – nonsense!). Having said this, I do have rather great empathy for the writers – the phenomenon is not trivial for people in the field to understand; how should the writers (who *aren't* in the field) explain it to readers who also aren't in the field? Overall, the coverage was not too far off the mark.
– To my mind, the most accurate description was in an article in Friday's Chicago Tribune (the author kindly let us review his text for accuracy before it went to print).
Okay, thanks for your attention if you made it this far.
I hope that these words (and the above web link) will clarify some of the issues in the paper.
Best wishes,
Paul Kwiat
PS Please feel free to post this response (in its entirety, though) on any other relevant Blogs. Thanks.
Okay, the thing that is confusing about the probability calculations per unit time in the context of the quantum zeno effect is easy to explain.
Physicists are used to assuming exponential rules for decay times. Forgive me for using smaller numbers to avoid nonlinearities. In such a case, if the probability were 5% in 1 second, it would be a tiny bit larger than 0.5% in 1/10 of a second, etc. Probability of decay is approximately proportional to time for times short compared to the mean lifetime.
But in quantum mechanics, one computes a probability by first calculating a complex amplitude (say, a sum of Feynman diagrams), and then taking the squared magnitude of that complex number. At very short times the amplitude grows linearly with time, so the probability cannot be proportional to the time period; it must be proportional to the square of the time period.
So the resulting Zeno paradox is that (under certain very restrictive conditions) if the probability is 5% in 1 second, then it will be not about 0.5% in 1/10 of a second, but instead will follow the square of the time interval and will be about 0.05% in 1/10 of a second.
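Here's a quick numerical check in Python, fixing the decay probability at 5% after 1 second in both models:

from math import exp, log

# Exponential law: P(t) = 1 - exp(-lam*t), with P(1 s) = 5%.
lam = -log(0.95)
p_exp = 1 - exp(-lam * 0.1)     # ~0.51% chance of decay in 0.1 s

# Quadratic short-time law: P(t) = 0.05 * t**2, also with P(1 s) = 5%.
p_quad = 0.05 * 0.1 ** 2        # 0.05% chance of decay in 0.1 s

# Probability of surviving one second, measured every tenth of a second:
print((1 - p_exp) ** 10)    # 0.95 -- exactly as if no one had looked
print((1 - p_quad) ** 10)   # ~0.995 -- decay suppressed: the Zeno effect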
By the way, there is also a reverse quantum Zeno effect. Quite some time ago, when I was just getting back into doing physics, I decided to write a book on the paradoxes of physics and the quantum Zeno effect was to be a chapter. I have a short write-up or two using that effect to “explain” the MOND effect, and if you search for “brannen” and “zeno” you will probably find it quickly. One of those goes through the squared magnitude calculations.
It's actually worse than that – the Quantum Zeno Effect doesn't work for a purely exponential decay. If you do the math out, you get exactly the same probability as with no measurements. It does work for atomic decays, but only because there's a non-exponential part at short times.
I found a couple of papers on this, only after spending an hour or two messing up the calculation.
So, I dunno if this is a dumb question, but I remain eternally perplexed by these “non-interaction interactions” in quantum physics (like the photon that interacts only with ground-state atoms, which “resets” the excitation-decay clock by failing to interact with the excited atom, thus “measuring” whether it is excited or not):
It seems unusual to me that this effect on the excited atom is done without any apparent energy expenditure by the “test” particle. It intuitively seems that keeping an atom excited rather than returning to the ground state ought to be something that requires energy to do. But here a particle keeps an atom excited by not interacting with it. What am I missing here? Is there in fact no entropy loss or energy cost associated with keeping the excited atom excited? Or does the “test particle” incur some kind of energy cost for its “measurement” after all?
Also, could something like this be used in a GUT to explain away the failure of the proton to decay? Or is proton decay exponential or something 😛
Coin: Sudarshan’s own web site (see it for more details and citations) puts it this way.
http://www.ph.utexas.edu/fogs/sudarshan_zeno.html
In a closed quantum system an “unstable particle” which seems to decay, is a metastable state which evolves in a probability conserving manner; some of the probability goes with the fraction that has “decayed.” A systematic analysis of the decay amplitude in terms of the spectral density of the decaying state determines the evolution.
Classical law of radioactivity has a strict exponential decay, but in quantum theory this is only approximately so; for very short times the decay probability increases as the square of the time interval; so a metastable quantum system which is frequently observed (and reset) would remain essentially unchanged. This is the Quantum Zeno effect discovered by Misra and Sudarshan (J. Math. Phys.).
Chiu and Sudarshan studied the rigorous theory of decay and exhibited the short time Zeno regime, and the long term Khalfin regime in addition to the exponential regime. With Gorini and Chiu, Sudarshan developed a formalism of analytic continuation of the vector space of quantum mechanics; he developed it fully into Quantum Mechanics in Dual Spaces.
Valanju, Chiu, and Sudarshan analyzed the data on multiple meson production in hadronic collisions with complex nuclei in cosmic rays and found the first evidence in published data for the Zeno effect manifested in the successive encounters of incoming cosmic rays with successive nucleons.
A laboratory test using atomic physics was carried out by Itano, Heinzen, and collaborators in NIST, Colorado, who “interrogated” the atom to see if it has decayed by shining light of a different frequency which would be excited only if the atom decayed. A follow up test was carried out by Raizen and collaborators in UT-Austin. All these verified the Quantum Zeno Effect.
The study on the Quantum Zeno Effect in a system with constraints was carried out by Sudarshan, Pascazio, and collaborators. They were able to show that the continuous action of the constraints is to restrict the motion of the domain of phase space allowed.