Bunches and Antibunches of Atoms: Hanbury Brown and Twiss Effects in Ultracold Atoms

Two papers in one post this time out. One of these was brought to my attention by Joerg Heber; the other I was reminded of when checking some information for last week’s mathematical post on photons. They fit extremely well together, though, and both relate to the photon correlation stuff I was talking about last week.

[Figure: correlation functions from the Nature paper, showing Hanbury Brown and Twiss bunching in bosonic helium-4 (top) and anti-bunching in fermionic helium-3 (bottom).]

OK, what’s the deal with these? These are two papers: a recent Optics Express paper from a week or so ago, and a Nature article from a few years back. The Nature paper includes the graph you see at right, which is a really nice dataset demonstrating the Hanbury Brown and Twiss bunching effect in bosonic helium-4 (top), and the analogous anti-bunching effect in fermionic helium-3 (bottom).

Very pretty. Who are Hanbury, Brown, and Twiss, and why should I care? There are only two of them– Hanbury Brown has a double, unhyphenated last name, so it’s really the (Hanbury Brown) and Twiss effect. Hanbury Brown and Twiss were a couple of British astronomers working on a way to make an interferometer to measure the size of nearby stars. They were looking at intensity correlations between the signals from two telescopes pointed at the same star, and using that information to measure its angular size.

As a sort of demonstration, they looked at the signal from a bench-top light source, and showed that the light signal showed bunching– that is, they were more likely to detect a second bit of light very shortly after the first bit than they were to get more light at longer times. While this is easily explained and in fact inevitable in a wave model of light (as I explained last week), it created a bit of a stir among people in the then-new field of quantum optics (Hanbury Brown and Twiss did their experiment in the 1950’s), as this didn’t seem like something you should get from a photon model of light. It took a little while to sort out, but the ultimate explanation is really very simple.
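(In equations, the wave-model result is the standard Siegert relation for chaotic light, quoted here rather than derived: writing g(1) for the normalized field correlation function and g(2) for the normalized intensity correlation function,

```latex
g^{(2)}(\tau)
  = \frac{\langle I(t)\, I(t+\tau) \rangle}{\langle I(t) \rangle^{2}}
  = 1 + \left| g^{(1)}(\tau) \right|^{2}
```

so the coincidence rate is doubled, g(2)(0) = 2, at zero delay, and relaxes to the uncorrelated value of 1 for delays long compared to the coherence time.)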

And that is? Photons are bosons.

Photons, like protons and electrons and nearly all other particles, have an intrinsic angular momentum called “spin.” Unlike electrons, though, photons carry one full unit of angular momentum, not just half a unit. This has dramatic consequences for the symmetry of the wavefunctions used to describe the system– where fermionic particles like electrons are forbidden to occupy the same quantum state at the same time, bosonic particles like photons will happily cluster together in a single quantum state.

One of the consequences of this is that bosons tend to bunch up in a beam, so that the probability of detecting a second one right after the first is higher than the probability of the second one coming along much later. That’s exactly what Hanbury Brown and Twiss saw, and what everybody else who has looked at correlation functions for photons has seen since.
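You can see where both the bunching and the anti-bunching come from with a textbook two-particle sketch (this is the standard argument, not the specific calculation in either paper). If particle a can reach detector 1 while particle b reaches detector 2, or vice versa, and the particles are indistinguishable, the amplitudes for the two paths add for bosons and subtract for fermions:

```latex
P_{12} \;\propto\; \left| A_{a \to 1} A_{b \to 2} \,\pm\, A_{a \to 2} A_{b \to 1} \right|^{2}
```

When the two detection events coincide, the two amplitudes become equal, so the plus sign (bosons) gives twice the rate you’d expect for independent particles, and the minus sign (fermions) gives exactly zero.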

So, if they figured this out in the 50’s, why are people writing about it now? People sorted this out for photons back in the 1960’s, but it hasn’t been possible to do it with atoms until much more recently.

Wait, atoms and photons aren’t remotely similar. Atoms are made up of protons and neutrons and electrons, which are all fermions. True, but if you stick an even number of fermions together, each with half a unit of angular momentum, you get a composite boson, with an integer number of units of angular momentum. That composite object will show the Hanbury Brown and Twiss effect, as long as the energies involved are too small to probe its internal structure. If you stick an odd number of protons, neutrons, and electrons together, the resulting atom is a composite fermion, and will show the effects of Pauli exclusion– that is, you will never see two atoms in the same state at the same time, which leads to anti-bunching.
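If you want the counting rule spelled out, here’s a minimal sketch (the classification logic is the standard even/odd rule; the function name is mine):

```python
def statistics(protons, neutrons, electrons):
    """Classify a composite particle by counting its spin-1/2
    constituents: an even count gives integer total spin (a boson),
    an odd count gives half-integer total spin (a fermion)."""
    return "boson" if (protons + neutrons + electrons) % 2 == 0 else "fermion"

print("He-4:", statistics(2, 2, 2))  # 6 fermions -> boson   -> bunching
print("He-3:", statistics(2, 1, 2))  # 5 fermions -> fermion -> anti-bunching
```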

So that’s what these papers are about? Doing the same thing with atoms that people did with light a long time ago? Yep. The Nature paper is looking at the correlation behavior of two different isotopes of helium, the bosonic isotope helium-4 and the fermionic isotope helium-3. They took laser-cooled clouds of one isotope or the other, and dropped them onto a position-sensitive detector, then looked at how many pairs of atoms arrived together.

How did they do that? Helium, like all the other noble gases, can’t be laser-cooled in its ground state with current laser technology, but it has a metastable state that lives essentially forever by atomic standards, and can serve as the ground state for a laser cooling experiment. Each one of these metastable atoms carries a huge amount of internal energy– nearly 20 eV– which is enough to knock an electron loose from just about anything it hits. If you drop metastable helium atoms onto a microchannel plate detector, many of them will knock electrons loose. Those electrons are then amplified into measurable current pulses, and the pulses can be counted electronically.

You can also direct some of the electrons from the MCP onto a phosphor screen, where they’ll show up as little flashes of light. This gives you the full spatial distribution of the atoms as they hit the plate– the horizontal dimensions from the position of the spots on the phosphor, and the vertical from the distribution of arrival times.
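The vertical part of that conversion is just free-fall kinematics: atoms released from rest a height H above the plate all arrive at very nearly the same speed, v = √(2gH), so a difference in arrival times maps directly onto a vertical separation. A quick illustration (the drop height here is a made-up round number, not the actual apparatus value):

```python
import math

g = 9.81    # gravitational acceleration, m/s^2
H = 0.5     # drop height in meters (illustrative value only)

v = math.sqrt(2 * g * H)   # speed at the detector, ~3.1 m/s for H = 0.5 m
dt = 1e-3                  # a 1 ms difference in arrival times...
dz = v * dt                # ...maps to ~3.1 mm of vertical separation

print(f"impact speed: {v:.2f} m/s; 1 ms of delay = {dz * 1e3:.2f} mm")
```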

This lets them measure correlations? Yes. They looked at how often two atoms arrived within some distance of one another, and used that measurement to determine the correlation functions shown in the graphs above. When they did the experiment with helium-4, they found an increased probability of detecting two atoms in rapid succession, and when they did it with helium-3, they found a decreased probability of detecting two atoms in rapid succession, just as you would expect.
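The actual analysis in the paper is a full three-dimensional treatment, but the spirit of it fits in a few lines: histogram the time differences between pairs of detections, then normalize by the same histogram for deliberately uncorrelated arrivals. A sketch, with invented function and variable names:

```python
import numpy as np

def g2_from_arrival_times(times, bin_width, max_dt, n_shuffles=20):
    """Estimate g2(dt) from an array of detection times: histogram the
    time differences between all pairs closer than max_dt, then divide
    by the same histogram for artificially uncorrelated arrivals."""
    times = np.sort(np.asarray(times))
    bins = np.arange(0.0, max_dt, bin_width)

    def pair_histogram(t):
        t = np.sort(t)
        counts = np.zeros(len(bins) - 1)
        for i, ti in enumerate(t):
            # look forward only, so each pair is counted exactly once
            j = np.searchsorted(t, ti + max_dt, side="right")
            counts += np.histogram(t[i + 1:j] - ti, bins=bins)[0]
        return counts

    real = pair_histogram(times)
    # reference: the same number of atoms with random, uncorrelated
    # arrival times spread over the same observation window
    rng = np.random.default_rng(0)
    ref = sum(
        pair_histogram(rng.uniform(times[0], times[-1], size=len(times)))
        for _ in range(n_shuffles)
    ) / n_shuffles
    return bins[:-1] + bin_width / 2, real / ref
```

Bunching shows up as values above 1 at small dt, anti-bunching as a dip below 1, and a coherent source gives a flat line at 1.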

(This was not, by the way, the first such measurement– a Japanese group demonstrated the Hanbury Brown and Twiss effect in 1996 with metastable neon. The Nature paper is the first to show the anti-bunching effect, though, and has a cleaner signal with the bosons, thanks to (among other things) the fact that they used a much colder sample.)

The correlation function only changes by, what, 5%, though. That’s not that big. Yeah, but this is a really difficult experiment to do– everything has to work just right, and you need to do a whole lot of data taking to pick out the signal. In an ideal world, the bunched signal would go all the way up to 2, and the anti-bunched signal would go down to 0, but we don’t live in an ideal world, so any clear effect at all is pretty darn cool.

OK, so, if people did these experiments in 1996 and 2007, why are they still writing it up in 2010? The second paper, from Optics Express, looks at a different twist on the same experiment. They work with just the bosonic isotope, and compare what happens with a normal cloud of very cold atoms (top graph in the figure below) to what happens when they start with a BEC (bottom graph):

[Figure: measured correlation functions for atoms dropped from a thermal cloud (top) and from a BEC (bottom), from the Optics Express paper.]

The BEC graph is just a flat line. Bor-ing. Actually, that’s the exciting result. You can see the same thing with photons, if you compare light from a lamp to light from a laser. The laser gives the same sort of flat correlation function, because the laser is a source of coherent light, which means that the intensity fluctuations are as small as they can possibly be. A laser behaves like a classical light wave to a very good approximation, and one signature of that behavior is a flat correlation function.

(In the photon picture, a laser is represented by a “coherent state” which has the nifty property that subtracting one photon from it doesn’t change the state. Such a coherent state is the best approximation of a classical EM wave that you can make.)
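(In symbols: the coherent state |α⟩ is an eigenstate of the photon annihilation operator â, and that one property immediately pins the correlation function to 1:

```latex
\hat{a} \,|\alpha\rangle = \alpha \,|\alpha\rangle
\quad \Longrightarrow \quad
g^{(2)}(0)
  = \frac{\langle \hat{a}^{\dagger} \hat{a}^{\dagger} \hat{a} \hat{a} \rangle}
         {\langle \hat{a}^{\dagger} \hat{a} \rangle^{2}}
  = \frac{|\alpha|^{4}}{|\alpha|^{4}} = 1
```

No bunching, no anti-bunching, just a flat line.)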

So, a BEC is a laser? It’s analogous to a laser, for atoms, but it’s an analogy that shouldn’t be pushed too far– the real reason lasers are technologically useful is that they are really, really bright. A typical “atom laser” contains only a few million atoms, which is about a nanosecond’s worth of light from a typical laser pointer, so the technological potential isn’t anywhere near as great.

BECs do share the coherence properties of a laser, though, including the flat correlation function. And that’s what this new experiment shows: they looked at atoms dropped from a cloud just above the BEC transition temperature and saw the usual Hanbury Brown and Twiss bunching, then compared that to atoms dropped from a cloud just below the BEC transition, and saw a flat correlation function.
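If you want to play with this, the thermal-versus-coherent contrast is easy to mimic numerically: a coherent source looks like a constant-rate Poisson process, while a chaotic source looks like a Poisson process whose rate fluctuates, with exponentially distributed intensity, on the scale of the coherence time. This is a toy model, not the paper’s analysis; it reuses the g2_from_arrival_times sketch from earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
T, rate, tau_c = 200.0, 50.0, 0.05   # window (s), mean rate (1/s), coherence time (s)

# Coherent-like source: constant-rate Poisson arrivals -> flat g2
coherent_times = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

# Chaotic-like source: redraw the intensity from an exponential
# distribution once per coherence time, then generate Poisson
# arrivals at that fluctuating local rate -> bunching at small dt
chaotic_times = []
for k in range(int(T / tau_c)):
    local_rate = rate * rng.exponential(1.0)
    m = rng.poisson(local_rate * tau_c)
    chaotic_times.extend(rng.uniform(k * tau_c, (k + 1) * tau_c, m))
chaotic_times = np.sort(np.array(chaotic_times))

# dt, g2 = g2_from_arrival_times(chaotic_times, bin_width=0.01, max_dt=0.5)
# -> rises toward ~2 at small dt; coherent_times gives a line near 1
```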

It’s a cute demo, I guess, but is it good for anything? Well, it tells you that atoms coupled out of their BEC share the coherence properties of the whole thing, which is worth knowing. The technique could potentially be used to test the behavior of various “atom laser” schemes.

It’s not as sexy as the other paper, though. Which explains why it’s in Optics Express rather than Nature.

So, anyway, that’s what’s up with correlating atoms. Any more questions?

Jeltes, T., McNamara, J., Hogervorst, W., Vassen, W., Krachmalnicoff, V., Schellekens, M., Perrin, A., Chang, H., Boiron, D., Aspect, A., & Westbrook, C. (2007). Comparison of the Hanbury Brown-Twiss effect for bosons and fermions. Nature, 445(7126), 402-405. DOI: 10.1038/nature05513

Manning, A., Hodgman, S., Dall, R., Johnsson, M., & Truscott, A. (2010). The Hanbury Brown-Twiss effect in a pulsed atom laser. Optics Express, 18(18). DOI: 10.1364/OE.18.018712

4 comments

  1. I’m a bit confused about the x-axes of these plots: On the first plot, we have correlation versus separation (between what?), but the later plots show correlation versus time.

    Even if we disregard this discrepancy, I’m not sure why the non-BEC atoms become correlated only after a certain amount of time (or separation). Are the atoms condensing into a BEC over the course of the measurements?

  2. I’m a bit confused about the x-axes of these plots: On the first plot, we have correlation versus separation (between what?), but the later plots show correlation versus time.

    They’re both ways of measuring the separation between atoms. The time measurement is the amount of time that passes between the first and second atoms detected in some region of the detector; the distance measurement is the distance between the first and second atoms when they strike the detector. For the graph shown, they’re looking at the vertical distance, so this is really the speed at which the atoms strike the detector (which is a known quantity, since they’re dropped from rest a known distance above the detector) multiplied by the difference in arrival times.

    The atoms that are in a BEC start out in a BEC before they’re dropped. And the correlation between non-BEC atoms actually disappears after some time– the higher the plotted quantity, the higher the probability of finding a second atom at that distance (in space or time) from the first atom detected, so the bunching peak is at small separations and dies away at large ones.

  3. Very interesting article. I have one question concerning this part: “In an ideal world, the bunched signal would go all the way up to 2, and the anti-bunched signal would go down to 0, but we don’t live in an ideal world, so any clear effect at all is pretty darn cool.”

    Do composite objects really have clear fermionic or bosonic character? I would think it’s just an approximation in this case.

    For example, helium-4 may mostly behave like a boson, but it’s still composed of fermions, which cannot be in the exact same location and quantum state, so two helium-4 atoms cannot occupy the exact same location and quantum state either.

    So this would mean that as the distance approaches zero the correlation between composite bosons should eventually start going down to zero. Of course this would only happen once the separation is comparable to atomic scale and not for millimeters as in the plot.

  4. Do composite objects really have clear fermionic or bosonic character? I would think it’s just an approximation in this case.

    It’s an approximation, but a very good one. In order for one helium atom to “see” that the neighboring atoms were really composite particles, you would want them to be separated by something close to the size of the atoms themselves, which is something like a tenth of a nanometer. The atoms in these experiments are at densities of a few times 10^18 per cubic meter, which corresponds to a separation of hundreds of nm. They’re nowhere near tightly packed enough for this to be a problem.
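(A quick numeric check of that estimate, for the record: at density n, the typical interparticle spacing is n^(-1/3).

```python
n = 3e18                      # density in atoms per cubic meter
spacing = n ** (-1.0 / 3.0)   # typical interparticle spacing in meters
print(f"typical separation: {spacing * 1e9:.0f} nm")  # ~690 nm
```

Roughly 700 nm, or several thousand times the size of the atoms themselves, as the comment says.)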
