The next lab visit experiments I want to talk about are really the epitome of what I called the “NIST Paradigm” in an earlier post. These are experiments on “four-wave mixing” done by Colin McCormick (who I TA’d in freshman physics, back in the day), a post-doc in Paul Lett’s lab at NIST. As Paul said when I visited, if they had had a better idea of the field they were dabbling in, they would’ve thought that what they were trying was impossible; thanks to their relative ignorance, though, they just plowed ahead, and accomplished something pretty impressive.
The basic scheme is laid out in this arXiv preprint, which appears to be the same as this Optics Letter (I don’t have electronic access to Optics Letters, so I’m working off the arXiv text), and looks like this:
Clears everything right up, doesn’t it? Well, OK, maybe I can explain a little more…
The basic experimental procedure is that they send two lasers into a glass cell containing rubidium vapor. One of the lasers, called the “pump,” is fairly intense (a few hundred milliwatts), while the other, called the “probe,” is quite weak (less than a milliwatt). The pump and probe have opposite linear polarizations, so they can be separated with a polarizing beamsplitter on the far side of the cell. This allows them to monitor the intensity of the probe beam without having their detector swamped by light from the pump.
Both lasers are tuned close to a transition in rubidium at 795 nm (one of the “D lines”), but not exactly on the transition. They fix the frequency of the pump at some value, and then scan the frequency of the probe, monitoring the transmitted intensity. That produces the graph seen at the bottom of the figure above, with four broad dips, and a couple of very narrow spikes.
There are three energy levels that matter in this system, and they’re typically drawn as in the upper right, in a configuration that looks a little like the Greek letter Λ (such three-level systems are often referred to as “lambda systems” for this reason). Two of the levels are hyperfine states in the ground state of rubidium (labeled “F=2” and “F=3”); the third is an excited state (which has hyperfine structure as well, but the splittings aren’t large enough to matter for the experiment, so it’s treated as a single state).
The four broad dips in the transmitted probe intensity correspond to absorption from each of the two ground states in the two different isotopes of rubidium. From left to right, these correspond to absorption by atoms starting in the F=2 state of rubidium-87, then the F=3 state of rubidium-85, then the F=2 state of rubidium-85, then the F=1 state of rubidium-87 (rubidium-87 has F=1 and F=2 ground states, where rubidium-85 has F=2 and F=3). This is bog-standard laser spectroscopy, the sort of thing we do all the time with undergraduates.
The interesting thing is the upward-going spikes. These are places where there is actually more light coming out than was sent in– the probe is experiencing gain. This happens in a very narrow range of frequencies, when the probe and pump are tuned in just the right relation to one another.
The idea is shown by the arrows in the upper right part of the figure above. At the positions of the spikes, neither laser is resonant with the usual transition, so they can’t excite atoms directly to the excited state, but they can do what’s called a “Raman transition,” using two photons to switch from one hyperfine state to the other. If you start with an atom in the F=2 state (at lower energy), it can “absorb” one photon from the pump laser (the longer of the solid arrows), and emit one photon into the probe laser through stimulated emission. “Absorb” is in scare quotes because it’s not really like normal absorption– there’s no point at which the atom is found in the excited state– in a sense, it’s really a single transition from F=2 to F=3, that just happens to use two laser photons instead of one microwave photon.
This already puts more light into the probe beam (essentially shifting one pump photon into the probe beam), but there’s another thing that can happen as well. An atom that’s in F=3 can make a transition from F=3 to F=2 by absorbing one pump photon, and emitting a photon at a new frequency (higher than the pump frequency), indicated by the dotted arrow in the diagram. This is the fourth “wave” that gives the “four-wave mixing” process its name.
In this picture, you can think of the four-wave mixing process as making a loop. You start with an atom in F=2, and three laser photons, two from the pump and one from the probe. The atom “absorbs” one photon from the pump, is stimulated to emit it into the probe, then “absorbs” another photon from the pump, and emits a photon of this new frequency, called the “conjugate.” Notice, though, that there is no light supplied at the conjugate frequency– it’s just created as the atom goes around the loop. This process doesn’t happen very often, of course, which is why you need a fair amount of intensity in the pump to make it show up– the process is highly non-linear. But when it works, you shine two lasers into a cell of rubidium vapor, and you get three beams coming out– pretty cool, no?
Energy needs to be conserved in this process, and if you squint at the arrows a bit, you can convince yourself that it works out– the conjugate photon is higher in energy than the pump by the same amount that the pump is higher than the probe, so the probe and the conjugate together have the same energy as the two pump photons that were lost. You also need to have momentum conserved in this process, which means that the conjugate photons are emitted in a tight beam, at a particular direction that is slightly different than either the pump or the probe (the angle between the conjugate and the pump is the same as that between the pump and the probe, but the conjugate is on the other side). This is great, because it means that the signature is unmistakable– you shine in two lasers, and when you hit the four-wave-mixing condition, a third beam appears as if by magic.
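If you’d rather check the energy bookkeeping with numbers than by squinting at arrows, here’s a quick sanity check. The 3.036 GHz figure is the real ground-state hyperfine splitting of rubidium-85, but the actual laser detunings in the experiment are different, so treat this as a sketch of how the frequencies balance, not a record of their settings:

```python
# Frequency bookkeeping for the four-wave-mixing loop. The 3.036 GHz
# number is the real rubidium-85 ground hyperfine splitting; the exact
# detunings in the experiment differ, so this is just a sketch.
c = 299_792_458.0               # speed of light, m/s
nu_pump = c / 795e-9            # pump near the Rb D1 line at 795 nm, in Hz
splitting = 3.036e9             # 85Rb ground hyperfine splitting, in Hz
nu_probe = nu_pump - splitting  # probe sits one splitting below the pump
nu_conj = nu_pump + splitting   # conjugate appears one splitting above it
# Two pump photons in, one probe photon plus one conjugate photon out:
assert abs((nu_probe + nu_conj) - 2 * nu_pump) < 1.0  # energy balances
```

The momentum version of the same bookkeeping is what pins down the direction of the conjugate beam.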
That’s all great, but the title is “Strong relative intensity squeezing by 4-wave mixing in Rb vapor.” So, what does “squeezing” have to do with this?
Well, “squeezing” in the context of quantum optics generally refers to the reduction of uncertainty in one of a pair of quantities related by an uncertainty principle. There’s an uncertainty relationship between the uncertainty in the number of photons in a laser beam and the phase of the wave associated with that laser, for example– the more you know about the number, the less you know about the phase.
This four-wave-mixing process leads to a reduction in the uncertainty of the relative intensity of the probe and conjugate beams. You can see this from thinking about the “loop” picture of the mixing process– you absorb a pump photon, emit a probe photon, absorb another pump photon, and emit a conjugate photon. For every conjugate photon produced, there’s also a probe photon produced, so the intensities of the two beams should be correlated– when you get a few more probe photons, you also get a few more conjugate photons, so the difference between the two intensities should remain constant.
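You can see the pairing argument at work in a toy Monte Carlo. This is my own illustration with made-up photon numbers, not a simulation of the real experiment, but it shows why perfect pairing quiets the difference:

```python
import numpy as np

# Toy model: every four-wave-mixing event adds one photon to the probe
# AND one to the conjugate, so the pair photons cancel in the difference.
rng = np.random.default_rng(0)
trials = 100_000
pairs = rng.poisson(1_000, trials)     # mixing events per measurement window
conj = pairs                           # conjugate beam is pure pair photons
probe_in = rng.poisson(5_000, trials)  # shot-noise-limited input probe beam
probe = probe_in + pairs               # amplified probe = input + pair photons
diff = probe - conj                    # pair photons cancel exactly
sql = probe.mean() + conj.mean()       # shot-noise variance of two independent beams
ratio = diff.var() / sql               # below 1 means relative-intensity squeezing
print(ratio)
```

The key is that `diff` equals `probe_in` exactly: the difference carries only the input beam’s noise, while the standard quantum limit is set by the larger total intensity, so the ratio comes out below one.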
The standard way of talking about this stuff in the quantum optics business is in terms of noise spectra. They take the conjugate and probe beams (after blocking the pump with a polarizing beamsplitter), and shine them onto two detectors, and then take the difference between the two intensities electronically. They feed the resulting signal into a spectrum analyzer, which is a device that measures how much of the signal is oscillating at various different frequencies. The output of the analyzer is a “noise spectrum,” showing the intensity of fluctuations as a function of frequency. These spectra are shown in Figure 2– curve “A” is the intrinsic noise in their detection system, and curve “B” is the noise in the difference between the conjugate and probe intensities. You can see that for frequencies below about 3 MHz, curve “B” is below the “Standard Quantum Limit” (the horizontal line at the level of curve “C”), which is the level of noise you would expect from normal quantum fluctuations in the beam– an extra photon here, a photon missing there. This demonstrates the squeezing effect of the four-wave mixing process.
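For anyone who hasn’t met a noise spectrum before, here’s roughly what the analyzer is doing, sketched with simulated white noise standing in for the difference photocurrent (none of this is the group’s actual analysis, and the sample rate is made up):

```python
import numpy as np

# What a spectrum analyzer does, in sketch form: chop the signal into
# records, Fourier transform each one, and average the power in each
# frequency bin. Simulated white noise stands in for shot noise here.
rng = np.random.default_rng(1)
fs = 20e6                            # sample rate in Hz (made up)
n = 1 << 12                          # samples per record
records = rng.normal(size=(64, n))   # 64 records of the difference signal
power = np.mean(np.abs(np.fft.rfft(records, axis=1)) ** 2, axis=0) / n
freqs = np.fft.rfftfreq(n, d=1 / fs)
# Shot noise is white, so this averaged spectrum is flat; squeezing shows
# up as the measured curve dipping below that flat reference level.
```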
In the normal language of squeezing experiments, what they see is an 8.1 dB reduction in the noise at the lowest point of curve “B”. That means that the noise in that frequency range is about 15% of what you would normally expect, which is pretty respectable.
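If you want to check that conversion, going from decibels to a linear fraction is a one-liner:

```python
# Decibels to a linear power ratio: dB = 10*log10(P/P0), so the
# remaining fraction of the shot-noise power is 10**(-dB/10).
reduction_db = 8.1
fraction = 10 ** (-reduction_db / 10)
print(f"{fraction:.3f}")   # 0.155, i.e. about 15% of the standard quantum limit
```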
Wayyyyy back at the beginning of this, I quoted Paul Lett as saying that if they’d known more about the field, they never would’ve tried this. What was all that about? Well, it turns out that this scheme is something that people thought of quite some time ago, but never got to work very well. The best value quoted in this paper is about a 0.2 dB reduction in noise, which is nothing. Most people in the squeezing field had given up on the idea as impractical, and had the NIST guys known the squeezing literature a little better, they probably wouldn’t’ve done the experiment.
Why did it work so much better? That’s not really clear to me, but it seems to be a function of the frequencies that they used. The previous experiments were done with the lasers tuned much closer to the atomic transitions, whereas the current work used lasers tuned much farther away. They also used higher intensity, and the polarizations used may also have played a role. There are just a ton of variables in this system, and it seems sort of like they got lucky with their particular choices of parameters.
Finally, why does anybody actually care about this stuff? Well, other than the fact that it’s just plain cool to be able to do this, there’s another way of looking at the squeezing results that makes the use more obvious. I already said it, in fact: every time you generate a conjugate photon, you also generate a probe photon. So, it’s not just that you have beams with small relative intensity fluctuations: you have correlated photon pairs. Every conjugate photon has a partner in the probe beam.
If you take these two beams, and shine them on two collections of rubidium atoms, you can start to do all sorts of fun quantum information things. Which is the whole reason the laser cooling group got into the four-wave-mixing business in the first place– they wanted a quick and easy way to generate pairs of photons to shine on rubidium BEC’s for quantum information experiments. The squeezing stuff is just a bonus.
So, score another one for the NIST paradigm.
McCormick, C.F., Boyer, V., Arimondo, E., Lett, P.D. (2007). Strong relative intensity squeezing by four-wave mixing in rubidium vapor. Optics Letters, 32(2), 178-180.
Cool — and Colin was my TA when I was a freshman!
Chad, thanks for blogging about our work! You’ve done a much better job explaining it than me. In the future I’m going to be sending curious people here for answers, rather than trying to stumble through it myself.
You’re absolutely right that this was an example of the NIST paradigm, as you call it. When we started working on this project, we didn’t know enough to know that it was “impossible”. We’ve learned a lot since then, and that was an important part of the process — using the NIST method doesn’t mean that you ignore the literature on previous work! In fact, that helped us figure out how to improve the results to something like 8 dB of observed squeezing and 11 dB of extrapolated squeezing. But the bottom line is that if we knew then what we know now about past work, we probably wouldn’t have tried this experiment in the first place.
As for why this technique works so much better than previous efforts, there are two key factors. The first is that most previous efforts to make squeezed light with four-wave mixing in atoms used just two levels of the atoms. All the beams had the same frequency, and the mixing that transferred light from one beam into the other was accompanied by an inevitable amount of spontaneous emission: photons with the same frequency as the pump and probe, but no correlation with a partner, and a random polarization. These photons created a lot of background light that couldn’t be blocked with a polarizing beam splitter, and overwhelmed the squeezing effect. Our method of using a lambda system really reduces that spontaneous emission, and gives enough four-wave mixing to generate good squeezing.
The second reason is more prosaic. Squeezing is incredibly sensitive to optical loss. If you put squeezed light through a beamsplitter, the light that gets through is less squeezed, by an amount corresponding to the fractional reflection. Every imperfect reflecting surface is like a weak beamsplitter, and reduces squeezing. We worked very hard to have good optical transparency, and a detector with a high enough detection efficiency that it was sensitive to large amounts of squeezing.
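To put numbers on that: the standard beamsplitter model of loss is a textbook relation (the efficiencies in the snippet are just illustrative, not our actual values), and it shows how quickly squeezing evaporates:

```python
import math

# Beamsplitter model of loss: transmission eta passes a fraction eta of
# the squeezed light and mixes in (1 - eta) of vacuum (shot-noise-level)
# fluctuations. V is the noise variance relative to the standard quantum
# limit, so V < 1 means squeezing.

def squeezing_after_loss(squeeze_db: float, eta: float) -> float:
    """Squeezing in dB surviving total transmission/detection efficiency eta."""
    v_in = 10 ** (-squeeze_db / 10)
    v_out = eta * v_in + (1 - eta)
    return -10 * math.log10(v_out)

# Illustrative numbers: 11 dB of squeezing viewed through 90% overall
# efficiency comes out around 7.7 dB.
print(round(squeezing_after_loss(11.0, 0.90), 1))
```

Note that at 50% efficiency you can never measure more than about 3 dB, no matter how much squeezing you start with, which is why we obsessed over transparency and detector efficiency.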
This method has been leading to some fun experiments in quantum imaging and other applications, and I hope you get back to NIST soon to see them all!