One of the things I’d like to accomplish with the current series of posts is to give a little insight into what it’s like to do science. This should probably seem familiar to those readers who are experimental scientists, but might be new to those who aren’t. I think that this is one of the most useful things that science blogs can do– to help make clear that science is a human activity like anything else, with its ups and downs, good days and bad.
To that end, I’m going to follow the detailed technical explanation of each of these papers with a post relating whatever anecdotes I can think of about how we did the experiments and wrote the papers (and particularly what my role was). This will be a sort of mellower version of the “True Lab Stories” series (I haven’t posted any of those for a while, either…)– there’s a “Eureka!” moment or two in there, and also some hard slogging, and a few things that will only be amusing to great big nerds, but I hope that it will be interesting reading even if you don’t feel like wading through the technical details.
Think of these as the DVD extras for the ResearchBlogging edition of my thesis…
For this particular paper, “Optical Control of Ultracold Collisions in Metastable Xenon,” the story that always comes to mind is how we ended up with that very nice graph that’s at the center of everything. Unlike most “typical data” graphs, that one was taken on the very first day we got the experiment working.
This experiment was done in the spring and summer between my first and second years of grad school. I had spent the previous summer working in the Phillips group, and was clearly going to join them for my thesis; Matt Walhout (now at Calvin College) was graduating, so it was natural for me to replace him as the grad student in the group. I came onto the xenon project just as they were starting to do collision experiments.
The first day we were doing this experiment, we spent a bunch of time futzing around with logic gates and so on to get the pulse sequence down right, and then hooked up the charged particle counters, and started looking for a signal. The way this worked was that Matt and I sat in the lab looking at the digital readouts on the counters, and wrote down the average values we were seeing– we’d watch them cycle through a few times (they took maybe 5-10 seconds to get each point), and sort of round off to whatever number of significant figures seemed reasonable (I usually argued for one more digit than he did, and I lost all of those). We were tuning the laser frequency by hand, using a variable resistor and a couple of nine-volt batteries to produce a control signal, and watching a spectrum analyzer to make sure that the laser frequency didn’t drift too far.
The way I remember it, we started at a large negative detuning, and moved closer to resonance, watching the signal we were looking for get bigger along the way. Just before we hit the resonance, it went crashing down, and then the effect of the laser switched from increasing the collision rate to decreasing the collision rate, right where it was supposed to. We traced the whole thing out for xenon-132, then repeated it for xenon-136, and were ecstatic to see the same behavior.
Then we spent a couple of days automating the data collection, and proceeded to take reams of data, almost literally. We eventually had a three-ring binder full of little graphs very much like the ones in the previous post, exploring all sorts of different parameters– laser intensity, trap number, different pulse sequences, and so on. When it came time to write the paper, though, the nicest looking graphs we had were the ones we got the first day, by hand.
The reason for this was that there was a little bit of human judgment applied to the by-hand data collection. When the control laser was tuned very close to the resonance frequency, it distorted the trap to a point where we didn’t really trust that we’d get reasonable numbers, so we skipped those points. When we did the automated collection, though (using a LabView program), the computer couldn’t identify those points, and just recorded garbage. Pretty much every one of the automated plots had at least a couple of points that were just wacky. In the end, we went with that first data set as the best of the lot.
We did have to come up with a way to indicate the distorted regions for the automated collection– if you look at the second figure in the previous post, you’ll notice a shaded region where we didn’t trust the data. This was done in a very ad hoc way– the LabView program stepped through the frequencies over a couple of minutes, and recorded four channels of data: the number of ions produced with the control laser on, the number produced with the control laser off, a frequency reference signal, and a data tag that we made using the same nine-volt-battery-and-resistor gadget we used for the original measurement. One of us would sit by the computer during the run, and when the trap started to look odd, we would twist the variable resistor, changing the voltage. When it went back to normal, we’d twist it the other way, and that provided two markers for the data analysis.
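If you were to redo that bookkeeping today in, say, Python, the suspect-region flagging would amount to something like the sketch below (the file name, column order, and voltage threshold are all invented for the sake of illustration; this isn’t the actual analysis, which was a LabView program and a pile of mid-90s data files):

```python
# Hypothetical reconstruction, not the actual LabView analysis: the file
# name, column order, and threshold below are all invented for illustration.
import numpy as np

def suspect_mask(tag_voltage, threshold=0.5):
    """Flag the scan points recorded between the two hand-made markers.

    tag_voltage : the nine-volt-battery "data tag" channel, one value per point.
    threshold   : how far (in volts) the tag has to move from its starting
                  level to count as a deliberate twist of the variable resistor.
    """
    baseline = tag_voltage[0]
    # Between the two twists, the tag sits away from its baseline value;
    # those are the points taken while the trap looked distorted.
    return np.abs(tag_voltage - baseline) > threshold

# Assumed four-column layout: ions with the control laser on, ions with it off,
# frequency reference, data tag.
ions_on, ions_off, freq_ref, tag = np.loadtxt("scan_xe132.txt", unpack=True)
bad = suspect_mask(tag)
print(f"{bad.sum()} of {bad.size} points fall in the shaded (untrusted) region")
```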
My main contribution to the paper writing was data processing– the other authors (Matt, Uwe Sterr, and Maarten Hoogerland) took most of the data, and they would give me the files on floppy disks. I sat at a computer in the break room, and turned the raw files into useful graphs: I converted the frequency reference into actual frequency (counting peaks from a Fabry-Perot interferometer to figure out an average frequency conversion for each file), took the ratios needed to find the suppression and enhancement signals, and marked the suspect data. I didn’t have a complete grasp of all the physics at the time, so the decisions about what data to take were mostly made by the other guys, but I made most of the judgment calls about how to present the data, and which files to throw away.
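For the curious, that processing chain amounts to something like the following sketch. This is Python, which is emphatically not what I used at the time, and the free spectral range value, the peak-finding details, and the file layout are stand-ins rather than the real numbers:

```python
# Illustrative sketch of the processing steps described above; the FSR value,
# peak-finding parameters, and file layout are assumptions, not the real ones.
import numpy as np
from scipy.signal import find_peaks

FSR_MHZ = 150.0  # assumed Fabry-Perot free spectral range

def mhz_per_point(freq_ref):
    """Average frequency conversion for one file, from counting F-P peaks."""
    peaks, _ = find_peaks(freq_ref, prominence=0.1 * np.ptp(freq_ref))
    spacings_crossed = len(peaks) - 1
    return spacings_crossed * FSR_MHZ / (peaks[-1] - peaks[0])

def on_off_ratio(ions_on, ions_off):
    """Collision signal with the control laser on, relative to laser off:
    below 1 is suppression, above 1 is enhancement."""
    return ions_on / ions_off

# Same assumed four-column layout as the sketch above.
ions_on, ions_off, freq_ref, tag = np.loadtxt("scan_xe132.txt", unpack=True)
step = mhz_per_point(freq_ref)               # MHz per scan point
detuning = step * np.arange(freq_ref.size)   # relative frequency axis
signal = on_off_ratio(ions_on, ions_off)
```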
(This time period is also when my Usenet addiction really took hold, as I had a lot of free time at the computer…)
This was my first experience with the “paper torture” process, and I remember being blown away by the level of detail of the comments being made. It seemed like every word in the paper was scrutinized, criticized, and changed, right down to “a” and “the.” It was a couple of years before I learned how lucky I was to have only Steve Rolston as a PI– he’s relatively laid back about paper torture compared to Bill Phillips.
The other thing that jumps out at me, looking back at the paper, is that the absolute collision rate measurement we did is described oddly. In the paper, it’s introduced as “a preliminary experiment,” and gets one paragraph. I remember it as one of the last things we did, not a “preliminary,” but that may just be because it was an enormous pain in the ass to do the analysis. We needed to vary the number and density of the atoms in our sample to extract the collision rate, and we could never get as wide a range as we wanted– when the density got too high, the images we used to measure it got all distorted, and the numbers were useless. When the number got too low, we could barely see the trap at all, and the numbers were useless. I repeated the same basic measurement about three years later, and didn’t do any better.
I’m not sure it was actually one of the last things we did, but I’m pretty sure it was one of the last numbers to get nailed down. Which makes it kind of amusing to see the whole miserable process glossed over as “a preliminary experiment.”
(It’s a justifiable statement, though, in that it’s one of the least significant results. The things that made it difficult were just technical issues, not anything interesting, so it really doesn’t rate more than one short paragraph, even though it was a major hassle.)
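(For anyone who wants the bookkeeping behind “vary the number and density”: the generic way this kind of trap-loss and ionization measurement gets modeled, in the standard schematic form and glossing over the geometric factors rather than quoting the exact expression from the paper, is something like

\[
\frac{dN}{dt} = -\alpha N - \beta \int n^2(\mathbf{r})\, d^3r ,
\]

where the ion count rate we were detecting is proportional to the two-body term. Pulling out the rate coefficient β means you have to know both the total number N and the density n, which is why we needed to push them over as wide a range as we could.)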
This paper gave me a kind of skewed view of what the research life is like. There I was, a wet-behind-the-ears grad student who had yet to even take the qualifiers, and the first experiment I was on just… worked. Bang, zoom, PRL, in a few months. “Gee, this is easy and fun,” I thought.
The next couple of years kind of beat that out of me. 1996 was particularly bad– an entire year in which every goddamn thing in the lab broke at one point or another, and I made no discernible progress on anything. Things turned around dramatically in 1997, though, leading to the next paper I’ll talk about…