{"id":9446,"date":"2014-06-16T10:12:17","date_gmt":"2014-06-16T14:12:17","guid":{"rendered":"http:\/\/scienceblogs.com\/principles\/?p=9446"},"modified":"2014-06-16T10:12:17","modified_gmt":"2014-06-16T14:12:17","slug":"on-black-magic-in-physics","status":"publish","type":"post","link":"http:\/\/chadorzel.com\/principles\/2014\/06\/16\/on-black-magic-in-physics\/","title":{"rendered":"On Black Magic in Physics"},"content":{"rendered":"<p>The latest in a long series of articles making me glad I don&#8217;t work in psychology was <a href=\"http:\/\/www.theguardian.com\/science\/head-quarters\/2014\/jun\/10\/physics-envy-do-hard-sciences-hold-the-solution-to-the-replication-crisis-in-psychology\">this piece about replication in the Guardian<\/a>. This spins off some <a href=\"http:\/\/www.spspblog.org\/simone-schnall-on-her-experience-with-a-registered-replication-project\/\">harsh criticism of replication studies<\/a> and a <a href=\"http:\/\/www.scribd.com\/doc\/225285909\/Kahneman-Commentary\">call for an official policy requiring consultation with the original authors<\/a> of a study that you&#8217;re attempting to replicate. The reason given is that psychology is so complicated that there&#8217;s no way to capture all the relevant details in a published methods section, so failed replications are likely to happen because some crucial detail was omitted in the follow-up study.<\/p>\n<p>Predictably enough, this kind of thing leads to a lot of eye-rolling from physicists, which takes up most of the column. And, while I have some sympathy for the idea that studying human psychology is a subtle and complicated process, I also can&#8217;t help thinking that if the font in which a question is printed is sufficient to skew the result of a study one way or the other, then maybe these results aren&#8217;t <em>really<\/em> revealing deep and robust truths about the way our brains work. 
Rather than demanding that new studies duplicate the prior studies in every single detail, a better policy might be to require some variation of things that ought to be insignificant, to make sure that the results really do hold in a general way. <\/p>\n<p>If you go to precision measurement talks in physics&#8211; and I went to a fair number at <a href=\"http:\/\/meeting.aps.org\/Meeting\/DAMOP14\/APS_epitome\">DAMOP this year<\/a>&#8211; there will inevitably be a slide listing all the experimental parameters that they flipped between different values. Many of these are things that you look at and say &#8220;Well, how could that make any difference?&#8221; and <em>that&#8217;s the point<\/em>. If changing something trivial&#8211; the position of the elevator in the physics building, say&#8211; makes your signal change in a consistent way, odds are that your signal isn&#8217;t really a signal, but a weird noise effect. In which case, you have some more work to do, to track down the confounding source of noise.<\/p>\n<p>Of course, that&#8217;s much easier to do in physics than psychology&#8211; physics apparatus is complicated and expensive, but once you have it, atoms are cheap and you can run your experiment over and over and over again. Human subjects, on the other hand, are a giant pain in the ass&#8211; not only do you need to do paperwork to get permission to work with them, but they&#8217;re hard to find, and many of them expect to be compensated for their time. And it&#8217;s hard to get them to come in to the lab at four in the morning so you can keep your experiment running around the clock.<\/p>\n<p>This is why the standards for significance are so strikingly different between the fields&#8211; psychologists (and biomedical researchers) are thrilled to see results that are significant at the 1% level, while in many fields of physics, that&#8217;s viewed as a tantalizing hint, and a sign that much more work is required. 
But getting enough subjects to hit even the 3-sigma level at which physicists become guardedly optimistic would quickly push the budget for your psych experiment to LHC levels. And if you&#8217;d like those subjects to come from <a href=\"http:\/\/psyccritiquesblog.apa.org\/2013\/08\/moving-beyond-the-weird-approach-in-psychology.html\">outside the WEIRD<\/a>, well&#8230;<\/p>\n<p>At the same time, though, physicists shouldn&#8217;t get too carried away. From some of the quotes in that Guardian article, you&#8217;d think that experimental methods sections in physics papers are some Platonic ideal of clarity and completeness, which I find really amusing in light of a conversation I had at DAMOP. I was talking to someone I worked with many years ago, who mentioned that his lab recently started using a <a href=\"http:\/\/www.nist.gov\/public_affairs\/releases\/frequency_combs.cfm\">frequency comb<\/a> to stabilize a wide range of different laser frequencies to a common reference. I asked how that was going, and he said &#8220;You know, there&#8217;s a whole lot of stuff they don&#8217;t tell you about those stupid things. They&#8217;re a lot harder to use than it sounds when you hear <a href=\"http:\/\/jilawww.colorado.edu\/YeLabs\/\">Jun Ye<\/a> talk.&#8221;<\/p>\n<p>That&#8217;s true of a lot of technologies, as anyone who&#8217;s tried to set up an experimental physics lab from scratch learns very quickly. Published procedure sections aren&#8217;t incomplete in the sense of leaving out critically important steps, but they certainly gloss over a lot of little details. <\/p>\n<p>There are little quirks of particular atoms that complicate some simple processes&#8211; I struggled for a long time with getting a simple saturated absorption lock going in a krypton vapor cell, because the state I&#8217;m interested in turns out to have hellishly large problems with pressure broadening. 
That&#8217;s fixable, but not really published anywhere obvious&#8211; I worked it out on my own before I talked to a colleague who did the same thing, and he said &#8220;Oh, yeah, that was a pain in the ass&#8230;&#8221; <\/p>\n<p>There are odd features of certain technologies that crop up&#8211; the frequency comb issue that my colleague mentioned at DAMOP was a dependence on one parameter that turns out to be sinusoidal. Which means it&#8217;s impossible to automatically stabilize, but requires regular human intervention. After asking around, he discovered that the big comb-using labs tend to have one post-doc or staff scientist whose entire job is keeping the comb tweaked up and running properly, something you wouldn&#8217;t really get from published papers or conference talks.<\/p>\n<p>And there are sometimes issues with sourcing things&#8211; back in the early days of BEC experiments, the Ketterle lab pioneered a new imaging technique, which required a particular optical element. They spent a very long time tracking down a company that could make the necessary part, and once they got it, it worked brilliantly. Their published papers were scrupulously complete in terms of giving the specifications of the element in question and how it worked in their system, but they didn&#8217;t give out the name of the company that made it for them. Which meant that anybody who had the ability to make that piece had all the information they needed to do the same imaging technique, but anybody without the ability to build it in-house had to go through the same long process of tracking down the right company to get one.<\/p>\n<p>So, I wouldn&#8217;t say that experimental physics is totally lacking in black magic elements, particularly in small-lab fields like AMO physics. (Experimental particle physics and astrophysics are probably a little better, as they&#8217;re sharing a single apparatus with hundreds or thousands of collaboration members.) 
<\/p>\n<p>The difference is less in the purity of the approach to disseminating procedures than in the attitude toward the idea of replication. And, as noted above, the practicalities of working with the respective subjects. Physics experiments are susceptible to lots of external confounding factors, but working with inanimate objects makes it a lot easier to repeat the experiment enough times to run those down in a convincing way. Which, in turn, makes it a little less likely for a result that&#8217;s really just a spurious noise effect to get into the literature, and thus get to the stage where people feel that failed replications are challenging their professional standing and personal integrity.<\/p>\n<p>It&#8217;s not impossible, though&#8211; there have even been retractions of particles that were claimed to be detected at the five-sigma level. And sometimes there are debates that drag on for years, and can involve some nasty personal sniping along the way.<\/p>\n<p>The really interesting recent(-ish) physics case that ought to be a big part of a discussion of replication in physics and other sciences is the story of &#8220;supersolid&#8221; helium, where a new and dramatic quantum effect was claimed, then challenged in ways that led to some heated arguments. Eventually, the original discoverers <a href=\"http:\/\/physics.aps.org\/articles\/v5\/111\">re-did their experiments, and the effect vanished<\/a>, strongly suggesting it was a noise effect all along. That&#8217;s kind of embarrassing for them, but on the other hand, speaks very well to their integrity and professionalism, and is the kind of thing scientists in general ought to strive to emulate. My sense is that it&#8217;s also more the exception than the rule, even within physics.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The latest in a long series of articles making me glad I don&#8217;t work in psychology was this piece about replication in the Guardian. 
This spins off some harsh criticism of replication studies and a call for an official policy requiring consultation with the original authors of a study that you&#8217;re attempting to replicate. The&hellip; <a class=\"more-link\" href=\"http:\/\/chadorzel.com\/principles\/2014\/06\/16\/on-black-magic-in-physics\/\">Continue reading <span class=\"screen-reader-text\">On Black Magic in Physics<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8,146,135,19,33,7,70,11,82],"tags":[],"class_list":["post-9446","post","type-post","status-publish","format-standard","hentry","category-academia","category-atoms_and_molecules","category-condensed_matter","category-experiment","category-in_the_news","category-physics","category-psychology","category-science","category-socialscience","entry"],"_links":{"self":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/9446","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/comments?post=9446"}],"version-history":[{"count":0,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/9446\/revisions"}],"wp:attachment":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/media?parent=9446"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/categories?post=9446"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/tags?post=9446"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":
true}]}}