{"id":6000,"date":"2012-01-23T12:25:15","date_gmt":"2012-01-23T12:25:15","guid":{"rendered":"http:\/\/scienceblogs.com\/principles\/2012\/01\/23\/notes-toward-a-toy-model-of-th\/"},"modified":"2012-01-23T12:25:15","modified_gmt":"2012-01-23T12:25:15","slug":"notes-toward-a-toy-model-of-th","status":"publish","type":"post","link":"http:\/\/chadorzel.com\/principles\/2012\/01\/23\/notes-toward-a-toy-model-of-th\/","title":{"rendered":"Notes Toward a Toy Model of the Arrow of Time"},"content":{"rendered":"<p>I&#8217;m fairly certain somebody has already done this, because it&#8217;s such an obvious idea. It&#8217;s a little beyond my cargo-cult VPython skills right at the moment, though (I can probably learn to do it, but not right now), and I none of the applets I Googled up seemed to be doing this, so I&#8217;m posting this sketchy description because I&#8217;ve spent some time thinking about it, and might as well get a blog post out of the deal.<\/p>\n<p>So, as we said <a href=\"http:\/\/scienceblogs.com\/principles\/2011\/12\/the_advent_calendar_of_physics_17.php\">back in the holiday season<\/a>, one of the most fundamental concepts in the modern understanding of thermodynamics and statistical physics is the notion of entropy. 
You can also argue that entropy is in some sense responsible for our perception of time&#8211; that is, the reason we see time marching forward into the future, not backwards into the past, is that entropy increases as we move forward, and it&#8217;s the increase in entropy that determines the arrow of time.<\/p>\n<p>We can define entropy using Boltzmann&#8217;s formula:<\/p>\n<blockquote>\n<p><img decoding=\"async\" src=\"http:\/\/scienceblogs.com\/principles\/wp-content\/blogs.dir\/467\/files\/2012\/04\/i-8927c26ba5fbd29e51c237c13f036e58-dec18_entropy.png\" alt=\"Boltzmann entropy formula: S = k log W\" \/><\/p>\n<\/blockquote>\n<p>which says that the entropy of a given arrangement of microscopic objects (atoms or molecules in a gas, say) is related to the number of possible ways that you can arrange those objects to produce states that are macroscopically indistinguishable. The more states that look the same on a coarse scale, the higher the entropy. This makes the arrow of time a sort of statistical property: entropy tends to increase because it&#8217;s easy for a collection of stuff to randomly move toward a high-entropy state (which you can do lots of ways) but unlikely that a random motion will take you to a low-entropy state (which can only be made a few ways).<\/p>\n<p>Boltzmann&#8217;s idea is simple and powerful, but it can be a little hard to do anything more than qualitative hand-waving with it at the intro level. It&#8217;s kind of hard to explain how you &#8220;count&#8221; microstates of things that are (classically) continuous variables, like the velocities of atoms in a gas, without getting infinite results. <\/p>\n<p>So, here&#8217;s my rough idea, that I might still try to code into a model for my timekeeping class this term: Rather than thinking about continuous variables, let&#8217;s think about a lattice of points that may or may not contain an atom. 
It&#8217;s easiest to picture in 1-d, where a low-entropy starting state might look something like this:<\/p>\n<blockquote>\n<p>1&nbsp;1&nbsp;1&nbsp;1&nbsp;1&nbsp;0&nbsp;0&nbsp;0&nbsp;0&nbsp;0<\/p>\n<\/blockquote>\n<p>This represents all of the &#8220;atoms&#8221; being on one side of the lattice representing space. Then you just allow each &#8220;atom&#8221; some probability of moving either left or right to an unoccupied space. So, for example, a few time steps later, the state might look like this:<\/p>\n<p><!--more--><\/p>\n<blockquote>\n<p>1&nbsp;1&nbsp;1&nbsp;1&nbsp;0&nbsp;1&nbsp;0&nbsp;0&nbsp;0&nbsp;0<\/p>\n<\/blockquote>\n<p>This is a state where the one atom on the boundary has shifted right one space. <\/p>\n<p>What does this have to do with entropy? Well, to use Boltzmann&#8217;s formula, we need to define a set of &#8220;macrostates&#8221; of the system that can be made up from the &#8220;microstates&#8221; in multiple ways. For this, we can imagine a &#8220;density&#8221; distribution for our line of numbers, which we&#8217;ll take as the number of atoms in each half-lattice. The total entropy will be the sum of the entropies for each of the halves.<\/p>\n<p>So, for the initial state above, you have five atoms in the five sites of the left half-lattice, which can only be done one way. You also have five vacancies in the right half-lattice, which can also only be done one way. Each of these halves has an entropy of zero (up to some possible additive constant, depending on how you do the counting).<\/p>\n<p>The second state has four atoms on the left, and one on the right. Each of these &#8220;macrostates&#8221; can be put together in one of five ways, so the &#8220;entropy&#8221; for each half is a constant times log(5). This is an increase in the entropy of the system. Some time later, you&#8217;ll have two atoms on the right and three on the left, each of which can be done 10 different ways, so the entropy increases to log(10) for each half. 
At which point you&#8217;ve hit the maximum entropy.<\/p>\n<p>So, we have a system where a purely random hopping from one spot to another leads to a clear increase in the entropy of the system, without having to put any explicit rules in place to generate that. The nice thing about this is that it&#8217;s purely combinatorial&#8211; even intro students can tally up the possibilities (for small numbers of sites) and see that the entropy as defined by Boltzmann does, indeed, increase.<\/p>\n<p>It should be relatively easy to code this up on a computer, too, at least for somebody with some familiarity with the right tools. (I&#8217;ve never done anything with arrays in VPython, though, which makes this difficult to do right.) This would also allow you to run it for longer times and larger numbers of states. It&#8217;s also easy to extend this to a two-dimensional array, using, say, the number of atoms in each quadrant of a square grid as the macrostates.<\/p>\n<p>The other nice thing about this is that it should make it possible to demonstrate that entropy <em>does<\/em> occasionally decrease&#8211; it&#8217;s perfectly possible for a random fluctuation to take you to a state with lower entropy than the previous state. It&#8217;s just highly unlikely to do so, because there are more ways to move to higher entropy than to move to lower entropy. And, again, it&#8217;s relatively easy to see this because you can readily count the states involved.<\/p>\n<p>So, there&#8217;s my toy model idea, which I&#8217;m sure is not remotely original. I&#8217;ll probably try to cobble together some version of this for use in the later part of my timekeeping class this term (after we get into more modern ideas about relativity and so forth). 
Though if anybody has such a program lying around, I wouldn&#8217;t object to being sent a working example in one of the tools I&#8217;ve got easy access to (VPython, Mathematica, or potentially MATLAB, though I don&#8217;t have that installed at the moment).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m fairly certain somebody has already done this, because it&#8217;s such an obvious idea. It&#8217;s a little beyond my cargo-cult VPython skills right at the moment, though (I can probably learn to do it, but not right now), and none of the applets I Googled up seemed to be doing this, so I&#8217;m posting&hellip; <a class=\"more-link\" href=\"http:\/\/chadorzel.com\/principles\/2012\/01\/23\/notes-toward-a-toy-model-of-th\/\">Continue reading <span class=\"screen-reader-text\">Notes Toward a Toy Model of the Arrow of Time<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"1","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[451,134,7,11,471],"tags":[],"class_list":["post-6000","post","type-post","status-publish","format-standard","hentry","category-computing","category-course_reports","category-physics","category-science","category-thermostatmech","entry"],"_links":{"self":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/6000","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/comments?post=6000"}],"version-history":[{"count":0,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/posts\/6000\/revisions"}],"wp:attachment":[{"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/me
dia?parent=6000"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/categories?post=6000"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/chadorzel.com\/principles\/wp-json\/wp\/v2\/tags?post=6000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
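The state-counting argument in the post (one way for the 5/0 split, five ways per half for the 4/1 split, ten ways per half for the 3/2 split) can be checked by brute force. This is a minimal sketch in plain Python, not the author's code; it enumerates all C(10,5) = 252 placements of 5 atoms on 10 sites and tallies them by the (left-half, right-half) occupation macrostate:

```python
from collections import Counter
from itertools import combinations

SITES, ATOMS, HALF = 10, 5, 5

# Tally how many microstates (placements of ATOMS atoms on SITES sites)
# realize each (left-half count, right-half count) macrostate.
multiplicity = Counter()
for occupied in combinations(range(SITES), ATOMS):
    left = sum(1 for i in occupied if i < HALF)
    multiplicity[(left, ATOMS - left)] += 1

for macro, count in sorted(multiplicity.items()):
    print(macro, count)
# (0, 5) 1
# (1, 4) 25
# (2, 3) 100
# (3, 2) 100
# (4, 1) 25
# (5, 0) 1
```

The multiplicities are C(5, l) * C(5, 5 - l) for l atoms on the left, matching the hand counts in the post: the all-left state can be made one way, the 4/1 split 5 * 5 = 25 ways, and the 3/2 split 10 * 10 = 100 ways, which is why the 3/2 density distribution is the maximum-entropy macrostate.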
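The random-hopping dynamics itself is also only a few lines. Here is a hypothetical sketch in plain Python (the update rule, with each atom attempting one hop per step to a randomly chosen neighbor, is my own illustrative choice; the post leaves the hopping probability unspecified), with entropy defined exactly as in the post, as the sum of log(W) over the two half-lattices with k = 1:

```python
import math
import random

def entropy(state):
    """Total entropy S = log(W_left) + log(W_right), where each W counts
    the ways to place the observed number of atoms in that half-lattice."""
    half = len(state) // 2
    left, right = sum(state[:half]), sum(state[half:])
    return math.log(math.comb(half, left)) + math.log(math.comb(len(state) - half, right))

def step(state, rng):
    """Give each atom one chance to hop to a randomly chosen neighbor,
    succeeding only if that site is unoccupied."""
    s = list(state)
    for i in rng.sample(range(len(s)), len(s)):  # visit sites in random order
        if s[i] == 1:
            j = i + rng.choice([-1, 1])
            if 0 <= j < len(s) and s[j] == 0:
                s[i], s[j] = 0, 1
    return s

rng = random.Random(42)
state = [1] * 5 + [0] * 5                  # low-entropy start: all atoms on the left
print(state, entropy(state))               # starting entropy is 0.0
for t in range(20):
    state = step(state, rng)
    print(state, round(entropy(state), 2))
```

Running this shows the entropy climbing toward its maximum of log(100) and then fluctuating near it, occasionally dipping back down, which is exactly the occasional-decrease behavior the post wants students to see.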