Simulate This

Rob Knop has another post to which I can only say “Amen!”, this time on the relationship between simulation and experiment (in response to this BoingBoing post about a Sandia press release):

Can simulations show us things that experiments cannot? Absolutely! In fact, if they didn’t, we wouldn’t bother doing simulations. This has been true for a long time. With experiments, we are limited to the resolution and capabilities of our detectors. In astronomy, for example, we don’t have the hundreds of millions of years necessary to watch the collision of a pair of galaxies unfold. All we can look at, effectively, are snapshots of pairs of galaxies that are at different stages of that dance. Simulations of galaxy interactions, on the other hand, can help us understand what happens when galaxies collide.
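To make concrete what that kind of simulation actually does, here’s a toy sketch in Python — nothing remotely like a production astrophysics code, just a softened Newtonian N-body integrator with made-up units, particle counts, and initial conditions:

    # Toy N-body sketch: evolve two small "galaxies" (clumps of point masses)
    # under Newtonian gravity with a leapfrog integrator. Illustrative only;
    # real galaxy-collision simulations use vastly more particles, gas physics,
    # and cleverer force solvers (tree codes, particle-mesh, etc.).
    import numpy as np

    G = 1.0      # gravitational constant in arbitrary code units
    EPS = 0.05   # softening length to avoid singular close encounters

    def accelerations(pos, mass):
        """Pairwise softened Newtonian accelerations for all particles."""
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos - pos[i]                    # vectors from particle i to the others
            r2 = (d * d).sum(axis=1) + EPS**2   # softened squared distances
            r2[i] = np.inf                      # no self-force
            acc[i] = (G * mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
        return acc

    def leapfrog(pos, vel, mass, dt, n_steps):
        """Kick-drift-kick leapfrog; returns a position snapshot at every step."""
        history = [pos.copy()]
        acc = accelerations(pos, mass)
        for _ in range(n_steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc
            history.append(pos.copy())
        return np.array(history)

    # Two clumps of particles thrown at each other -- crude stand-ins for galaxies.
    rng = np.random.default_rng(0)
    n = 50
    pos = np.vstack([rng.normal([-5, 0, 0], 0.5, size=(n, 3)),
                     rng.normal([+5, 0, 0], 0.5, size=(n, 3))])
    vel = np.vstack([np.tile([0.2, 0.05, 0.0], (n, 1)),
                     np.tile([-0.2, -0.05, 0.0], (n, 1))])
    mass = np.full(2 * n, 1.0 / (2 * n))

    trajectory = leapfrog(pos, vel, mass, dt=0.05, n_steps=2000)
    # Each frame of `trajectory` is one of the "snapshots" we could never wait
    # hundreds of millions of years to photograph.

The point isn’t the code itself; it’s that every frame the integrator spits out is constrained by the same Newtonian gravity we’ve tested against real orbits, which is what lets us trust the in-between frames we can never observe directly.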

However, in the end, if those simulations don’t have any connection to experimental or observational results, they don’t give us any insight at all. The simulations of the colliding galaxies must give us things that are consistent with what we do know from the snapshots we can take, or we will know the simulations are wrong. The physics underneath the simulations must be based on the theories that have been tested against observations and experiments.

This can extend to theoretical models more generally, not just computer simulations. Absent some concrete connection with experimentally observed reality, even a really elegant model is nothing but a toy.

The problem here may be a particularly unfortunate choice of wording on the part of the press office at Sandia. The key sentence is “This change in the position of simulations in science — from weak sister to an ace card — is a natural outcome of improvements in computing, Fang says,” and a lot depends on how you take “ace card.” But it’s worth repeating: experiments are always the key step in science, and nothing will change that.


12 comments

  1. Quoting from The Physics of War, p. 17 (pre-publication draft):
    “Definition 15: A model is a mapping of reality into comprehensibility.
    A model may always be expressed in informational symbology, which includes, but is not limited to, words and mathematics.
    Griff Callahan of Georgia Tech uses the definition:
    “Modeling is creating representations of specific human perceptions of reality, using imitative or analogous physical or abstract systems to serve as a basis for language.” [Callahan 1991]
    In simplest terms, a model is a representation of some aspect of reality in terms which can be absorbed and manipulated by the human mind. In general, a model will only express one facet of reality, although that facet may be complex. Ideally, a model will also be invertible or reversible. Unfortunately, this is not always the case.
    The process of developing a model is known as modeling.
    Definition 16: A simulation, on the other hand, is a tool for expressing the world in understandable terms, constructed from one or more models and a set of logical rules governing the models’ interactions.”
    A simulation can represent no reality that is not contained in the models embodied in the simulation.

  2. The DoD has an office dedicated to nothing but the verification and validation of models and simulations. The process is long and involved, and requires a whole lot of comparing real-world data to model and sim output data, over a whole lotta runs. The models and sims are then considered valid for certain applications within certain parameters. A lot of the machines I administer do nothing but run scenarios over and over and over again, comparing the model data to real-world data for the software we’ve written, or for software we’re doing VV&A on as a non-involved third party. (Our own software has to be V&V’d by a third party before accreditation as well; we don’t know in advance who will do it, and generally the originators of the VV&A work the DoD hands us don’t know beforehand that we’ll be the ones doing their V&V either.)

    Of course, we also do a whole lot of modelling and sim work using accredited models and sims as a means of doing some feasibility testing on new optics and radar systems before spending tons o’cash on building something that might not even have a remote chance of working the way we want or need.

  3. Also, we use models and sims on real-world data for the purposes of tuning targeting and tracking algorithms and to find new ways of using existing systems that we have good working models for.

    One thing I left out above: the VV&A process also specifies margins of error, since there will ALWAYS be some. The real world is chaotic, and modelling something as seemingly simple as, say, an exhaust plume (HAHAHAHAHAHAHAHA!) produces very pretty output in a sim, but of course when you launch the thing in open atmosphere, all these other factors start weighing in and making themselves known.
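    In cartoon form, the acceptance side of that check looks something like the sketch below — not our actual tooling or margins, just the shape of the comparison, with stand-in functions and made-up numbers:

        # Cartoon of a validation pass: run the sim over many scenarios and flag
        # any case where it disagrees with recorded real-world data by more than
        # the accredited margin of error. Everything here is a toy stand-in.
        import numpy as np

        MARGIN = 0.05   # hypothetical accredited relative-error tolerance
        rng = np.random.default_rng(0)

        def run_simulation(scenario):
            # Stand-in for the accredited model: a toy response curve.
            return 1.0 + 0.1 * scenario

        def real_world_measurement(scenario):
            # Stand-in for recorded test-range data: the same curve plus noise.
            return 1.0 + 0.1 * scenario + rng.normal(0.0, 0.02)

        def validate(scenarios, margin=MARGIN):
            failures = []
            for s in scenarios:
                sim = run_simulation(s)
                obs = real_world_measurement(s)
                rel_err = abs(sim - obs) / max(abs(obs), 1e-12)
                if rel_err > margin:
                    failures.append((s, rel_err))
            return failures

        print(validate(range(100)))   # anything listed here failed the tolerance check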

  4. All science is modeling. F=ma is modeling. We take the observed world and try to generalise it, simplify it, and most importantly write it down in such a way that we can make predictions about future events.

    I’m puzzled by the number of people who think that computer modeling is somehow different. The equations are larger and more complex, and we can carry out iterative solutions, but basically all we’re doing is fitting an equation to observed data. This is no different from what people have been doing for years.
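    To make that concrete with a toy example (made-up numbers, one-parameter fit): measure force against acceleration for a single object, and “fitting the equation to the data” is just recovering the mass that F = ma predicts.

        # Toy version of "fitting an equation to observed data".
        # Fake force-vs-acceleration measurements; recover the mass by least squares.
        import numpy as np

        rng = np.random.default_rng(1)
        true_mass = 2.5                               # kg, invented for the example
        acceleration = np.linspace(0.5, 10.0, 20)     # m/s^2
        force = true_mass * acceleration + rng.normal(0.0, 0.3, acceleration.size)

        # One-parameter least-squares fit of F = m*a through the origin.
        fitted_mass = (force * acceleration).sum() / (acceleration ** 2).sum()
        print(f"fitted mass = {fitted_mass:.2f} kg (true value {true_mass} kg)")

    A big computer simulation is the same move writ large: more parameters and more iterations, but still an equation being matched to observed data.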

  5. You are clearly a special interest group.

    Also, it should be noted that with computer simulations it is easier to get the “correct” result without anyone noticing (a la Diebold), but.

    On a less snarky note, I went to a talk by Phil Anfinrud, National Institutes of Health, a few days ago. Part of his talk was about how they are using very detailed models to come up with protein structures and then generating what those structures would give under diffraction. They then compare the experimental data, which does not give enough information to find the structure, with the simulated data, and think that they can assign structures this way. The advantage is that you don’t need to crystallize the proteins and hence are less likely to have induced some sort of non-natural structure.
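    As I understood it, the logic is roughly the sketch below — toy code with invented diffraction curves, nothing from Anfinrud’s group:

        # Toy version of "assign the structure whose simulated diffraction best
        # matches the measured pattern". Candidate curves are invented; a real
        # calculation would generate them from atomic models of the protein.
        import numpy as np

        rng = np.random.default_rng(2)
        q = np.linspace(0.1, 2.0, 200)   # scattering-vector grid, arbitrary units

        # Pretend simulated diffraction patterns for three candidate structures.
        candidates = {
            "fold A": np.exp(-q) * (1 + 0.5 * np.cos(6 * q)),
            "fold B": np.exp(-q) * (1 + 0.5 * np.cos(8 * q)),
            "fold C": np.exp(-q) * (1 + 0.5 * np.cos(10 * q)),
        }

        # "Measured" pattern: fold B plus noise, standing in for the experiment.
        measured = candidates["fold B"] + rng.normal(0.0, 0.02, q.size)

        def chi_square(simulated, observed, sigma=0.02):
            return ((simulated - observed) ** 2 / sigma ** 2).sum()

        best = min(candidates, key=lambda name: chi_square(candidates[name], measured))
        print("best-matching candidate:", best)   # picks fold B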

  6. “Also, it should be noted that with computer simulations it is easier to get the “correct” result without anyone noticing (a la Diebold), but.”

    I don’t actually see how it’s much more difficult to falsify computational results than experimental.

  7. it IS much more difficult to falsify computational results simply because there are a lot of “trade secrets” involved: various parameters, code, etc.

    It is much easier to take a piece of copper and measure its conductivity than to go through the trouble of reproducing the code and all of the parameters used to calculate copper’s Fermi surface, for example.

    Very often simulations produce conflicting results and it’s not clear how to reconcile this. It’s also quite possible that they are “both right” within the model/parameters they used.

    My biggest problem with simulations is that a lot of them are rather detached from experiment. Or, they are used to justify already known experimental results after the fact.

    I also get ticked off when people use the phrase “computer experiment”, as in “Superconducting hydrogen under pressure has been observed for the first time in a computer experiment”.

    People use this in talks too; for example, someone will say: “Theory predicts that Au nanoparticles can become catalytic at a certain size – and sure enough, we have conducted experiments that confirmed this.” What they omit is that their “experiments” are simulations.

  8. “I’m puzzled by the number of people who think that computer modeling is somehow different. The equations are larger and more complex, and we can carry out iterative solutions, but basically all we’re doing is fitting an equation to observed data. This is no different from what people have been doing for years.”

    Given enough parameters, you can simulate an elephant. Add a couple more, and you can make the elephant wag its tail.

    — My PhD advisor

  9. “…I’m puzzled by the number of people who think that computer modeling is somehow different….”

    Part of this might be the level of approximation needed to make simulations tractable. In many cases the equations from theory would simply take too long for a computer to solve, so layers of approximation are piled on. Naturally, when computers were slower, the approximations were of bigger consequence, and results from simulations of even simple systems didn’t come close to experiment. That’s probably why many people saw, and still see, simulation in a dim light.

  10. On a less snarky note, I went to a talk by Phil Anfinrud, National Institutes of Health, a few days ago. Part of his talk was about how they are using very detailed models to come up with protein structures and then generating what those structures would give under diffraction. They then compare the experimental data, which does not give enough information to find the structure, with the simulated data, and think that they can assign structures this way. The advantage is that you don’t need to crystallize the proteins and hence are less likely to have induced some sort of non-natural structure.

    That sounds dubious to me, but I’m still holding a grudge against the whole field of protein folding simulators for a colloquium talk that I had to sit through about ten years ago.

    (The speaker spent a good 45 minutes talking about the theoretical method they used, and the approximations they made, and the computational method, in thick jargon that I could barely follow, and then at the very end said “So we get this configuration, and when we compare it to experimental results, it doesn’t agree at all. Not even close. We have no idea why.”)

    (Worst. Colloquium. Ever.)

  11. This is my perspective as a liquid-state theorist. Basically, in this little corner of science, simulation is an intermediate state between theory and experiment for whatever we come up with.

    Basically, if I come up with some theory of structure, dynamics, etc., for a model system, I need to do two things.

    1. Compare against experiment. I feel an incredible amount of sympathy here for the neutron diffraction, x-ray, and NMR jocks, because we ask them to get info from the real world that’s incredibly exacting. At any rate, experiment, where possible, is *always* first and foremost.

    2. That said, we also *must* compare against simulation, because we also have to check the mathematics. The term of art is that a simulation, ***major caveat being, if the simulation is run CORRECTLY***, is “Exact for the Model”. Meaning, for example, that a simulation of a Newtonian system, using as many exact methods as we can afford, is a “best” answer for the model under examination, and thus the **mathematical** test case to compare the particular theory to.

    Finally, Prof. Orzel, given what much of liquid theory is trying to do, I too have sat through all too many protein folding talks. The protein folders know all too well that the vast majority of their techniques are pretty well divorced from the “real” physics. However, and here’s the real reason that simulation is dominant across that and many other fields, **it’s the only recourse; theory hasn’t caught up with them yet, and they have no other choice.**

    Thanks for reading,
    Kip

  12. Ponderer, that was John von Neumann your supervisor was quoting (more or less; JvN said 4 parameters would make an elephant, 5 to wag its tail).
