Peter Woit, Not Even Wrong

The two most talked-about books in physics this year are probably a pair of anti-string-theory books, Lee Smolin’s The Trouble With Physics, and Peter Woit’s Not Even Wrong, which shares a name with Jacques Distler’s favorite weblog. I got review copies of both, but Not Even Wrong arrived first (thanks, Peter), and gets to be the first one reviewed.

Of course, I’m coming to the game kind of late, as lots of other high-profile physics bloggers have already posted their reviews, and various magazine reviews have been out for months. Peter has collected a bunch of links in various posts. I don’t have a great deal to say about the book that other people haven’t said already.

There are basically two parts to this book: the first is a fairly breezy history of particle physics from the early days of quantum theory up to the present. The second part is the polemical stuff that you read on the blog. They take up roughly equal amounts of text, but the first part requires more effort to read, and the second part will sell more books.

(Further thoughts below the fold:)

The problem with the book is that it’s not terribly clear who it’s written for. The history of particle theory is very concise, but rather difficult to follow if you don’t go in with at least a vague idea of what the Standard Model is. While Woit makes the admirable decision to avoid peppering the text with equations, he’s unable to avoid quite a bit of mathematical jargon:

The SU(5) GUT also had nothing to say about the Higgs particle or the mechanism for vacuum symmetry breaking, and in fact, it made this problem much worse. The vacuum state now needed to break not just the electroweak symmetry but much of the rest of the SU(5) symmetry. Another set of Higgs particles was required to do this, introducing a new set of underdetermined parameters into the theory. Whereas the energy scale of the electroweak symmetry breaking is about 250 GeV, the SU(5) symmetry breaking had to be at the astronomically high scale of 10^15 GeV.

All those terms are defined in the text, but even with that, it can be heavy going to get through some of the historical sections. And it only gets worse when string theory enters the picture. I’m not sure a lay reader would have any chance of getting through it, though for someone with at least a minimal knowledge of the state of things, it’s a nice summary of the history of particle theory. I’m just not sure how many people like that there are who will want to read the book…

The polemical part is an easier read, though bits of higher math crop up there, too. If you read Peter’s blog, you pretty much know what you’re going to get: string theory makes no predictions, it’s not even a theory, string theorists have an unhealthy dominance over the field of particle physics, etc. His basic case is laid out very clearly and compellingly, and without the petty sniping that often plagues the blog, and if you’d like a concise summary of the problems with the string theory enterprise, this is a good place to find it.

Those two parts are pretty good, as far as they go. The book could have been really excellent with the addition of a third part, providing a compelling alternative to string theory, or at least making a strong case for some alternative theory. Unfortunately, no such part is forthcoming.

That’s both an accurate description, and kind of depressing. As much as string theory appears to be chasing its own ten-dimensional tail, nothing else looks a whole lot better. Smolin’s book presumably will make a stronger argument for Loop Quantum Gravity (I haven’t read it yet), but Woit isn’t really a partisan of any particular alternate theory, which means he doesn’t have an alternative to push, and while he gives brief summaries of some alternative theories, he doesn’t make any of them sound terrifically compelling, either.

Which is kind of his main point, and a big part of his beef with string theory. None of the theories we have at the moment really work, and it’s kind of questionable whether any of them are even on the right track. Real progress may turn out to require dramatically new ideas, but it’s not clear that the current arrangement is going to allow that.

One interesting note, at least to me, was the suggestion late in the book that string theory should become more like pure mathematics:

Mathematicians have a very long history of experience with how to work in the speculative, postempirical mode that [John] Horgan [in The End of Science] calls ironic science. What they learned long ago was that to get anywhere in the long term, the field has to insist strongly on absolute clarity of the formulation of ideas and the rigorous understanding of their implications. […]

To mathematicians, what is at issue here is how strongly to defend what they consider their central virtue, that of rigorously precise thought, while realizing that a more lax set of behaviors is at times needed to get anywhere. Physicists have traditionally never had the slightest interest in this virtue, feeling they had no need for it. This attitude was justified in the past when there were experimental data to keep them honest, but now perhaps there are important lessons they can learn from the mathematicians. To be really scientific, speculative work must be subject to a continual evaluation as to what its prospects are for getting to the point of making real predictions. In addition, every effort must be made to achieve precision of thought wherever possible and always to be clear about exactly what is understood, what is not, and where the roadblocks to further understanding lie.

This is interesting to me (and it always comes back to me, because this is my blog…) because I’m currently team-teaching an introductory physics course with a math professor. I’ve sat through three introductory calculus lectures in the last week, which have reminded me just how much time mathematicians spend on precisely defining terms. I always hated that as a student, which is why I’m an experimental physicist these days, but confronted with some of what’s going on in string theory (both Woit’s description, and some colloquia I’ve seen on the topic), I can sort of see the point.

Which brings me around to the third element that is present in the book (as opposed to the clear alternative theory that I would’ve liked to find): there’s a lot of excellent material in here about the interaction between mathematics and physics over the years. It’s been a troubled relationship, with each side taking a fairly dim view of the other for much of the history of modern physics, and Woit does a very nice job of describing the various falling-outs and reconciliations over the years as the two fields have moved apart and back together. This is largely orthogonal to the physics argument, but in some ways, the story of the interplay between theoretical physics and pure mathematics is the most interesting part of the book.

In the end, this is a fairly idiosyncratic book, with lots of different parts that add up to an interesting, distinctive, and personal look at the state of the field. I’m not sure I would consider it a definitive treatment of the subject, but if you know a bit about particle theory, and would like to learn more about its troubled history, it’s a pretty good read.

16 comments

  1. As a layman who has great difficulty balancing his checkbook, I’ve read numerous books and watched many science programs that discuss string theory, M-branes and the like. Outside of the really fun, sci-fi ideas they convey, I have taken from these sources that the only real “proof” these ideas have is that the math suggests that it’s an accurate model of the universe, but that there isn’t a hell of a lot of experimental evidence to back it up.

    I’ve also gathered that only now, or only within the next few years, will we actually have the kind of technology to carry out experiments to confirm or invalidate String Theory et al.

    Have you posted a “this is what String Theorists hold” vs. “this is what the critics of String Theory say” breakdown of the world views? Is there some kind of overriding caveat to take away from the debate?

  2. Chad,

    Thanks a lot for the very accurate review of the book, especially for noticing what I think is the strongest part of the book, the part about the interaction of math and physics. You also noticed something that most reviewers seem to miss: that I don’t at all agree with the common criticism that the problem with string theory is that it lacks a physical idea and is “too mathematical”. The problem is with the physical idea (a string propagating in 10 dimensions), and with the insistence on only looking at mathematics that can be connected up with this. Until some new experimental guidance comes along, pursuing ideas inspired by mathematics is one of the few tactics available to theorists.

    The book is written in an unusual way. It was an attempt to see if I could write something that would be of interest both to people with some training in the subject, and lay people without this training. There are sections of the book like the one you quote (and worse, see the material on TQFT), which relatively few will appreciate, but the hope was that a reader willing to skip some sections that were too difficult would still be able to follow the rest and get something out of it. I also decided to stick to trying to just give the simplest, clearest, short explanation of concepts instead of giving much longer, more elaborate attempts to convey the idea.

    I’ve been pleased that even many people without much background seem to enjoy the book. But having a background of at least having read a book like Brian Greene’s would help a lot, and almost anyone who insists on not going forward until they completely understand what they have read is likely to not get through the book.

    As for alternatives, I certainly wish I had some excellent ones to sell. My own ideas about alternatives are highly speculative and preliminary; they’re reflected to some extent in the repeated mention of the role of representation theory in past successes. But I think physics doesn’t need more overhyping of very speculative ideas to the general public; people should have something more solid before trying to sell it to a wide audience.

  3. You won’t get much LQG in Smolin’s book. He wrote a different one a few years back called “Three Roads to Quantum Gravity” that had more explanation. I would say Smolin’s book is more sociology of science.

  4. Theoreticians and mathematicians micturating into their own breezes fashionably eschew real-world experimentation. Tidal braking increases Earth’s day by 0.0023 sec/century, causing lunar recession of 3.8 cm/year. There *must* be more to gravitation than a symmetric Einstein curvature tensor:

    http://en.wikipedia.org/wiki/Einstein-Cartan_theory
    http://arxiv.org/abs/gr-qc/9309027
    http://arxiv.org/abs/gr-qc/0608090
    On a Completely Antisymmetric Cartan Tensor

    Nobody knows if left and right hands fall identically. Nobody has ever looked. Physics does not ever encounter chiral mass arrays. That should change. Somebody should look:

    http://www.mazepath.com/uncleal/lajos.htm
    $100 in consumables, two days’ work, 3×10^(-18) sensitivity

    Few skyscrapers are successfully built starting at the third floor.

  5. Isn’t the basic problem in theoretical physics that the current theories are good enough? In other words, there is no experimental evidence, or very little, that contradicts any of the current physical theories in their domain of applicability. If this is the case, why bother trying to alter the theories? Why not just accept them until there is a problem? Why worry about unification when there is no way to test it? Isn’t it basically true that no matter what theory of quantum gravity is produced, it is extremely unlikely that we will ever produce any convincing experimental evidence for it in the near future?

  6. Assman,

    Your point is well-taken (and often made) but one should bear in mind that (1) a theory’s domain of applicability is an inherently fuzzy notion, and (2) there are observational domains (in parts of astrophysics and cosmology) where quantum gravity might have a bearing on observations already made or conceivable in the fairly near future.

    Furthermore, in the absence of a good theory, it is not that easy to say which observations might be made or prove to be relevant. What would have been made of the detection of the microwave background in 1965 if no serious thought had already been given to processes in the early universe?

    In my opinion, the main problem is the profligacy of the theoretical alternatives. The usual response to this is to reiterate the need for a larger and more detailed collection of observations against which to test these theoretical alternatives. Instead, I think the focus must be on finding theoretical principles that reduce the number of alternatives, an approach that was characteristic of Einstein in the early part of his career. Of course this returns us to the question of how much inherent arbitrariness there is in nature, that can only be characterized observationally, and how it relates to the law-like features of the physical world. At this point I don’t think there is any way to escape the necessity for deep thought on this question, to which Isaac Newton provided the first clear and fruitful answer.

  7. Isn’t the basic problem in theoretical physics that the current theories are good enough? In other words, there is no experimental evidence, or very little, that contradicts any of the current physical theories in their domain of applicability. If this is the case, why bother trying to alter the theories? Why not just accept them until there is a problem?

    I don’t think you can really say that there’s no experimental evidence of physics beyond current theories. If nothing else, we have absolutely no clue what dark matter is, or what’s driving the expansion of the universe. There definitely are some things in heaven and earth that are unaccounted for in our (natural) philosophy.

    It’s true, though, that the evidence of new physics that we do have is fragmentary and not terribly illuminating. It doesn’t really point in any clear direction, other than outward from the Standard Model.

    But it is reason enough to be looking for new theories.

  8. Chad,

    You mentioned dark energy, which is supposed to be due to the acceleration of the universe overcoming the supposed long-range gravitational slowing of distant receding mass by the mass nearer to us, but this is a speculative interpretation which can be disproved and replaced with a factual theory that already exists and is suppressed off arXiv by stringers:

    ‘…the flat universe is just not decelerating, it isn’t really accelerating…’ – Prof. Phil Anderson, Nobel Laureate, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

    This is the key to the current error over dark energy. Now for the fact explaining the data without dark energy.

    The Yang-Mills Standard Model gauge boson radiation: the big-bang-induced redshift of gravity gauge bosons (gravitons) mediating long-range gravitation means that the gravity coupling strength is weakened over large ranges.

    I’m not talking about the inverse square law, which is valid at short range, but about the value of the gravity strength G. I mean that if gauge boson exchange does mediate gravity interactions between masses around the massive expanding universe, those gauge bosons (gravitons, whatever) will be redshifted in the sense of losing energy due to cosmic expansion over vast distances.

    Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why gravity is weakened over vast cosmic distances, which in turn is why the big bang expansion doesn’t get slowed down by gravity at immense distances. (What part of this can’t people grasp?)

    Hence no dark energy, hence cosmological constant = 0.

    D. R. Lunsford published a mathematical unification of gravity and electromagnetism which does exactly this, proving cosmological constant = 0. I wrote a comment about this on Prof. Kaku’s “MySpace” blog but then he deleted the whole post since his essay about Woit contained some mistakes, so my comment got deleted.

    So, if you don’t mind, I’ll summarise the substance briefly here instead, so that the censored information is at least publicly available, hosted by a decent blog:

    Lunsford has been censored off arXiv for a 6-dimensional unification of electrodynamics and gravity: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    Lunsford had his paper first published in Int. J. Theor. Phys. 43 (2004) no. 1, pp.161-177, and it shows how vicious string theory is that arXiv censored it. It makes definite predictions which may be compatible at some level of abstraction with Woit’s ideas for deriving the Standard Model. Lunsford’s 3 extra dimensions are attributed to coordinated matter. Physically, the contraction of matter due to relativity (motion and gravity both can cause contractions) is a local effect on a dimension being used to measure the matter. The cosmological dimensions continue expanding regardless of the fact that, for example, the gravity-caused contraction in general relativity shrinks the Earth’s radius by 1.5 millimetres. So really, Lunsford’s extra dimensions are describing local matter whereas the 3 other dimensions describe the ever expanding universe. Instead of putting one extra dimension into physics to account for time, you put 3 extra dimensions in so that you have 3 spacelike dimensions (for coordinated matter, such as rulers to measure contractable matter) and 3 expanding timelike dimensions (currently each on the order of 15,000 million light-years). (This is anyway how I understand it.)

    Lunsford begins by showing the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-d spacetime inadequate: ‘… We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. … Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible – no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable … It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement …’

    He shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism: ‘One striking feature of these equations … is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behavior. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so this theory explains why ordinary general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularized.’

    A major important prediction Lunsford makes is that the unification shows the cosmological constant is zero. This abstract prediction is entirely consistent with the Yang-Mills Standard Model gauge boson radiation: redshifted gauge bosons (for long range gravitation) mean gravity coupling strength is weakened over large ranges.

    Just as photons lose energy when redshifted, the cosmic expansion does the same to gauge bosons of gravity. This is why the expansion doesn’t get slowed down by gravity at immense distances. Professor Philip Anderson puts it clearly when he says that the supernova results showing no slowing don’t prove a dark energy is countering gravity, because the answer could equally be that there is simply no cosmological-range gravity (due to weakening of gauge boson radiation by expansion caused redshift, which is trivial or absent locally, say in the solar system and in the galaxy).

    Nigel

  9. Assman,

    It’s not a problem for engineers. Gravity and the nuclear forces are largely decoupled from one another, so if we’re interested in practical problems, we can first solve the gravitational physics and then solve the particle physics. The problem is a conceptual one: we need both GR and the Standard Model to describe a lot of different phenomena, for instance, the way water flows downhill and the way that our sun radiates. The problem with this is that the way GR describes reality is incompatible with the way the Standard Model describes reality. It’s not really a big deal, except that it means one or both of our theories is fundamentally wrong.

  10. A.J.,

    “Gravity and the nuclear forces are largely decoupled…”

    The range of the weak nuclear force is limited by the electroweak symmetry breaking mechanism. Within a certain range of the fundamental particle, the electroweak symmetry exists. Beyond that range, it doesn’t because weak gauge bosons are attenuated while electromagnetic photons aren’t.

    The nuclear force is intricately associated with gravitation because mass (inertial and gravitational) arises from the vacuum Higgs field or whatever nuclear mechanism shields the weak nuclear force gauge bosons.

    There are two ways of approaching unification. One way is to look at the particles of the Standard Model, which gives you a list of characteristics of each particle which you then have to try to produce from some mathematical set of symmetry groups such as SU(3)xSU(2)xU(1). However you still need in this approach to come up with a way of showing why this symmetry breaks down at different energy scales (or different distances that particles can approach each other in collisions).

    The other way of approaching the problem is to look at the force field strengths in an objective way, as a function of distance from a fundamental particle:

    Take a look at http://thumbsnap.com/v/LchS4BHf.jpg

    The plot of interaction charge (coupling constant alpha) versus distance is better than interaction charge versus collision energy, because you can see from the graph intuitively how asymptotic freedom works: the strong nuclear charge falls over a certain range as you approach closer to a quark, and this counters the inverse square law which says force ~ (charge/distance)^2, hence the net nuclear force is constant over a certain range, giving the quarks asymptotic freedom over the nucleon size. You can’t get this understanding from the usual plots of interaction coupling constants versus collision energies.
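
    To make the running-with-energy picture concrete, here is a minimal numerical sketch using the standard one-loop QCD formula (the formula is textbook physics, not part of the argument above; alpha_s(M_Z) ~ 0.118 and n_f = 5 flavours are measured inputs, not outputs of any model):

    import math

    def alpha_s(Q_GeV, alpha_s_mz=0.118, M_Z=91.19, n_f=5):
        # Standard one-loop QCD running coupling: the strong charge
        # weakens as the probe energy Q grows (asymptotic freedom).
        b0 = (33 - 2 * n_f) / (12 * math.pi)
        return alpha_s_mz / (1 + b0 * alpha_s_mz * math.log(Q_GeV**2 / M_Z**2))

    for Q in (10.0, 91.19, 1000.0):
        print(f"alpha_s({Q:g} GeV) ~ {alpha_s(Q):.3f}")

    Higher collision energy means a smaller distance of closest approach, so this is just the energy-space version of the distance plot linked above.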

    So, all that fundamental force “unification” means is that as you collide particles harder, they get closer together before bouncing off due to Coulomb (and/or other force field) scattering, and at such close distances all the forces have simple symmetries such as SU(3)xSU(2)xU(1).

    Put another way, the vacuum filters out massive, short-range particles over a small distance, which shields out SU(3)xSU(2) and leaves you with only a weakened U(1) force at great distances. It is pretty obvious, if you are trained in dielectric materials like capacitor dielectrics, that Maxwell’s idea of polarized molecules in an aether is not the same as the polarized vacuum of quantum field theory.

    Vacuum polarization requires a threshold electric field strength which is sufficient to create pairs in the vacuum (whether the pairs pop into existence from nothing, or whether they are simply being freed from a bound state in the Dirac sea, is irrelevant here).

    The range for vacuum polarization around an electron is ~10^-15 metre: Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s Introductory Lectures on Quantum Field Theory (http://arxiv.org/abs/hep-th/0510040) has a section explaining pair production in the vacuum and gives a threshold electric field strength (page 85) of 1.3 x 10^16 V/cm, which is on the order (approximately) of the electric field strength at 10^-15 m from an electron core, the limiting distance for vacuum polarization. Outside this range, there is no nuclear force and the electromagnetic force coupling constant is alpha ~ 1/137. Moving within this range, the value of alpha increases steadily because you see progressively less and less polarized (shielding) vacuum between you and the core of the particle. The question then is: what is happening to the energy that the polarized vacuum is shielding? The answer is that the attenuated energy is conserved, so it is used to give rise to the short-range strong nuclear force.
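
    That threshold is the Schwinger critical field, E_c = m^2.c^3/(e.hbar), and the quoted figure can be checked in a few lines (the constants below are standard values; only their use as a vacuum-polarization cut-off range is the speculative part):

    # Schwinger critical field for vacuum pair creation: E_c = m^2 c^3 / (e * hbar)
    m_e = 9.109e-31    # electron mass, kg
    c = 2.998e8        # speed of light, m/s
    e = 1.602e-19      # elementary charge, C
    hbar = 1.055e-34   # reduced Planck constant, J*s

    E_c = m_e**2 * c**3 / (e * hbar)   # volts per metre
    print(f"E_c ~ {E_c:.2e} V/m = {E_c / 100:.2e} V/cm")   # ~1.3e16 V/cm, as quoted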

    When you get really close to the particle core, there is little polarized vacuum between you and the core, so there is little shielding of the electromagnetic charge by polarization, hence there is at that distance little shielded electromagnetic field energy available to cause nuclear binding, which is why the strong nuclear charge falls at very short distances.

    As for the electric charge differences between leptons and quarks: when you have three charge cores in close proximity (sharing the same overall vacuum polarization shell around all of them), the electric field energy creating the vacuum polarization field is three times stronger, so the polarization is three times greater, which means that the electric charge of each downquark is 1/3 that of the electron. Of course this is a very incomplete piece of physical logic, and leads to further questions where you have upquarks with 2/3 charge, and where you have pairs of quarks in mesons. But some of these have been answered: consider the neutron, which has an overall electric charge of zero; where is the electric field energy being used? By conservation of electromagnetic field energy, the reduction in electric charge indicated by fractional charge values due to vacuum polarization shielding implies that the energy shielded is being used to bind the quarks (within the asymptotic freedom range) via the strong nuclear force. Neutrons and protons have zero or relatively low electric charge for their fundamental particle number because so much energy is being tied up in the strong nuclear binding force, the ‘color force’.

    The reason why gravity is so much weaker than electromagnetism is because, as stated in my previous comment, electromagnetism is unifiable with gravity.

    This obviously implies that the same force field is producing gravitation as electromagnetism. We don’t see the background electromagnetism field because there are equal numbers of positive and negative charges around, so the net electric field strength is zero, and the net magnetic field strength is also zero because charges are usually paired with opposite spins (Pauli exclusion) so that the intrinsic magnetic moments of electrons (and other particles) normally cancel.

    However, the energy still exists as exchange radiation. Electromagnetic gauge bosons have 4 polarizations to account for attraction, unlike ordinary photons, which only have 2 and could only cause repulsive forces. The polarizations in an ordinary photon are electric field and magnetic field, both orthogonal to one another and to the direction of propagation of the photon. In an electromagnetic gauge boson, there are an additional two polarizations of electric and magnetic field because of the exchange process. Gauge bosons are flowing in each direction and their fields can superimpose in different ways, so that the four polarizations (two from photons going one direction, and two from photons going the other way) can either cancel out (leaving gravity) or add up to cause repulsive or attractive net fields.

    Consider each atom in the universe at a given time as a capacitor with a net electric charge and electric field direction (between electron and proton, etc) which is randomly orientated relative to the other atoms.

    In any given imaginary line across the universe, because of the random orientation, half of the charge pairs (hydrogen atoms are 90% of the universe we can observe) will have their electric field pointing one way and half will have it pointing the other way.

    So if the line lies along an even number of charges, then that line will have (on average) zero net electric field.

    The problem here (which produces gravity!) is that the randomly drawn line will only lie along an even number of charge pairs 50% of the time, and the other 50% of the time it will contain an odd number of charge pairs.

    For an odd number of charges lying along the line, there is a net resultant equal to the attractive force between one pair of charges, say an electron and a proton!

    So the mean net force along a randomly selected line is only 0 for lines lying along even numbers of atoms (which occurs with 50% probability) and is 1 atom strength (attractive force only) for odd numbers of atoms (which again has a 50% probability). The mean residue force therefore is NOT ZERO but:

    {0 + [electric field of one hydrogen atom]} /2

    = 0.5[electric field of one hydrogen atom].

    There are many lines in all directions but the simple yet detailed dynamics of exactly how the gauge bosons deliver forces geometrically ensures that this residue force is always attractive (see my page for details at http://feynman137.tripod.com/#h etc).

    The other way that the electric field vectors between charges in atoms can add up in the universe is in a simple perfect summation where every charge in the universe appears in the series + – + – + –, etc. This looks totally improbable, but in fact it is statistically just a drunkard’s walk summation, and by the nature of path integrals gauge bosons do take EVERY possible route, so it WILL definitely happen. When capacitors are arranged like this, the potential adds like a statistical drunkard’s walk because of the random orientation of ‘capacitors’, the diffusion weakening the summation from the total number to just the square root of that number because of the angular variations (two steps in opposite directions cancel out, as does the voltage from two charged capacitors facing one another). This vector sum of a drunkard’s walk is equal to the mean vector size of one step (the net electric field of one atom at an instant) times the square root of the number of steps, so for ~10^80 charges known in the observable size and density of the universe, you get a resultant of (10^80)^0.5 = 10^40 atomic electric field units. This sum is always from an even number of atoms in the universe, so the force can be either attractive or repulsive in effect (unlike a residual from a single pair of charges, i.e., an odd number of atoms [one atom], which is always attractive).

    The ratio of electromagnetism to gravity is then ~(10^40) /(0.5), which is the correct factor for the difference in alpha for electromagnetism/gravity. Notice that this model shows gravity is a residual line summation of the gauge boson field of electromagnetism, caused by gauge bosons.
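
    The square-root step in this argument is the standard random-walk result (the RMS resultant of N random +/-1 steps is sqrt(N)); a toy simulation shows the scaling, while extrapolating it to ~10^80 charges is of course the speculative step:

    import math, random

    N, trials = 10_000, 500
    # RMS resultant of N random +/-1 steps, averaged over many trials
    rms = math.sqrt(sum(sum(random.choice((-1, 1)) for _ in range(N)) ** 2
                        for _ in range(trials)) / trials)
    print(f"RMS of {N} steps ~ {rms:.0f}; sqrt(N) = {math.sqrt(N):.0f}")
    # For N ~ 1e80, the same scaling gives sqrt(1e80) = 1e40, the factor used above.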

    The weak nuclear force comes into play here.

    ‘We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances.’ – Professor Asim O. Barut, On the Status of Hidden Variable Theories in Quantum Mechanics, Apeiron, 2, 1995, pp. 97-8. (Quoted by Dr Thomas S. Love.)

    In leptons, there is just one particle core with a polarized vacuum around it, so you have an alpha sized coupling strength due to polarization shielding. Where you have two or three particles in the core all close enough that they share the same surrounding polarized vacuum field out to 10^-15 metres radius, those particles are quarks in mesons (2 quarks) and baryons (3 quarks).

    They have asymptotic freedom at close range due to the fall in the strong force at a certain range of distances, but they can’t ever escape from confinement because their nuclear binding energy far exceeds the energy required to create pairs of quarks. The mass mechanism is illustrated with a diagram at http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html

    When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have 137 shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is: [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) ~ 0.51 MeV. If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass: [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV. The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is [electron mass].[137].n(N+1)/2 ~ 35n(N+1) MeV. (For the electron, the extra polarised shield occurs so this should be divided by the 137 factor.) Here the symbol n is the number of core particles like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest you think this is all ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember we have a mechanism unlike Dalton, and we below make additional predictions and tests for all the other observable particles in the universe, and compare the results to experimental measurements: http://feynman137.tripod.com/ (scroll down to table).

    Comparison with the mass formula, M = [electron mass].[137].n(N+1)/2 = [Z-boson mass].n(N+1)/[3 x 2Pi x 137] ~ 35n(N+1) MeV, shows that this predicts an array of masses for integers n (number of fundamental particles per observable particle core) and N (number of trapped Z-bosons associated with each core). It naturally reproduces the masses of all the observed particles known to within a few percent accuracy.
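
    The arithmetic of that formula is easy to check (this only verifies the numerology as stated, using measured masses as inputs; whether the pattern means anything is the open question):

    import math

    m_e, m_Z = 0.511, 91187.6   # measured electron and Z-boson masses, MeV

    def mass(n, N):
        # the stated formula: M = m_e * 137 * n * (N + 1) / 2 ~ 35 n (N + 1) MeV
        return m_e * 137 * n * (N + 1) / 2

    print(f"n=1, N=2: {mass(1, 2):.1f} MeV (muon: 105.7 MeV)")
    print(f"Z/(2.Pi.137) = {m_Z / (2 * math.pi * 137):.1f} MeV")
    print(f"Z/(1.5 x 2.Pi.137^2) = {m_Z / (1.5 * 2 * math.pi * 137**2):.3f} MeV (electron: 0.511)")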

    So the complete linkage between gravity and nuclear force shows that inertial and gravitational mass is contributed to particles by the vacuum. The link with the Z-boson mass is the key, and the relationship is straightforward: the Z-boson is an analogue of the photon of electromagnetism. Rivero and a co-author noticed, and published in an arXiv paper, a simple relationship between the Z-boson mass and another particle mass via alpha = 137.

    The argument that alpha = 137 is the polarized vacuum shielding factor for the idealised bare core electric charge is given as follows:

    Heisenberg’s uncertainty says [we are measuring the uncertainty in distance in one direction only, radial distance from a centre; for two directions like up or down a line the uncertainty is only half this, i.e., it equals h/(4.Pi) instead of h/(2.Pi)]:
    pd = h/(2.Pi)
    where p is uncertainty in momentum, d is uncertainty in distance. This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process.
    For light wave momentum p = mc, pd = (mc)(ct) = Et, where E is uncertainty in energy (E=mc^2), and t is uncertainty in time.

    Hence, Et = h/(2.Pi)
    t = h/(2.Pi.E)
    d/c = h/(2.Pi.E)
    d = hc/(2.Pi.E)
    This result is used to show that an 80 GeV energy W or Z gauge boson will have a range of 10^-17 m. So it’s OK to take d as not merely an uncertainty in distance, but an actual range for the gauge boson!
    Now, E = Fd implies
    d = hc/(2.Pi.E) = hc/(2.Pi.Fd)
    Hence
    F = hc/(2.Pi.d^2)
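
    Putting numbers in (hbar.c ~ 197.3 MeV.fm is a standard value; reading d as a literal range, as above, is the speculative step):

    import math

    hbar_c = 197.327        # MeV * fm (standard value of hbar*c)
    E = 80_000.0            # MeV, i.e. 80 GeV

    d = hbar_c / E          # fm; identical to h*c/(2.Pi.E)
    print(f"d ~ {d:.2e} fm = {d * 1e-15:.1e} m")   # ~2.5e-18 m, same order as the 1e-17 m above

    alpha = 1 / 137.036
    # ratio of F = hbar*c/d^2 to Coulomb's law F = alpha*hbar*c/d^2 for unit charges
    print(f"F / F_Coulomb = {1 / alpha:.3f}")      # the 137 factor discussed next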

    This force between electrons is 1/alpha, or 137.036, times higher than Coulomb’s law for unit fundamental charges. The reason:

    “… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).” – arxiv hep-th/0510040, p 71.

    Another clear account of vacuum polarization shielding bare charges: Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

    ‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

    ‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

    ‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

    ‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

    ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.’ All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu_zero for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu_zero.(1 + alpha/(2.Pi)), which agrees very well with the very accurate measured value of mu/mu_zero = 1.001…’

    The response of most people to building on these facts is completely bewildering; they don’t care at all, despite the progress made in understanding and making physical predictions of a checkable kind.

    At the end of the day, string theorists and the public don’t want to know the facts; they prefer the worship of mainstream mathematical speculations which don’t risk being experimentally refuted.

  11. Hence, Et = h/(2.Pi)
    t = h/(2.Pi.E)
    d/c = h/(2.Pi.E)
    d = hc/(2.Pi.E)
    This result is used to show that an 80 GeV energy W or Z gauge boson will have a range of 10^-17 m. So it’s OK to take d as not merely an uncertainty in distance, but an actual range for the gauge boson!
    Now, E = Fd implies
    d = hc/(2.Pi.E) = hc/(2.Pi.Fd)
    Hence
    F = hc/(2.Pi.d^2)

    Makes perfect sense to me.

    (Runs screaming from the room.)

  12. A lay person with say an engineering/science/math bachelors is certainly capable of getting the general idea of SU(5) GUT but it does require reading beyond Discover Magazine. I personally think the math of SU(5) or SO(10) or E6 GUTs is great but the problems may be in the details of the physics matched to the math degrees of freedom as things get messed up by dimensional reduction/symmetry breaking. SU(5) is great for bosons, SO(10) is great for spacetime and E6 is great for fermions. John Baez had a fairly recent paper about SO(10) which seemed like a nice math place to restart at. For SU(5) GUT it would also be nice if someone could get the proton decay rate accurately measured (for someone like me with an electrical engineering bachelors, some physics experiments seem to have scary signal-to-noise ratios). Cosmological data like the Pioneer anomaly and dark matter/dark energy/ordinary matter ratios could give nice clues too. Going up math-wise from E6 to quantum gravity could be easier with the bosonic string (Susskind and Smolin have written about a bosonic M-theory). Secretly I’ve been trying to talk about Tony Smith’s model without mentioning Tony’s name. Shhhhhh.

  13. Protons don’t seem to decay, which is, I gather, at the heart of the troubles of particle physics today. Their unification of forces would go swimmingly if protons decayed.

    Personally, I’m glad of it. Not that I wish to distress physicists, but if protons decayed at any observable rate at all, then we’d have to choose between (a) accepting an outer bound on the lifetime of the material universe, or (b) imagining some mechanism by which they could be replaced.

    Since they haven’t been seen to decay, we don’t have to do either. Not on this grounds, anyway. There’s still Big Bang theory to contend with on the cosmological level, and that requires a “heat death.”

    Still, the unexpected hardiness of protons gives us grounds for hope that something is wrong with the regnant theories there, too, and that something akin to the old steady state model might turn out to be right.
