The big physics story at the moment is probably the new measurement of the size of the proton, which is reported in this Nature paper (which does not seem to be on the arxiv, alas). This is kind of a hybrid of nuclear and atomic physics, as it’s a spectroscopic measurement of a quasi-atom involving an exotic particle produced in an accelerator. In a technical sense, it’s a really impressive piece of work, and as a bonus, the result is surprising.
This is worth a little explanation, in the usual Q&A format.
So, what did they do to measure the size of a proton? Can you get rulers that small? They use a particle accelerator to create atoms of “muonic hydrogen,” which are just like hydrogen atoms, but with the electron replaced by a muon, an exotic particle that’s just like an electron but about 200 times heavier. Once the atoms are created, they use lasers to measure the “Lamb shift,” which is the very small energy difference between two levels in hydrogen.
How does that tell you anything about the proton? Aren’t the energy levels related to the orbit of the electron? The Lamb shift is an extremely important phenomenon in the history of quantum physics, because the simplest version of quantum theory predicts that these two levels, the 2S and 2P states, ought to have exactly the same energy. The fact that they don’t, as discovered by Lamb and Retherford in 1947, was the first clear indication of a need to move beyond the simplest version of quantum theory, and led to the development of Quantum Electro-Dynamics (QED).
The size of the Lamb shift depends on a bunch of factors, but in normal hydrogen it is mostly due to modifications of the electron-proton interaction by “virtual particles” in QED. There’s a small contribution due to the size of the proton, though, and that contribution gets a lot bigger when you replace the electron with a muon.
How does the size of the proton matter? Isn’t it, like, 100,000 times smaller than the electron orbit? The proton is, indeed, really tiny, but its size is not zero. And that makes a difference when you look at how these two states behave.
The two states separated by the Lamb shift are states of different angular momentum– the 2S state has zero angular momentum, while the 2P state has one unit of angular momentum. This leads to a significant difference in the wavefunctions of the two states, with the 2S state spending much more time close to the proton than the 2P state does.
Angular momentum? I thought that was just about gyroscopes and ice skaters? What’s it got to do with atoms? In classical physics, the angular momentum of a moving object depends on two things: how fast it’s moving, and how far it is away from the central point. Two objects moving at the same speed can have very different angular momenta, if one is orbiting close in to a central point while the other is much farther out. The object that is farther away has more angular momentum.
The states of electrons in atoms are not really like planetary orbits, but much of the physics carries over in a conceptual way. The 2S state has zero angular momentum, while the 2P state has one unit of angular momentum, and that means that the 2P state is, on average, farther away from the center of the atom than the 2S state (the two states have the same “speed” because they’re both 2 states). In fact, if you look at the probability of finding the electron exactly at the center of the atom, where the proton is, the probability is zero for the 2P state, but non-zero for the 2S state.
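To see this concretely, here is a quick sketch using the standard textbook hydrogen wavefunctions for the n = 2 states (non-relativistic, point nucleus, Bohr radius set to 1; this is just an illustration, not the muonic-hydrogen calculation):

```python
import numpy as np
from scipy.integrate import quad

# Textbook hydrogen radial wavefunctions for the n = 2 states, with the Bohr radius set to 1
# (non-relativistic, point nucleus; purely illustrative).
def R_2S(r):
    return (1.0 / (2.0 * np.sqrt(2.0))) * (2.0 - r) * np.exp(-r / 2.0)

def R_2P(r):
    return (1.0 / (2.0 * np.sqrt(6.0))) * r * np.exp(-r / 2.0)

print("R_2S(0) =", R_2S(0.0))   # ~0.71: finite at the origin
print("R_2P(0) =", R_2P(0.0))   # 0.0: vanishes at the origin

# Probability of finding the electron inside a tiny sphere around the proton (r < 0.001 Bohr radii):
for name, R in [("2S", R_2S), ("2P", R_2P)]:
    prob, _ = quad(lambda r: R(r) ** 2 * r ** 2, 0.0, 1e-3)
    print(name, prob)   # the 2S probability is larger by many orders of magnitude
```

The 2S wavefunction is finite at the origin while the 2P wavefunction vanishes there, and the probability of being very close to the proton is correspondingly enormously bigger for the 2S state.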
When you take the finite size of the proton into account, that means that an electron in the 2S state will spend more of its time close enough to the proton to see that it’s not a point, but a small sphere of charge (more or less). That changes the interaction energy between the electron and the proton, which in turn changes the total energy of the state. This leads to a shift of the 2S energy compared to the 2P energy, and contributes to the Lamb shift.
So, the Lamb shift is due to the finite size of the proton? I said it contributes to the shift. It’s actually a really tiny contribution in hydrogen– the vast majority of the shift measured by Lamb and Retherford, and in numerous experiments since, is due to other effects. The proton size is part of the shift, though, and is actually one of the main sources of uncertainty in current theoretical calculations of the Lamb shift in hydrogen.
Why is it so small? Because, as you noted earlier, the proton is around 10^-15 meters across, while the electron orbits are around 10^-10 meters across. The region where the proton size matters for the interaction is so small that it’s almost negligible. It’s only because modern spectroscopic methods are so utterly amazing that you can see any contribution to the shift at all.
So this is where the muons come in? Right. A muon is roughly 200 times the mass of an electron, which means its orbit is roughly 200 times smaller than that of an electron. Which leads to a corresponding increase in the size of the proton size contribution. When you replace the electron with a muon, and measure the energy splitting between the 2S and 2P states of this muonic hydrogen, the Lamb shift analogue has a much larger contribution from the proton size. If you know the rest of the effects (and we understand QED pretty well), then you can work out the size of the proton by measuring the size of the shift.
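To get a feel for the size of that enhancement, here is a rough scaling estimate: the leading proton-size shift of an S state goes like the probability density at the origin, which scales as the cube of the reduced mass of the orbiting particle. This is just the scaling argument, not the real QED calculation, and the masses below are rounded:

```python
# Rough scaling estimate, not the real QED calculation: the leading proton-size shift of an
# S state goes like the probability density at the origin, |psi(0)|^2, times <r_p^2>, and
# |psi(0)|^2 scales as the cube of the reduced mass of the orbiting particle.
m_e, m_mu, m_p = 0.511, 105.66, 938.27   # approximate masses in MeV/c^2

mr_e = m_e * m_p / (m_e + m_p)      # reduced mass in ordinary hydrogen
mr_mu = m_mu * m_p / (m_mu + m_p)   # reduced mass in muonic hydrogen

print("orbit shrinks by a factor of about", round(mr_mu / mr_e))                    # ~186
print("proton-size shift grows by a factor of about", round((mr_mu / mr_e) ** 3))   # ~6.4 million
```

Once you account for the reduced mass, the orbit shrinks by a factor of roughly 186 (hence “roughly 200”), and the proton-size contribution grows by a factor of several million.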
How do they measure the size of the shift? They use laser spectroscopy, and take advantage of the fact that the energy shift is vastly larger than the Lamb shift in ordinary hydrogen (again, because of the larger mass and smaller orbit). The Lamb shift in ordinary hydrogen corresponds to a frequency in the microwave range of the spectrum, but in muonic hydrogen it falls in the far infrared, at around 6 microns. That’s an inconvenient wavelength, but one that can be generated with pulsed lasers.
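To put rough numbers on that (back-of-the-envelope, with rounded constants):

```python
# Back-of-the-envelope: where does a ~6 micron transition sit in frequency and energy?
h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

wavelength = 6e-6                       # roughly 6 microns, as quoted above
frequency = c / wavelength              # about 5e13 Hz, i.e. ~50 THz
energy_meV = h * frequency / eV * 1e3   # about 200 meV
print(round(frequency / 1e12), "THz,", round(energy_meV), "meV")
```

So the transition sits near 50 THz, or about 0.2 eV: awkward territory, but reachable with pulsed laser systems.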
So, they just shine the laser in, and see if it gets absorbed? Actually, they shine the infrared laser in, and look for x-rays coming out. The way it works is that a small fraction of the muonic hydrogen they produce ends up in the 2S state. That state has a ridiculously long lifetime– limited mostly by the fact that the muons only last about two microseconds before they decay– so once they’re in that state, they stick around. A short time after the atoms are created, they blast in a pulse of laser light with its frequency tuned close to the frequency corresponding to the splitting between the 2S and 2P states. The 2P state has a very short lifetime, so any atoms that get excited by the laser will decay very quickly, and emit an x-ray in the process (because the energy difference between the ground state and the 2P state is enormous, thanks to the heavy muon).
When the laser is tuned to exactly the right frequency, they see lots of x-rays from decaying atoms. When it’s a little bit off, the number of x-rays drops off dramatically. Then they just need to measure the laser frequency, and they get the Lamb shift directly. And with a bit of math, they can convert that to a measurement of the proton size.
And this is a good measurement? It’s a phenomenally good measurement. The uncertainty they report in the size is just 0.00067 femto-meters, compared to 0.0069 femto-meters for the best previous measurement. That’s a full order of magnitude better, which is an impressive jump for something this tricky.
What’s the catch? The catch is that their result doesn’t agree with the previous value. The best previous measurement gives the size as 0.8768 fm, while this measurement gives 0.84184 fm. Even taking the uncertainties into account, these do not agree with each other. The difference is about five times the uncertainty, which just shouldn’t happen.
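For what it’s worth, you can check the “about five times the uncertainty” statement directly from the two values quoted above:

```python
# Sanity check on "about five times the uncertainty", using the two values quoted above.
r_old, s_old = 0.8768, 0.0069      # previous best value and its uncertainty, in fm
r_new, s_new = 0.84184, 0.00067    # muonic hydrogen value and its uncertainty, in fm

difference = r_old - r_new                      # about 0.035 fm
combined_sigma = (s_old**2 + s_new**2) ** 0.5   # dominated by the older, larger error bar
print(difference / combined_sigma)              # ~5: the "five times the uncertainty" discrepancy
print(100 * difference / r_old)                 # ~4%: small in absolute terms
```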
So, protons are way smaller than we thought? Yes and no. This measurement suggests that the size is smaller than that measured in previous experiments, but the difference isn’t all that big in absolute terms. And it’s possible that there’s some effect they haven’t accounted for properly in making this measurement.
What sort of effect? Well, if they knew that, they would’ve accounted for it. There’s a fearsome amount of theory going into the conversion from Lamb shift to proton size, so it’s possible that something there is a little bit off. It’s also possible that there’s something wrong experimentally– this is the first time anybody has ever done laser spectroscopy of muonic hydrogen, so they might’ve overlooked something.
Or it could be completely new physics.
What’s your guess as to the reason? I’m inclined to think it’s in the theory somewhere, but that’s mostly because I’m an experimentalist by inclination and training. There’s an awful lot of theoretical stuff going into the conversion, and it wouldn’t surprise me if six months from now, somebody discovers a small tweak that brings this measurement into line with the others, or brings the other ones in line with this measurement. It’s a whopping huge error as such things go– 64 times the uncertainty they think is associated with the theory– but I wouldn’t be too surprised if that turns out to be unduly optimistic. They mention some other determination that gives results more in line with their result, which may point to something.
The experiment that they’re doing here is really pretty clean, and there aren’t too many factors that need to be accounted for. They seem to have most of those under control, at least from the uncertainty estimates they give for the obvious possible experimental shifts.
“New physics” is obviously the most exciting possibility, here, but it would be really surprising. The physics going into this is really just QED, which is one of the best-tested theories in the history of science. It would be really surprising if QED turned out to be that far wrong, though I’m sure the hep-th arxiv will see a flood of papers proposing one scheme or another for coming up with this kind of result (a new kind of dark-matter particle that couples only to muons, not electrons, or some such).
Whatever it is, it’ll be interesting to see how this plays out.
Pohl, R., Antognini, A., Nez, F., Amaro, F., Biraben, F., Cardoso, J., Covita, D., Dax, A., Dhawan, S., Fernandes, L., Giesen, A., Graf, T., Hänsch, T., Indelicato, P., Julien, L., Kao, C., Knowles, P., Le Bigot, E., Liu, Y., Lopes, J., Ludhova, L., Monteiro, C., Mulhauser, F., Nebel, T., Rabinowitz, P., dos Santos, J., Schaller, L., Schuhmann, K., Schwob, C., Taqqu, D., Veloso, J., & Kottmann, F. (2010). The size of the proton. Nature, 466(7303), 213–216. DOI: 10.1038/nature09250
One thing I could not figure out: what is the size of the proton compared to when it is declared “smaller”? The same measurement done previously, or the expected theoretical result, or a completely different definition and measurement of the “size” of the proton?
(The last option would disturb me less, sizes of complicated objects are ambiguous things, and could very well be probe-dependent.)
The key point in the finite size contribution to the Lamb shift is not just that the lepton spends time close to the proton, but that it spends time inside the proton. Inside the proton, the proton’s electric field weakens, and so leptons that can penetrate the proton are slightly less strongly bound than they would be by a point charge.
People have been doing measurements like this for quite some time, and the precision is consistently impressive. However, there is usually substantial disagreement between results. While the experiments tend to be interpreted in a model that considers the proton to be basically a uniform ball of charge (sometimes plus some corrections), the large discrepancies suggest that such a model is insufficient, even at very low atomic energies.
Thanks for this excellent summary 🙂
(Seems you forgot to close a bold-face tag somewhere)
More not-so-well-informed commentary:
I spent a while yesterday sifting through the papers they cite, but I’m still trying to wrap my head around the sizes of various corrections. For instance, there’s a fair amount of literature on how the proton polarizability affects the Lamb shift. But the discrepancy they find is huge, not just in statistical significance but in absolute terms (about 4%), so my initial thought, that it’s hard to deconvolve the effects of other QCD corrections (polarizability, vacuum polarization), doesn’t seem right. Those effects are smaller. If there’s a problem in the theory, it seems like it has to be in one of the easier parts of the theory.
My guess is that this experiment is right and the old experiments on ordinary hydrogen got their error bars wrong. (The e-p scattering experiments are even fishier, since they work in a regime where the charge radius times the momentum is of order 1, so disentangling the different terms in the form factor is difficult.)
There’s almost no room for new physics to explain this.
This post is just one “bunnies made of CHEEESE?” away from being a conversation with your dog. Which is to say, interesting, easy to read and very informative.
Moshe: they’re all measurements of the proton charge radius (i.e., the EM form factor is F(q^2) = 1 – r^2 q^2 /6 + O(q^4), and they’re extracting the r^2 coefficient). The discrepancy is between this measurement and previous measurements (all of which are much less precise). The preferred value that CODATA tabulated, which disagrees by 5 sigma, is mostly based on measurements of ordinary hydrogen, where it’s understandably difficult to see the effect of the charge radius, since the electron’s orbit is so big. There are also e-p scattering results that give a value 3 sigma higher.
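(For reference, the standard textbook definitions behind this exchange, not anything specific to the paper: the low-momentum expansion of the electric form factor, the mean-square charge radius as its slope at zero, and the relation to a uniformly charged ball of radius R.)

```latex
G_E(q^2) = 1 - \frac{\langle r_p^2 \rangle}{6}\, q^2 + \mathcal{O}(q^4),
\qquad
\langle r_p^2 \rangle = -6 \left. \frac{dG_E}{dq^2} \right|_{q^2 = 0},
\qquad
\langle r^2 \rangle_{\mathrm{uniform\ ball}} = \frac{3}{5} R^2 .
```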
Please excuse my biologist’s perspective here, but …
The anthropic fine tuning argument is usually posed as “If the fundamental constants were even slightly off from what we observe, then universe as we know it couldn’t exist.”
Doesn’t the fact that we could have 4% slop in the size of the proton, yet still have a standard theory that works so well for so many things, mean that all of the shouting about anthropic fine tuning is utterly wrong?
The key point in the finite size contribution to the Lamb shift is not just that the lepton spends time close to the proton, but that it spends time inside the proton. Inside the proton, the proton’s electric field weakens, and so leptons that can penetrate the proton are slightly less strongly bound than they would be by a point charge.
This is what I get for writing these things at 11pm. The first “close to the proton” ought to be “close to and even inside the proton.”
The anthropic fine tuning argument is usually posed as “If the fundamental constants were even slightly off from what we observe, then universe as we know it couldn’t exist.”
Doesn’t the fact that we could have 4% slop in the size of the proton, yet still have a standard theory that works so well for so many things, mean that all of the shouting about anthropic fine tuning is utterly wrong?
No.
The fundamental constants that people talk about when they talk about “fine-tuning” are typically dimensionless ratios or combinations of other constants– the ratio of proton to electron masses, or the combination of the electron charge, Planck’s constant, and the speed of light known as the “fine structure constant.” Small changes in these would lead to enormous changes in the structure of atoms and nuclei, with disastrous consequences for (our sort of) life.
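(For concreteness, the standard definition of that combination is the usual dimensionless one:)

```latex
\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137}
```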
The size of the proton is not a fundamental constant in this category. It almost never comes up (note that Moshe, who is a terrifically smart particle theorist, isn’t even sure what’s meant by the term– that’s how unimportant it is in the grand scheme of things), and in the limited number of situations where it does matter, it leads to tiny, tiny shifts that only turn up in precision measurements.
It is a property of a fundamental(-ish) particle, but not one that enters into the fine-tuning arguments. Thus, uncertainty in the proton charge radius doesn’t affect the fine-tuning argument at all.
While the experiments tend to be interpreted in a model that considers the proton to be basically a uniform ball of charge (sometimes plus some corrections), the large discrepancies suggest that such a model is insufficient, even at very low atomic energies.
Admittedly, I haven’t studied the theory behind this experiment in any detail, but I’m guessing that they don’t actually assume a uniform sphere. More likely they assume a spherically symmetric (or at least symmetric when time-averaged) charge distribution and calculate a moment of the distribution. That moment can be used to infer a length scale (probably a Radius of Gyration, in formal terms) but that’s a bit different from the radius of a uniform sphere. That’s how perturbation calculations usually work.
@Alex — Mathematically, it doesn’t really matter whether you actually assume a uniform ball of charge, but that’s usually how the people doing these experiments describe their results to general physics audiences. The form factor that they actually end up studying is written out in a comment above. However, when they quote a radius, I believe that is the radius of a uniformly charged sphere that has the correct form factor.
I suppose the key point is not that the measurement of proton “size” has much intrinsic significance, but that the apparent discrepancy may reveal a problem with QED. At present, this seems like the least likely outcome.
With regard to fine-tuning, there are also many other absolute physical constants upon whose values the existence of (our sort of) life is rather insensitive. In fact, without wanting to derail the thread, the distribution of absolute fundamental constants, and therefore any dimensionless combination of them, can be shown to follow a 1/f law irrespective of any system of units. This is exactly what would be expected for a set of “fundamental” numbers that are unrelated to each other – they are randomly distributed.
One part of this does not surprise me. Measurements of the charge distribution of the proton were done fairly crudely by the highest energy physics community when those measurements were in the first-ever win-a-Nobel territory at SLAC. It is rarely fashionable to improve on measurements like that, but I’m pretty sure that the Bates and Jefferson labs did runs with polarized beams and H targets at electron energies that probe well below the sizes seen in this experiment. Pretty sure, but not confident.
So is this a claim that the electron scattering data are wrong because the electron alters the proton (quark) charge distribution? If so, they need to explain how they modeled the larger effect of a muon on the quark distribution.
Or is it a claim that there is a systematic error in multiple sets of experimental data taken at different laboratories, perhaps because those experiments were done casually without great attention to that sort of detail?
I haven’t looked at the paper, so I don’t know how they parametrized the proton charge distribution for their calculation and/or if it agrees in detail with what is known from experiments done by the particle and nuclear physics community. That would be the first question I would ask. There are some commonly used shapes that only agree on the first non-trivial term in the q^2 series.
@10: The “radius of the proton” normally means the rms radius of the charge distribution determined from the electric form factor. Qualifiers are used when discussing the radius determined from the magnetic form factor.
I should have had one other item on my list above, which is the possibility that the e-p measurements are OK (meaning consistent) but sloppy and the real discrepancy is between the atomic H and muonic H calculations of the radius, perhaps magnified by the imprecision of the charge form factor. The mention of new e-p experiments (or analysis of old ones that were never published?) suggests it might be a combination of all of these.
Does the energy state of the proton contribute in any way to the measurement, or more specifically to the error margin?
I should have had one other item on my list above, which is the possibility that the e-p measurements are OK (meaning consistent) but sloppy and the real discrepancy is between the atomic H and muonic H calculations of the radius, perhaps magnified by the imprecision of the charge form factor. The mention of new e-p experiments (or analysis of old ones that were never published?) suggests it might be a combination of all of these.
I think this is closest to the real story. The reference to other experiments/ theory is this sentence:
“Dispersion analysis of the nucleon form factors has recently [ref. 32] also produced smaller values of r_p = [0.822…0.852] fm, in agreement with our accurate value.”
which points to this paper which appears to be some sort of re-analysis of other people’s scattering data. The abstract contains the following:
“We simultaneously analyze the world data for all four form factors in both the spacelike and timelike regions and generally find good agreement with the data. We also extract the nucleon radii and the πNN coupling constants. For the radii, we generally find good agreement with other determinations with the exception of the electric charge radius of the proton, which comes out smaller.”
It’s not a completely solid explanation– the Lamb shift measurements in ordinary hydrogen are the culmination of a series of many steadily improving measurements over a period of many years, so saying “well, those should have bigger error bars” isn’t a trivial matter. The experiments are really pretty solid (I know some people who did a Lamb shift measurement, and they were very careful and thorough). It’s possible that there’s some factor in the theoretical calculation used to convert from Lamb shift to proton radius in those measurements, but those calculations are fairly central to QED, and it would be pretty surprising for them to be that far wrong.
But as improbable as that may be, that may well be more likely than new physics at the necessary level to explain this discrepancy.
This statement on the blog is quantum mechanically incorrect: “…the 2P state is, on average, farther away from the center of the atom than the 2S state.” In fact, it is the reverse (from a semi-classical point of view, one would say that the orbit is becoming more circular as the angular momentum increases).
One clarification question — you cite two numbers. The current study, with muons, comes up with a size of 0.84184±0.00067 fm. Previous studies come up with a size of 0.8768±0.0069 fm. Those previous studies — they were using standard electron Hydrogen atoms, or other methods? Or are those previous studies making the same measurement, with muonic Hydrogen?
It’s not a completely solid explanation– the Lamb shift measurements in ordinary hydrogen are the culmination of a series of many steadily improving measurements over a period of many years, so saying “well, those should have bigger error bars” isn’t a trivial matter. The experiments are really pretty solid (I know some people who did a Lamb shift measurement, and they were very careful and thorough).
I don’t doubt the accuracy of the hydrogen Lamb shift measurement, just of the inferred value of the charge radius extracted from it.
It’s possible that there’s some factor in the theoretical calculation used to convert from Lamb shift to proton radius in those measurements, but those calculations are fairly central to QED, and it would be pretty surprising for them to be that far wrong.
It’s not so clear to me; there are a lot of obscure higher-order effects in the literature that I don’t trust to be so well-understood, but this does look awfully large to be attributed to any of those. On the other hand, ordinary hydrogen really isn’t very sensitive to the proton size (the proton radius divided by the Bohr radius is something like 10^-5), so pretty small corrections can matter.
Another issue that I’m not sure about is that the proton electric form factor is actually a little ill-defined since the photon is massless; in the limit where you turn off the QED gauge coupling, it’s a well-defined quantity (which in principle can be computed by lattice QCD, though the latest numbers I’ve found have error bars too big to be of interest here), but there are IR divergences in general. Presumably there’s a standard way of dealing with this, but I haven’t dug it up yet.
The prior studies that got the larger value are studies of normal hydrogen, not muonic hydrogen. They’re using laser spectroscopy of ordinary atoms, with no accelerators needed.
Thanks onymous, your comment and the rest of the discussion make it clear what are those earlier results implicit in the original discussion, and their relations to the new measurement.
Because muon orbits are smaller (because muons are more massive), and because muons spend more time inside the proton, doesn’t the negative charge of the muon while it is inside the proton make the measured charge radius of the proton smaller (while the muon is inside it)?
Sorry I didn’t have time to read all the comments, but I note and query: isn’t the size of the proton more than an abstraction that things like the Lamb shift would reveal, in the sense of mattering fairly literally for packing purposes, such as the density of neutron stars? (And, more subtly, for cross-sections for absorption and the like?) It is at least somewhat like the effective physical radii of atoms and molecules, no?
Was the effect that served for the spectroscopic determination of deuterium in 1932 considered?
http://www.marts100.com/deuterium.htm
While in grad school, I drew a short lived comic strip about a sheep that drove a race-car called “Lamb Shift: Racing Sheep”.
It wasn’t funny then either.
I am puzzled a bit by the error bar for the new, smaller rms proton charge radius claimed in the paper. Actually, this radius r is calculated from the relation
ΔE_Lamb = E_0 − A r² + B r³,
where ΔE_Lamb is the measured Lamb shift, and the constants E_0, A, and B result from QED calculations. There is a freely available supplement to the paper which explains how these constants have been obtained, and which QED terms have been included in the calculation.
Now, while the paper discusses the error bars of the measured ΔE_Lamb and of the constant E_0 (which has error bars due to uncertainties by approximations etc…), the constants A and B seem to be used without error bars. But, actually, the supplement quotes two calculations of A which differ at the per-mille level, so there still seems to be a corresponding ambiguity in the calculations.
Now, I am wondering, how can one ignore this uncertainty (it seems to be ignored in the paper, at least), which is one order of magnitude larger than the uncertainties in ΔE_Lamb and E_0, in the calculation of r?
What am I missing here?
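To make the question concrete, here is a minimal sketch of the inversion from the measured splitting to r. The coefficients and the measured value below are approximately the ones quoted in the paper and its supplement; treat them as illustrative numbers rather than authoritative ones:

```python
# Minimal sketch: solve dE = E0 - A*r**2 + B*r**3 for the proton rms charge radius r (in fm).
# The numbers are approximately those quoted in the paper and its supplement; illustrative only.
from scipy.optimize import brentq

E0, A, B = 209.9779, 5.2262, 0.0347   # meV, meV/fm^2, meV/fm^3 (approximate)
dE_measured = 206.2949                # meV, approximate measured 2S-2P splitting

radius = brentq(lambda r: E0 - A * r**2 + B * r**3 - dE_measured, 0.5, 1.2)
print(radius)   # about 0.84 fm

# How much does a per-mille shift in A (the ambiguity mentioned above) move the answer?
radius_shifted = brentq(lambda r: E0 - 1.001 * A * r**2 + B * r**3 - dE_measured, 0.5, 1.2)
print(radius_shifted - radius)   # roughly -4e-4 fm, comparable to the quoted 0.00067 fm error bar
```

At this crude level, a per-mille change in A moves the extracted radius by roughly 0.0004 fm, which is indeed comparable to the quoted 0.00067 fm error bar, so it does seem like a fair question.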
The tau, muon, and electron all operate in a dynamic parameter, so that each time you use these to measure the proton, it should not be the same (now that we have the ability to measure at such a critical value). The dynamics of the leptons should create a variable within parameters that we are now able to see.
It looks like there will be a high and low value to the proton using the method of laser spectroscopy, what may be needed is more confinement of the leptons utilised.
But then what do I know.
Question from a dopeydollop who don’t know much but is interested in this and wants to know more.
According to the searches I’ve done so far, the proton size can be calculated from the Rydberg constant. Is this correct? Are there any equations that give the proton size, and if there are, where can I search to find these equations? Thanks.
@Tex: The anthropic fine tuning argument is usually posed as “If the fundamental constants were even slightly off from what we observe, then universe as we know it couldn’t exist.”
I *really* don’t like that phrasing. A better one is “If the fundamental constants were even slightly off from what we observe, then we wouldn’t be here to observe them.”
Universes that don’t have physical constants enabling conscious observers to evolve will never be witnessed by conscious observers. And any conscious observers that evolve in a universe that permits them will be precisely those possible under whatever the extant constants are.
IOW: It isn’t about how fine tuned the hole is to the puddle but how the puddle fills whatever hole happens to exist.
@Jim (#18): OK, technically you’re correct: if you compute [r] (brackets stand for expectation value) for Psi_200 and Psi_210, you get 6a_0 for the 2S state and 5a_0 for the 2P state (assuming I did my arithmetic correctly). But if you’re only interested in time spent near the proton, you don’t want to let the radial integral go to infinity. If you cut it off at some small value, say a_0, then the probability of finding the electron at r less than a_0 is much bigger for the 2S state than for the 2P state, which was the point.
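A quick numerical check of that cutoff statement, using the same textbook n = 2 hydrogen wavefunctions as in the sketch near the top of the post (Bohr radius set to 1; purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Textbook n = 2 hydrogen radial wavefunctions, Bohr radius set to 1.
R_2S = lambda r: (1.0 / (2.0 * np.sqrt(2.0))) * (2.0 - r) * np.exp(-r / 2.0)
R_2P = lambda r: (1.0 / (2.0 * np.sqrt(6.0))) * r * np.exp(-r / 2.0)

# Probability of finding the electron inside r < 1 Bohr radius:
def prob_within(R, r_max=1.0):
    return quad(lambda r: R(r) ** 2 * r ** 2, 0.0, r_max)[0]

print("2S:", prob_within(R_2S))   # about 0.034
print("2P:", prob_within(R_2P))   # about 0.004, roughly nine times smaller
```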
@17: Thanks for looking that up in the paper and including the link to reference 32.
That reference is really interesting. The abstract suggests that theoretical (based on decades of non-leptonic scattering data), rather than experimental (that is, a WAG about where to draw the background) treatment of the continuum introduced a non-trivial systematic error in the e-p data. This does not surprise me based on my past experience with (mumble). I’m going to have to go read the entire article.
Actually it does. An entire class of energy devices relies on the movement of ions (Batteries, Fuel cells etc) including protons. The fastest moving ions are the protons due to their small size.
Re: Comment #35
Proton mobility in a battery / fuel cell / chemical process has very very very very little to do with the finite size of its charge distribution. It has a lot to do with the electron(s) around it and its mass.
Question: How is the “size” of a proton defined exactly? I was under the impression that all elementary particles are point-size under the Standard Model. However, the proton is not an elementary particle, and is composed of three quarks. So am I correct in thinking that the “size” of a proton is related to the distance between the three quarks? Isn’t that a matter of QCD rather than QED?
@Yoni (Re: #31)
It is clear that only S states have non-zero density at the center-of-mass of the nucleus-electron pair (still within the nucleus), and that, within a given electronic level, density sufficiently near the nucleus decreases as angular momentum increases.
It is also interesting to ask at what distance r will a 2S state have the same probability of being within a sphere of radius r as a 2P state. That distance is 1 bohr radius, the most likely distance in the 1S state.
I’m just now getting around to reading this and thought it might be of some interest here.
http://tgd.wippiespace.com/public_html/paddark/shrinkingproton.pdf
From comment #10,
“It is a property of a fundamental(-ish) particle, but not one that enters into the fine-tuning arguments. Thus, uncertainty in the proton charge radius doesn’t affect the fine-tuning argument at all.”
Just a layman here but, could this have implications for the fine structure constant? I would have thought a smaller charge distribution field would affect the EM coupling strength??
Pears And Apples
Electrons-scattering and muon-Lamb-shift
“Size of a proton?”
Really small. But physicists can’t agree on one number.
http://www.sciencenews.org/view/generic/id/67759/title/Size_of_a_proton%3F_Really_small
In order to agree on a number, physicists should first agree on the proton’s “functional size” they wish to measure.
The obtained electrons-scattering and muon-Lamb-shift sizes may BOTH be right, reflecting two different “functional sizes” of the proton.
Dov Henis
(comments from 22nd century)