Somebody asked a question at the Physics Stack Exchange site about the speed of light and the definition of the meter that touches on an issue I think is interesting enough to expand on a little here.
The questioner notes that the speed of light is defined to be 299,792,458 m/s and the meter is defined to be the distance traveled by light in 1/299,792,458 seconds, and asks if that doesn’t seem a little circular. There are actually three relevant quantities here, though, the third being the second, which is defined as 9,192,631,770 oscillations of the light associated with the transition between the hyperfine ground states of cesium. As long as you have one of these quantities nailed down to a physical reference, you are free to define the other two in terms of that one.
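To see how the chain of definitions hangs together, it helps to run the numbers. Here's a minimal Python sketch, using only the defined values quoted above, showing how fixing the second and the speed of light pins down the meter:

```python
# Defined SI quantities (exact by definition):
cs_hz = 9_192_631_770      # cesium hyperfine frequency: oscillations per second
c = 299_792_458            # speed of light, in m/s

# With the second and c fixed, the meter is whatever distance light covers
# in 1/299,792,458 of a second.  For a sense of scale, in one cesium
# oscillation period light travels:
one_tick = 1 / cs_hz       # seconds per oscillation (~1.1e-10 s)
print(c * one_tick)        # ~0.0326 m, i.e. about 3.3 cm per cesium "tick"
```

The only physical reference in the chain is the cesium atom; everything else is bookkeeping.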
This strikes a lot of people as odd. The image most people have of physical standards would have both the meter and the second tied to some sort of physical reference, with the speed of light determined in terms of those two standards. And, in fact, that’s how things used to be– if you look at the history of the definition of the meter, you see that it was tied to a physical standard until 1983. So why the change?
The reason for the change is basically that we can do a much better job of measuring time than position, thanks to spectroscopic techniques developed in the 1940’s. This wasn’t always the case– the second was originally defined in terms of astronomical observations, with the value defined as 1/31,556,925.9747 of the duration of the year 1900. This ended up being a little problematic, though, because the motion of astronomical objects changes measurably over time.
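As a quick sanity check on that odd-looking number (just back-of-the-envelope arithmetic, nothing official):

```python
# 1/31,556,925.9747 of the year 1900 should work out to roughly the familiar day-based second.
seconds_in_year_1900 = 31_556_925.9747
print(seconds_in_year_1900 / 86_400)   # ~365.2422 days per year, as expected
```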
With the ultra-precise spectroscopic techniques developed by Rabi and Ramsey in the 1940’s and 50’s, it became possible to make clocks using atoms as the time reference. This offers a big advantage over astronomical motion, in that the quantum mechanical rules that determine the energy levels of an atom do not (as far as we can tell) change in time. Thus, every cesium atom in the universe has exactly the same energy level structure as every other cesium atom in the universe, making them ideal reference sources.
By the 1980’s when the definition of the meter was changed, atomic clocks were accurate to about one part in 10^13. That’s vastly better than the then-current physical standard for the meter, namely “1,650,763.73 vacuum wavelengths of light resulting from unperturbed atomic energy level transition 2p10-5d5 of the krypton isotope having an atomic weight of 86.” At that point, our ability to measure time exceeded our ability to measure length by enough that it was worth changing standards to take advantage of the precision of atomic clocks. Since relativity tells us that all observers will measure exactly the same speed of light, that gives us a very nice way of connecting time and distance measurements. Thus, the speed of light was defined to have a fixed value, and the definition of length was tied to the definition of time.
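For what it’s worth, you can recover the krypton wavelength from that old definition with one line of arithmetic (illustrative only):

```python
# The old meter definition implies a specific vacuum wavelength for the krypton-86 line:
wavelengths_per_meter = 1_650_763.73
print(1 / wavelengths_per_meter * 1e9)   # ~605.78 nm, the orange-red krypton-86 line
```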
Of course, atomic clocks have only gotten better since the 1980’s. The current best atomic clocks, based on laser-cooled atoms, are good to several parts in 10^16, or, in the way such things are usually quoted, about one second in sixty million years. We can’t come close to measuring lengths with that sort of precision, except by converting them to time measurements.
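Those two ways of quoting the accuracy are the same statement, as a little arithmetic shows (rough numbers, assuming a 365.25-day year):

```python
# "One second in sixty million years" expressed as a fractional error:
seconds_per_year = 365.25 * 86_400
print(1 / (60e6 * seconds_per_year))   # ~5e-16, i.e. several parts in 10^16
```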
The other obvious question to come from this subject is “Why are all these things defined with such screwy values? If the speed of light is going to be a defined quantity, why not make it exactly 3×10^8 m/s, and spare us having to type ‘299,792,458’ all the time?” The answer is, basically, inertia. The physical standards for length and time had been in use for so long, and used in so many precision measuring instruments, that the expense of changing the base units would’ve been ridiculously huge. So, the definitions were chosen to be as close to the previously existing values as possible.
In an ideal world, it would’ve been better to define the speed of light as exactly 300,000,000 m/s, and the second as 10,000,000,000 oscillations of the light associated with the cesium ground state splitting, and end up with a meter that is about 9% longer than the one we currently use. In addition to making life simpler for physics students all over the world, it would be an elegant solution to the problem of the US using a different system of units than the rest of the world. If everybody had to change all their units over to a new system, it wouldn’t be such an imposition on us… Sadly, ideal worlds only exist in physics problems and economics textbooks, so we’re stuck with the current screwy definitions for the foreseeable future.
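The 9% figure comes from comparing the two sets of numbers; here’s the arithmetic for the hypothetical redefinition described above (hypothetical being the key word):

```python
# Current defined values vs. the "nicer numbers" version floated above:
c_old, c_new = 299_792_458, 300_000_000          # m/s
cs_old, cs_new = 9_192_631_770, 10_000_000_000   # cesium oscillations per second

new_second = cs_new / cs_old            # the new second, measured in old seconds (~1.088)
new_meter = c_old * new_second / c_new  # the new meter, measured in old meters
print(new_meter)                        # ~1.087, i.e. roughly 9% longer than the current meter
```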
This is all well and good, but until we have a metric week (and month) I’m not going to be happy.
The other issue is that distance and time are relative, but the speed of light is constant.
That doesn’t go away when you tie the definition of the meter to the speed of light, though. A moving observer will get a different value for the length of an object than an observer who is at rest relative to the object, even if both of them use the speed of light as the basis for their length measurements.
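For concreteness, here’s the standard length-contraction formula with made-up numbers (the 0.6c relative speed is just an example):

```python
import math

c = 299_792_458            # m/s, same for both observers
L_rest = 1.0               # a meter stick, measured in its own rest frame
v = 0.6 * c                # relative speed of the moving observer

print(L_rest * math.sqrt(1 - (v / c) ** 2))   # 0.8 m: same c, different measured length
```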
Just join the club of theoretical physicists and put c=h=G=k=…=1. Hahaha!
One other element, beyond the measurement being better, is politics: developed countries are not prone to adopting a standard if they are not able to produce or obtain a reliable local copy of it. Atomic frequency standards are commercially available and relatively cheap on the scale of national lab budgets. Really good ones are custom made, so they are more expensive and fewer countries pursue building them. A change to a better standard won’t occur if the new devices are still research-grade and only one or two countries are able to build them, which accounts for much of the time between realizing that atomic time was better and finally adopting it as the standard.
I think the confusion may be about why you only need to measure one quantity (either length or time) and can use it to define the other two. If in your units the speed of light is 299,792,458 m/s by definition, don’t you still have to measure it? Is there a non-circular way to measure it?
The answer might clarify why sticking with c=1 gives a more transparent picture (free of human conventions), and why the expression “varying speed of light” is problematic.
Which is why we should measure distance in feet, and time in nanoseconds…
The issue is not getting numbers of order one, the issue is not being bogged down with pseudo-questions, like why is the speed of light what it is. The world in which the speed of light is doubled is exactly identical to our world.
Or, as another way to emphasize it is all human convention, make some absurd choice. Got it. Not a bad idea…
“In your units the speed of light is 299,792,458 m/s by definition, don’t you have to measure it? Is there a non-circular way to measure it?”
No, it’s not circular. You have defined standards for a velocity and a time. Multiply those together, choose a convenient* scaling factor, and you have a length. It’s true that you have to, in effect, measure that length, but that is the only unknown in this problem. The reason you do it this way, rather than using defined standards for length and time and measuring the speed of light in terms of those standards, is that the uncertainty in your length standard dominates the uncertainty in what is known to be a fundamental constant of the universe. There are certain high-precision measurements where the experimentalists need the sort of precision that theorists automatically get by setting c = 1, etc.
*Where convenient is defined in terms of not drastically altering measurements made with the previous standard.
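A schematic example of what measuring a length by measuring a time looks like in practice (the 2 ns round trip is an invented number, not a real setup):

```python
c = 299_792_458            # m/s, exact by definition

round_trip = 2.0e-9        # suppose a light pulse takes 2 ns to go out and back
print(c * round_trip / 2)  # ~0.30 m: the length inherits the precision of the timing
```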
Once you have two independent standards for measuring something (say inches and centimetres), you have to know the conversion factor, sometimes to great accuracy, for practical purposes. Seconds and meters are also two conventions for the same thing, and setting c=1 just means you use consistent conventions instead of confusing yourself by switching back and forth. It is also less obvious that the value of c is a statement about human history (whereas the number 2.54 converting between inches and centimetres is obviously not a statement about nature), so sticking with c=1 helps you avoid that common confusion as well.
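In that spirit, c is just a conversion factor like 2.54; a toy example (the nanosecond is picked purely for convenience):

```python
# Two conversion factors between human conventions:
cm_per_inch = 2.54                  # between two length conventions
m_per_light_second = 299_792_458    # between a time convention and a length convention (this is "c")

print(2 * cm_per_inch)              # 2 inches expressed in centimetres
print(1e-9 * m_per_light_second)    # 1 nanosecond expressed in meters: ~0.3 m,
                                    # so "a foot is a light-nanosecond" to about 2%
```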
I think people have sort of got it backwards. If we take the second to be defined in terms of cesium, with no use of the other two, and take c to be a constant, let’s call it 1, then so far we have no circularity. When we define the meter in terms of the other two, we are fine.
The problem is one of units.
Given these definitions, the second is fine, but speed needs to be defined in some unit, v (veloceters), where c = 1 Tv or something. Then distance is measured in v·s, and so one meter is 0.whatever v·s. This is all non-circular.
Now that we have a non-circular definition of the meter, we can get rid of the temporary veloceter concept by defining one veloceter as however many m/s and recalculating c in terms of m/s.
My point, which I seem to have lost, is that c is not DEFINED as 299… m/s; it’s defined as the speed of light in a vacuum. We then define the meter based on this speed, which is known to be constant. The point of the veloceters is to have a unit for c until we’ve defined the meter…
I like the original article, and Brian at #13 says pretty nicely why it’s not circular.
To add one point to the original article, it’s certainly true that “we can do a much better job of measuring time than position”, but that alone wouldn’t be enough to justify redefining the meter in terms of the speed of light and time.
You also want the technical details of measuring distances with light + time to work well. Fortunately, it turns out that you can do that extremely well: laser interferometers are one of the best ways to measure distances, so all you need to do is relate the frequency of your laser to the frequency of the cesium clock. The latter task isn’t so easy, but folks are pretty good at it (and thanks to frequency comb lasers, they’ve recently become amazingly better at it).
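Schematically, relating the laser to the clock is just wavelength = c / frequency (the helium-neon numbers below are approximate, for illustration):

```python
c = 299_792_458             # m/s
laser_freq = 4.74e14        # Hz, roughly a helium-neon laser

print(c / laser_freq * 1e9) # ~632.5 nm: count fringes of this wavelength and the length
                            # measurement traces back to the frequency (i.e. time) standard
```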
The technical details are important, and explain, ferinstance, why we’re still stuck with a mass artifact (for the time being) for the kilogram standard. Ion cyclotrons can compare the masses of molecular ions MUCH more accurately than the mass standard, which would incline one to redefine mass in terms of some atom, but there are big technical problems relating the mass of an atom to the mass of a baseball.
“Just join the club of theoretical physicists and put c=h=G=k=…=1.”
It’s just that all those invisible ones with units confuse the hell out of us experimentalists when we try some dimensional analysis!
There are some tricky semantic issues here. I think Brian got it basically right in post 13.
There is just one definition involved, not two: the meter is defined as the distance that light travels in that tiny prescribed time interval. Then the speed of light follows; it’s not a definition in its own right.
“This is all well and good, but until we have a metric week (and month) I’m not going to be happy.”
Apparently revolutionary France tried for a metric week, but for some reason nobody was very happy about having one day off in 10 rather than one in 7, so it never caught on.
Problems with a metric week and days off? Easy. Just work five days and take two days off. You don’t have to take your days off on the weekend.