Projectile Motion, Uncertainty, and a Question of Ethics

We no longer do what is possibly my favorite lab in the intro mechanics class. We’ve switched to the Matter and Interactions curriculum, which spends much less time on projectile motion, so there’s no longer room for the “target shooting” lab.

It’s called that because the culmination of the lab used to be firing small plastic balls across the room and predicting where they would land. In order to make the prediction, of course, you need to know the velocity of the balls leaving the launcher, and making that measurement was the real meat of the lab. The way I used to run it, the lab also included a nice introduction to error and uncertainty, particularly the difference between random and systematic errors.

I’m sorry to see this one go (I don’t think there’s any way short of a calendar change to fit it into the new curriculum), and I’ve toyed with the idea of trying to write it up for some pedagogical journal or another. I’m not sure that would be permissible, though, given the way the thing works.

The whole thing really came from pasting together two previously existing versions of a projectile motion lab. One colleague used to have students measure the velocity of a ball leaving a PASCO launcher by measuring the maximum height reached by the ball. Another used two different timing methods: students measured the time the ball spent in the air with a stopwatch, and then did essentially the same thing with a high-speed video camera. What I did was to combine the three methods into one lab.

This works really nicely as a way to get at the different kinds of error and uncertainty in a lab. The maximum-height method turns out to be very good: the projectile launchers fire the ball a bit more than two meters straight up, and students standing on the table can measure the height to within a centimeter or so. The systematic errors in this case are pretty limited; basically, the only one that comes in is a parallax effect for students who are too short to align their eyes with the maximum height. Averaging together ten or so measurements gives a statistical uncertainty of less than a percent.
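
For concreteness, here’s a minimal sketch of the maximum-height calculation (in Python, with invented numbers rather than our actual class data): the launch speed follows from v = √(2gh), and the spread of repeated height measurements sets the statistical uncertainty.

```python
import math
import statistics

g = 9.81  # gravitational acceleration, m/s^2

# Hypothetical maximum-height measurements (m) for ten shots from the same
# launcher; illustrative numbers only, not actual student data.
heights = [2.13, 2.11, 2.14, 2.12, 2.15, 2.10, 2.13, 2.14, 2.12, 2.13]

h_mean = statistics.mean(heights)
h_sigma = statistics.stdev(heights)        # standard deviation of the trials
h_err = h_sigma / math.sqrt(len(heights))  # standard error of the mean

# Launch speed from kinematics: v = sqrt(2 g h)
v = math.sqrt(2 * g * h_mean)

# Propagated uncertainty: since v scales as h^(1/2), dv/v = (1/2) dh/h
v_err = 0.5 * (h_err / h_mean) * v

print(f"v = {v:.3f} +/- {v_err:.3f} m/s ({100 * v_err / v:.2f}%)")
```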

The stopwatch method, on the other hand, is plagued with systematic errors. Individual students will be fairly consistent with their times, but the measured time depends strongly on individual reaction time and on how the students choose to do the timing. Some students will be systematically fast, starting a bit late and stopping a bit early, while others will be systematically slow, starting early and stopping late. Averaging together ten measured times gives a statistical uncertainty that isn’t too much worse than the maximum-height measurement, but the two methods generally don’t agree within the statistical uncertainty.
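
A similar sketch shows why averaging doesn’t rescue the stopwatch numbers. Assuming the simplest case, a vertical shot timed from launch until it falls back to launch height, the speed is v = gt/2, so a consistent reaction-time offset shifts every trial, and therefore the average, by the same amount. Again, all the numbers below are invented for illustration.

```python
import statistics

g = 9.81  # m/s^2

# Hypothetical "true" flight time, and stopwatch times from a student who
# consistently starts late and stops early (invented numbers, not class data).
true_time = 1.30  # s
times = [1.18, 1.21, 1.20, 1.23, 1.19, 1.22, 1.17, 1.21, 1.20, 1.19]  # s

t_mean = statistics.mean(times)
t_err = statistics.stdev(times) / len(times) ** 0.5  # statistical spread only

# For a vertical shot caught at launch height, v = g * t / 2
v_measured = g * t_mean / 2
v_true = g * true_time / 2

# The average is precise (small t_err) but biased low by the reaction-time
# offset, which no amount of averaging will remove.
print(f"measured v = {v_measured:.2f} +/- {g * t_err / 2:.2f} m/s, "
      f"'true' v = {v_true:.2f} m/s")
```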

The video-camera method ends up agreeing very well with the maximum-height measurement, with similar uncertainty (the camera we used typically recorded 250 frames per second). The velocities determined from either of those measurements work well for predicting the range when the balls are launched at some angle, while the stopwatch velocities are much less successful.
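
For the target-shooting prediction itself, the calculation is just projectile kinematics. Here’s a minimal sketch for the simplest geometry, a launch that lands back at the height it started from; the real setup, firing from a table across the room, needs the more general quadratic solution for the flight time, but the idea is the same.

```python
import math

g = 9.81  # m/s^2

def predicted_range(v, angle_deg):
    """Horizontal range for a launch that returns to its starting height."""
    theta = math.radians(angle_deg)
    return v ** 2 * math.sin(2 * theta) / g

# Illustrative numbers: a 6.4 m/s launch at 45 degrees.
print(f"predicted range: {predicted_range(6.4, 45):.2f} m")
```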

Putting all of these together works in a few different ways. The difference between the stopwatch and maximum-height methods clearly shows the difference between random and systematic errors, particularly when students see a full class’s worth of measurements: some students get a stopwatch velocity that’s too big and others one that’s too small, while the maximum-height and video-camera methods always agree within uncertainty. Calculating the velocity gives them some practice with the kinematic equations, in two different forms (the video-camera and stopwatch methods use the same calculation). The multiple measurements involved let you introduce the standard deviation as a way of quantifying the uncertainty in a data set, and the calculation of the velocity uncertainty forces them to deal with propagating uncertainty.
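
(The propagation itself is pleasantly simple, at least in the idealized straight-up-and-down case: since v = √(2gh), the fractional uncertainty in v is half the fractional uncertainty in h, while for v = gt/2 the fractional uncertainties in v and t are equal. A centimeter on a two-meter height is about half a percent, so roughly a quarter of a percent in the velocity, consistent with the sub-percent figure quoted above.)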

It’s a nice lab, or at least, I think it’s a nice lab. I’m a little disappointed to see it drop out of use, and as such, I’m half tempted to use some of my infinite free time to write it up for The Physics Teacher, or some such (assuming that somebody else hasn’t beaten me to it, at least).

I’m not sure that would be entirely kosher, though, as the only way to really show how the whole thing works would be to include representative data sets from a couple of classes, to show how the numbers turn out. I have the data (at least, I’m pretty sure I do), because I used to have the students type their results into Excel so I could show them a scatter plot comparing the three methods, but I’m not sure I’d be able to use it. A comment made some time back by a colleague in psychology seemed to suggest that I wouldn’t be able to use data from students in my classes without first having cleared the use of those data. Which I didn’t, because I was just using the numbers for making a point during class. And, of course, since we’re no longer doing that lab, there’s no way to get new numbers…

I could be wrong, though, so I’ll throw this out there for people who know more about these sorts of ethical questions: would that sort of thing be a potential problem? It seems awfully silly, but it’s silly in the manner characteristic of academic bureaucracies, so I can easily believe that it would be forbidden by some rule or another, and it wouldn’t be worth the headache.

(Then, of course, there’s the fact that I’ve just described the whole concept on the blog, which I suppose could be seen as just as effective a publication channel, though without a bibliographic entry to put on my annual evaluation form…)