Active Learning Experiment: Nearly the End

As noted in previous posts, I’ve been trying something radically different with this term’s classes, working to minimize the time I spend lecturing, and replace it with in-class discussion and “clicker questions.” I’m typing this while proctoring the final exam for the second of the two classes I’m teaching, so it’s not exactly the end, but nearly everything is in except the student evaluations, so here are some semi-final thoughts on this experiment:

— On the whole, I think it went reasonably well, though things definitely flagged toward the end, particularly in the regular mechanics class. Once we got past the Momentum Principle (the Matter and Interactions term for Newton’s Laws) and started on energy, it got much, much harder to get anybody to talk. I think this was a combination of three things: one, that I re-shuffled the discussion groups at around that time, and it took a little while to re-establish chemistry in the new groups; two, that the material was more abstract, and thus harder for them to reason about effectively; and three, general fatigue as the term wore on.

There’s obviously nothing to be done about point three (which is particularly bad in our crazy trimester calendar– in a longer term, everything starts to drag in the middle, but you usually get a break 2/3rds of the way through, and things bounce back a bit. We hit the low point, and then just stop). I’m not sure whether shuffling groups was a mistake or not– a couple of them definitely were falling into a rut, but I’m not sure they did any better after the reorganization (which I did based on the scores on the first exam). Point two is the one that seems to be addressable, but it’s hard to come up with better questions.

— Of course, the really important question here is not so much how the dynamics felt, but whether the class worked. The evidence on this is kind of mixed.

The standard way of evaluating these sorts of things is via test scores, either on in-class exams and assignments, or on conceptual tests. As far as the exams and assignments go, I don’t see any significant difference between the current classes and past classes taught in a more traditional manner, but that can be deceptive, because there’s a natural tendency to tailor the exams to what’s been covered in class. So, you know, not much to go on there.

In the conceptual test area, the evidence is decidedly mixed. The test we use is the Force Concept Inventory (for a variety of reasons), and the usual measure people use for assessing a course is a “normalized gain” score: you give the test once at the beginning of the class, and again at the end, and look at the change in a student’s score divided by the possible increase. This is to account for the fact that a student who initially did very well on the test can only increase their raw score by a small amount, compared to a student who scored very low. A student getting 10/30 on the pre-test and 20/30 on the post-test would have a normalized gain of 0.5 (10 points increase out of a possible 20), as would a student going from 20/30 to 25/30 (increase of 5 out of a possible 10).
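For concreteness, here’s that arithmetic as a couple of lines of Python (just a sketch of the formula above; the function name is mine, and the 30-point maximum is the number of FCI items):

```python
def normalized_gain(pre, post, max_score=30):
    """Change in a student's score divided by the possible increase."""
    return (post - pre) / (max_score - pre)

# Both examples from the text come out to 0.5:
print(normalized_gain(10, 20))  # (20 - 10) / (30 - 10) = 0.5
print(normalized_gain(20, 25))  # (25 - 20) / (30 - 20) = 0.5
```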

The rough figure for normalized FCI gain expected from a traditional lecture course is around 20%. That is, a pre-test score of 10/30 would increase to 14/30. “Active learning” methods have been shown to produce gains of 40% or more on a fairly consistent basis in trials at many different institutions.
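That 10-to-14 figure is just the gain definition run in reverse, which you can check with a one-liner (again, the function name here is mine):

```python
# Running the definition backwards: the post-test score implied by a
# given pre-test score and normalized gain.
def expected_post(pre, gain, max_score=30):
    return pre + gain * (max_score - pre)

print(expected_post(10, 0.2))  # 14.0, the traditional-lecture figure above
```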

So, how did my classes do? Well, one of the two had a median normalized gain of 0.5, while the other was… 0.2. If you prefer means to medians, they were 0.46 and -0.09, though that last figure is a statistical fluke, I think, dominated by one guy who somehow got 29/30 on the pre-test but 25/30 on the post-test.
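To see why I call that a fluke: a student starting at 29/30 has only one point of possible improvement, so going 29-to-25 works out to a normalized gain of (25 - 29)/(30 - 29) = -4.0. A quick sketch with made-up numbers (emphatically not the real class data) shows how a single score like that swamps the mean while barely moving the median:

```python
from statistics import mean, median

# The 29-to-25 student: near the ceiling, the denominator is a single
# point, so a small slip becomes an enormous negative normalized gain.
outlier = (25 - 29) / (30 - 29)
print(outlier)  # -4.0

# Made-up gains for the rest of a small class (NOT the real data):
gains = [0.5, 0.4, 0.3, 0.3, 0.2, 0.1, -0.1, -0.2, outlier]
print(median(gains))  # 0.2
print(mean(gains))    # about -0.28
```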

So, why the big difference? Two things:

1) Different Curricula. The course with the large gain was the first term of our Integrated Math and Physics sequence, which covers the first half of introductory classical mechanics along with the first course in the calculus sequence, team-taught by one physicist and one mathematician. The physics material stops right at the end of the section on forces, just before we start energy. The regular course covers that material in the first 5-6 weeks, then has an additional 4-5 weeks on energy and angular momentum.

The Force Concept Inventory, as you might guess from the name, deals only with forces. As a result, the IMP class was, in a certain sense, “teaching to the test.” The students in the regular intro course got that same material, but had 4-5 weeks of other stuff after it to confuse them and give them time to forget. For that reason, you might naturally expect the IMP class to do better on the FCI.

2) Different Populations. The IMP class is all first-year students with projected majors in engineering (some math and science, but mostly engineering), selected on the basis of math placement test score. The regular intro course is primarily upperclass students: an odd mix of chem and bio majors taking Physics to fulfill a requirement for their home department, and sophomore engineering majors whose math preparation was shaky enough that they weren’t able to take Physics in the winter of their first year, as is the usual practice.

Not only that, but there’s also the fraternity issue. Students interested in joining a fraternity or sorority rush during the fall of their sophomore year, a period that fell right in the middle of this class. I know for a fact that this had an impact on the scores– the one 29-to-25 student dominated the average gain, but there were two other negative gains, one of which was absolutely frat-related (the student in question told me as much).

Now, that’s not the whole story, because not all of the students were pledging, etc., but it certainly muddies the waters. A better comparison might be to look at the overall distribution of scores in the regular class compared to the overall distribution for a more traditional class format. I don’t have pre-test/post-test scores for another off-sequence intro class, so it’s hard to do a direct comparison, but comparing to last winter’s on-sequence majors course doesn’t show much of a difference between the traditional lecture and this term’s experiment with active learning.

So, all in all, a very mixed bag, and it’s hard to tell what to make of it all. The student course surveys, when I get them, will be another useful bit of information, but I’m not sure how much weight to really put on those, either.

Subjectively, though, I thought this was a much more interesting way to approach the class, and having gone through it once, I have a better idea of how things should work, and how to tweak it to make everything run better (I think). So, all other things being equal, I think I’d probably do it again this way next year. But we’ll see when all the final numbers come in.

4 thoughts on “Active Learning Experiment: Nearly the End”

  1. I’ve always thought that the huge amount of lecture students must sit through is not so productive.

    Since we are biologically wired for motion, students find mechanics easier to visualize.

  2. Chad, I’ve been really interested in this active learning experiment and have enjoyed following along. The one thing that stood out to me in this post was the student whose grade fell during pledging. As a former member of Greek life at Union, it disappoints me to read that. I know my GPA increased during pledging, and a lot of the fraternities do stress and actively make sure their pledges stay on top of their grades. Did you have a lot of student athletes in your classes? I know the fall is one of the busiest seasons and it’d be interesting to see how that was handled by the students. I’ll be looking forward to reading the follow up post with the final results.

  3. You have given me great links to terrific blogs (Booman for one) so would you check out “A Don’s Life” by Mary Beard. The one you might enjoy about teaching is 10/19/11. Not Mark Hopkins, that’s for sure. And you can show pictures of your kids any day of the week–or twice a week–because they are adorable.
