Comments on Deslauriers “Improved learning in a large-enrollment physics class”.

The role of traditional teaching is taking a beating from some/many quarters in HE. Often the evidence against traditional teaching comes from educational research which, by the standards of a typical scientist, is rather questionable. This is a pity, as the very positive messages about non-traditional methods can become lost. It gives entirely traditional teachers the opportunity to disregard the findings as nonsense, when there is clearly benefit in using methods other than a straight two-hour PowerPoint monologue. I have discussed the Freeman paper on active/passive learning in STEM, which is very influential but has its issues, as discussed previously here. Another highly influential paper on active/passive learning was published in Science (Deslauriers et al. 2011, Improved Learning in a Large-Enrollment Physics Class, Science, 332, 862-864). This is one of the most influential journals in science, so only studies that are highly relevant and conducted using the finest experimental design are even considered for peer review, let alone published. This study therefore probably deserves more attention than your run-of-the-mill education journal.

The study’s main outcome is that the deliberate practice concept resulted in better scores than traditional teaching in a single group of students after 3 hours of teaching, where deliberate practice was delivered by a young, inexperienced tutor and traditional teaching was delivered by an experienced academic. That’s it. One group of students. 3 hours of tuition. Two separate tutors doing two separate things. One MCQ test. No controls. Blimey! I have seen more comprehensive studies proposed for an MEd final project!

So where do we start?

The whole study is based on 3 hours of tuition by a highly rated and experienced academic instructor with above-average student feedback scores (so that means good at teaching, right?) giving a traditional lecture vs. 3 hours from an inexperienced post-doc. Now the results show that the post-doc wins hands down, but the supplementary data explains that one of them is highly animated and the other not so. Why have two variables (instructor and method of tuition), especially since the study is not repeated? This is essentially an n=1 experiment with no controls.

Why not have the experienced academic deliver both methods, in part to control for the two-variable problem, and in part to show that the deliberate practice concept is transferable even to experienced traditional academics, and not just to young, enthusiastic post-docs? Even better, why not get the new post-doc to deliver both methods too, either in the same academic year or with the following intake of students? You would then have something that resembles a controlled experiment.
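The fully crossed design I am suggesting is easy to state explicitly. The sketch below is entirely hypothetical (the labels are mine, not the authors’); it simply enumerates the four instructor-by-method cells that would let the two confounded variables be separated:

```python
# Hypothetical 2x2 crossed design: each instructor delivers each method.
# Comparing cells that share an instructor isolates the teaching method;
# comparing cells that share a method isolates the instructor.
from itertools import product

instructors = ["experienced academic", "inexperienced post-doc"]
methods = ["traditional lecture", "deliberate practice"]

# Four sessions, one per (instructor, method) combination.
design = list(product(instructors, methods))
for instructor, method in design:
    print(f"{instructor} delivers {method}")
```

With all four cells run (ideally over two intakes, to wash out cohort effects), the method effect could be estimated within each instructor rather than being hopelessly confounded with who delivered it.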

The students were informed that something new would be done and were told why. Since the traditional group would probably hear much the same, the Hawthorne effect needs to be more adequately controlled for. The experimental group had higher attendance than the traditional control group, which is to be commended. But is this the sole reason that scores were higher? The test group also engaged better (the Hawthorne effect again?). There is no mention of any correlation of attendance with outcomes in either group, so was it the increase in attendance from 45% to 85% that increased test scores? If that factor (attendance) could be assessed in isolation, we might start to get at causative effects.

The cohort of 850 was split into two groups of approx. 270. Hmm, what about the others? They might have acted as a nice ‘no intervention’ control, if only for the Hawthorne effect. Seems a bit odd to me.

I really disagree with “amounts of learning” being used in an absolutely quantitative manner. For example: “… and more than twice the learning in the section taught using research-based instruction”. Twice the learning? The assessment is by MCQ, so twice the score is twice the learning, yes? Well, no: some questions are 50:50, some are 1 in 5, some are simple enough that even I could answer them, and some look really quite tricky. The results may be far more, or far less, impressive than the two-fold difference the authors state.
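To make the point concrete, here is a sketch using a standard chance-guessing correction (“formula scoring”) with made-up numbers, not the paper’s data. The same pair of raw MCQ scores can imply anything from under a 2-fold to a 5-fold difference in items genuinely known, depending entirely on how many options each question has:

```python
# Hypothetical illustration: correcting raw MCQ marks for blind guessing.
# If unknown items are guessed among k equally likely options, then
#   known = (raw - n_items/k) / (1 - 1/k).

def chance_corrected(raw_correct, n_items, k):
    """Estimated number of items genuinely known, assuming blind
    guessing (k equally likely options) on every unknown item."""
    expected_by_chance = n_items / k
    return max((raw_correct - expected_by_chance) / (1 - 1 / k), 0.0)

# Made-up raw scores on a 12-item test: one group 7/12, the other 11/12.
# If every question were 50:50 true/false (k = 2):
print(chance_corrected(7, 12, 2))   # -> 2.0 items known
print(chance_corrected(11, 12, 2))  # -> 10.0 items known: a 5-fold gap
# If every question had five options (k = 5):
print(chance_corrected(7, 12, 5))   # ~5.75 items known
print(chance_corrected(11, 12, 5))  # ~10.75: less than a 2-fold gap
```

So “twice the raw score” does not translate into “twice the learning” without knowing the guessing structure of the test.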

The study propagates the false-dichotomy myth that teaching has to be either traditional or progressive. The Freeman paper (as discussed previously on my blog) included any study where at least 10% of the session was ‘active learning’, so by definition up to 90% could still be traditional passive lecture. I doubt whether either 100% active or 100% passive is best for in-depth assimilation of the required breadth of content. All too often I hear of ‘ditching content’ to facilitate in-depth active learning, using the Freeman paper as evidence. Some of this content-ditching may be valid in favour of some active approaches, but it also gives the impression of an anti-knowledge agenda (I’ll save that for another blog).

What really disappoints me is that this type of study could have been conducted over a couple of academic years to obtain controlled data and remove all the competing variables. I still don’t think it would be anywhere near worthy of publication in Science, even if all of the findings were replicated, but at least it would be done properly. We could then make a sensible decision on what teaching methods work best for our students. As it is, this paper allows the more traditional teaching arm to disregard the evidence as rubbish or unreliable on the basis of n=1 and no controls (and hence, no causative factor identified).

I am a little concerned at the schoolboy error of describing what is clearly non-parametric data using means and (presumably) standard deviations. No attempt at any statistics was presented.
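For what the reporting might have looked like without the normality assumption, here is a minimal stdlib-only sketch (with invented scores, not the paper’s data): medians with quartiles rather than mean and SD, plus a hand-rolled Mann-Whitney U statistic, the natural non-parametric comparison for bounded, skewed test scores:

```python
# Sketch of non-parametric reporting: median and quartiles instead of
# mean +/- SD, and a Mann-Whitney U statistic (ties counted as half).
from statistics import quantiles

def summary(scores):
    """Return (median, lower quartile, upper quartile)."""
    q1, q2, q3 = quantiles(scores, n=4)
    return q2, q1, q3

def mann_whitney_u(a, b):
    """U statistic for sample a over sample b."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

control = [3, 4, 5, 5, 6, 6, 7]    # invented scores
test_grp = [5, 6, 7, 7, 8, 9, 9]   # invented scores

med, q1, q3 = summary(test_grp)        # 7.0, 6.0, 9.0
u = mann_whitney_u(test_grp, control)  # 42.0 out of a possible 7*7 = 49
```

U would then be compared against its null distribution for a p-value (or `scipy.stats.mannwhitneyu` used directly); the point is simply that the appropriate machinery exists and was not used.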

Let’s save the best ’til last. The authors are the tutors. OK, not ideal, as preconceived bias could creep in, but this is the journal Science. That would not happen, and would be adequately controlled for in some way. Or maybe not. The authors state “…but we believe that educational benefit does not come primarily from any particular practice but rather than from the integration into the overall deliberate practice framework”. This might be OK in a final conclusion, after seeing the data, but NOT in the section describing the design goal. They are effectively saying “We believe our hypothesis to be true, and we will be the tutors who gather the appropriate evidence to prove that we are correct”. This is not how to do science, and I am amazed that the journal Science let this part through. It is the single most disturbing part of the whole paper for me (the lack of controls is almost as bad). I have previously blogged about the white swan hypothesis. I have also written about ideology and belief, and how they should have no place in science. This paper ticks both boxes.

OK, that’s enough criticism for now, as there is a decent probability that the test group genuinely learned more, and we could distil some really good practice from this paper. I do like the subtle hint of Sceptic-Proponent collaboration in here, but that would only be valid if the traditional academic’s view was that traditional was best, and that academic had been converted by the findings.

See my initial thoughts below from when I first read the paper. Let me know if I’ve got the wrong end of the stick on this one.


About TheOtherDrX

Senior Lecturer in Biosciences. MSc Biosciences course leader and lecturer on topics such as Cell Biology, Molecular Pathology and Genetics. I manage a research team of PhD students and post-doctoral scientists working on novel anti-tumour drug combinations, nanotech-based delivery of anti-tumour agents, and artificial scaffolds for 3D cell culture studies as a replacement for animal-based studies. I also do a bit of STEM public engagement work with my Geiger counter.

7 Responses to Comments on Deslauriers “Improved learning in a large-enrollment physics class”.

  1. Nick Greeves says:

    Assert, assert, conclude with support for assertion. Publish in Science (!), Job done?

  2. Pingback: Should we adopt more active learning at the expense of cutting the STEM curriculum? | TheOtherDrX's Higher Education blog

  3. Aaron F. says:

    What do you mean when you say there were no controls? If I’m reading the paper correctly, this experiment followed the study design of a typical Phase III clinical trial: patients are split into two groups, with one group receiving an experimental treatment while the other group continues to receive the current standard of care. It’s my understanding that, in this kind of trial, the usual care group plays the role of the control group. Where am I mixed up?

    • TheOtherDrX says:

      There are two variables: the experienced vs. the inexperienced-but-enthusiastic lecturer, and the pedagogy employed (plus other confounders, such as time of lecture, if my memory is correct). You need a single variable to be sure of the cause of the effect. No clinical trial has two variables between groups. Oh, and the Hawthorne effect is probably at play, so a more extended study is needed in which any new pedagogy is no longer so ‘new’.

    • TheOtherDrX says:

      See my comments scribbled on the script at the bottom of the article. Those on the RHS say ‘why not get each instructor to repeat the session but reverse their teaching style’, or words to that effect. That would adequately control for differential instructor input, but obviously might reverse the study result! If they had done this I might not have been quite so critical.
