A frequent complaint from students is that exams offer little opportunity to learn from them. This has prompted education commentators to question their use over other forms of assessment, where students can learn from the work and gain useful feedback on it. Well, I didn’t learn to drive on my driving test, but I admit that is a lazy and poor answer. Whatever the weaknesses of exams (which Phil Race has covered very nicely recently), they hold a prominent position in STEM subjects at least, a position that will not change as long as accreditation bodies promote their use. Furthermore, exams appear on our Key Information Sets (KIS), and their presence there is deemed a good thing. So, perceived weaknesses aside, let’s make the most of them.
Feedback from every exam I have ever taken has been in the form of a number or a letter. Actually, not quite every exam: I received some excellent feedback from my driving test examiner, but apart from that, nothing. I have never seen a marked script, and I don’t know where my strengths and weaknesses lie. Why is this? We give copious amounts of feedback on other assignments, so why not here, where students probably make the same mistakes time and time again? It is largely down to the practicalities of handing scripts back, and of dealing with the fallout of contested marking. GCSEs and A-levels are tricky because they are run by outside exam boards, but what about in universities? I set the papers, I mark the papers, and the marked scripts sit under my desk from one year to the next, when they are disposed of. I have no excuse for not using them for educational gain.
I recently ran a short trial of giving exam feedback to students. My initial approach was a single sheet outlining which key points on my mark scheme had been met for a basic pass, merit or distinction, and which areas had been missed completely, with room for added extras that were relevant but not on the prescribed scheme. A simple marking matrix, really. Students preferred this to receiving just a grade, and for the first time they got a question-by-question breakdown of marks, but they grumbled that they would like to see the scripts themselves. I will probably do this again, but it takes significantly more time than I am allocated for exam marking. A colleague showed students their scripts on a different module, and they were ‘happier’. However, most students had effectively left the building for that academic year, or even for good, so this is still not logistically ideal.
So why not make use of the box of scripts under the desk from last year? I frequently get asked, “What does it take to pass, or to get a 2:1 or a first in this module?” At the last taught session of the module, I get out last year’s attempts and, after anonymising them, hand them out. Students are sometimes astounded that we really do give 100% for essay answers, and are struck by the sheer quality and thought put into that work. They often comment on the poor quality of the 40% scripts, but I make it clear that many in the room will probably produce work of equal or lower quality. I then ask them what they could produce, right now, on the same questions. The hope is that the real hard work of exam revision starts at this point.
So how do students gain from this? Firstly, students realise that they are not going to get a good mark in my modules by regurgitating my lecture notes, especially when answers are out of context to the question. Secondly, at second year and beyond, marks at first-class level are not given without clear evidence of further reading, or independent thought that can be clearly identified on the script. Students see where last year’s cohort did this and were rewarded. Thirdly, students see that the marking scheme is non-linear: it is much easier to get the first 40 marks, as defined by the minimum pass-level descriptor, than the next 20 marks, which require much more context and argument. It is not just about writing 100 facts and counting up 100 ticks for full marks. Finally, students see how answers become very good answers when clearly put into context. A well-annotated diagram linked to explanatory text, for example, can quickly demonstrate understanding of a complex concept.

Seeing evidence of how assessors award marks, and what for (well, how I assess them on their module), may account for the unusual marks profiles in most of my modules. Many modules in my department have coursework marks that are 10–20% higher than exam marks, and good coursework marks can easily compensate for a failed exam under our current academic awards framework. Since employing the strategy of letting students see last year’s scripts, exam marks have been catching up with coursework marks, and for the first time exam marks exceeded coursework marks in two of my modules.
One final point. You may ask: “If you only use last year’s scripts, you are not letting students learn from their own mistakes. Isn’t that central to giving good feedback?” I disagree, up to a point. Students show clear evidence of being able to learn from other people’s mistakes, not just their own. Maybe they learn better when the mistake is someone else’s.
Mr Schadenfreude cartoon