There have been some amusing Twitter debates recently on the topic of ‘no hands up’ policies, and the contentious practice of putting questions to randomly chosen pupils. Neither picking random students nor letting students volunteer answers really addresses the issue of tutors assessing the understanding of an entire group. We get a potentially unrepresentative snapshot, and by carefully selecting who answers, or which question gets asked, we can convince ourselves that we are doing a great job. Working in a University setting with large groups, I am focussed on assessing whether everyone in the group understands, who doesn’t, and the reasons for any lack of understanding.
So how can all students be encouraged to participate in Q&A sessions, whilst allowing the tutor to assess the whole cohort’s understanding of key concepts? One approach, used particularly in Universities, is mobile technology. Apps such as Socrative allow the group to submit answers which can be displayed on screen, but they require users’ own devices, which may be OK for Universities but is not really applicable to schools. Similarly, Twitter can be used: students give an answer in class and the answers appear on screen. On the downside, the majority of replies are off-task, although once the novelty wears off, maybe this approach will have some merits. ‘Clicker’-type voting devices are another option, as they do not require the use of students’ own mobiles, but programming individual devices to individual students, if such information is needed, can be a barrier, especially for large cohorts.
I frequently put a question on screen, and then ask groups of up to 200 undergraduate students the following questions (in this order, with typical responses noted):
Hands up everyone who thinks the answer is true (25%)
Now hands up if you think it is false (25%)
Hands up who doesn’t know (25%). And as I lose the will to live…
Hands up who doesn’t care (25%).
If that doesn’t work, I ask for both at once: left hand for true, right hand for false. And so on. Getting large groups to answer questions so that I can gauge levels of understanding of key concepts is not easy, especially in my admittedly didactic teaching sessions.
The iCard voting system: For when Technology-Enhanced Learning seems like an unnecessary evil
Expanding on the concept of left-hand or right-hand up is the simple use of coloured voting cards. Hold up a red or green sheet of A4 to test understanding of a key point. It allows the tutor to see who understands (or, if we are being pedantic, who thinks they know the answer), who doesn’t, and who is disengaged completely. However, a 50:50 question is not very informative. This is where the iCard comes in. Four (or more) pieces of coloured card (A6 should suffice) linked by a treasury tag can be given out at the start of the session. Questions can then be put to the entire group, ranging from simple recall of a straightforward fact that is central to understanding a concept, to a more testing question that requires a few minutes of working out.
Is it useful?
Teaching large groups of anything up to 200 renders ‘hands up…’ pretty useless. Even in a smaller group, getting everyone to consider the question, and seeing evidence of some effort by all students, is near impossible. In contrast, the iCard-type approach does work. My initial use of this was in an end-of-semester informal test with 60 students. Questions were projected via PowerPoint, each with four colour-coded answers.
Initially, as anticipated, a minority did not engage well, but 90% were happy to answer all questions. To get the remaining 10% to engage, there has to be an element of bullying. These students soon found out that if they didn’t answer with the rest of the group, I would push them individually for an answer, and then inform the rest of the group whether they were right or wrong. They soon started to answer with the rest of the group.
What did I learn from using the iCard?
1) Student misconceptions on ‘simple’ fundamental points. Some of the questions were deliberately simple, and I anticipated >95% of respondents giving the correct answer. By quickly scanning the room for incorrect answers, I could identify who got the answer wrong and, crucially, what the misconception was. I could explain why the given answer(s) might be incorrect without necessarily highlighting students with wrong answers, by encouraging students to keep their card ‘close to their chest’.
2) Identifying weaker students. As the majority of students answered each individual question correctly, I could focus my attention on the incorrect answers. As expected, some weaker students consistently answered incorrectly. However, some students whom I had down as particularly strong were exposed as having gaping holes in their knowledge, sometimes on fundamental points.
3) Identifying topics that were poorly understood. Two topics out of 11 were particularly poorly answered. I can now look at, and take action on, a) how those topics were delivered, and/or b) whether some underpinning knowledge is missing from earlier in the course.
4) Poorly worded questions can trip up students. All questions should have only one correct answer. However, questions can be ‘read’ differently, and in two questions an ‘alternate reading’ would lead the student to answer incorrectly. By discussing why answers are wrong, the students could argue their case. As a result, I will be re-writing a few questions before using them again.
5) Students’ responses can spark debate over contentious points. As noted above, students are happy to argue with me if they think that I am wrong. Although the cards do not allow students to express views or give complex, well-articulated answers, they are an ideal way to initiate subsequent debate.
I must stress that this is not a new concept, and it is certainly not my idea. The cards are cheap and simple, yet seem to be effective, especially with carefully worded questions or tasks. This type of in-class formative assessment seems to be only sparsely used in Universities, where large-group teaching is common and where, if anything, mobile technology and clickers are being introduced more widely. With such diverse opinions on how tutors should assess student understanding during sessions before ‘moving on’, maybe it’s time to re-visit some old technology before blindly adopting a technology-based approach. On the positive side:
1) They are very cheap and easy to make.
2) They are applicable where tutors are assessing a right/wrong or MCQ answer, or where specific, defined opinions of a group are being sought, and especially for large groups.
3) The bureaucracy of obtaining anything remotely costly or technological can be a barrier to implementation; cards side-step this entirely.
4) Some tutors will always remain as technophobic as humanly possible, and a minority are particularly ‘risk averse’. Even technophiles have concerns that the time spent teaching students how to use any technology, and the risk of technology failure, may detract from the learning.
On the negative side:
1) There is no permanent record of who voted for which answer, only the tutor’s judgement on who or what to follow up.
2) You will get it in the neck from advocates of Technology-Enhanced Learning.
In summary, all students get the same questions and the same treatment, and there is less need to ‘pick on’ individual students. Please feel free to comment on potential uses and, importantly, limitations of use.
And here is me rambling on about it at a recent TeachMeet.