Two pieces of US research have caught my eye over the past month, both of which support the observations I made in my last blog on how student evaluation and wider student engagement are generating more mainstream attention.
The first study, from the University of North Carolina, found that US universities are reluctant to reduce their reliance on student evaluations of faculty for promotion and salary decisions, despite mounting evidence that these evaluations disadvantage women and ethnic minority lecturers. It showed that white male lecturers tend to receive higher ratings than female and non-white lecturers in end-of-course questionnaires.
The issue of potential bias was a theme that contributors touched on in our insight report, The Student Voice: How can UK universities ensure that module evaluation feedback leads to continuous improvement across their institution? At the time of interview, Professor Sarah Speight, Associate Pro-Vice Chancellor for Teaching and Learning at the University of Nottingham, revealed: “We are doing equality impact assessment – can we see a difference in the feedback given to women and men, international and UK lecturers, and junior and senior academics to see if there are any internal biases in the questions set and answers given?” Professor Speight also explained how the academic community use published module evaluation results to support applications for promotion.
Evaluation of individual teaching staff, as well as of the module, is not uncommon in my own experience. Nor is the link between positive student feedback and pay and progression. But I would also suggest that student evaluation of performance is just one factor in such discussions; there is no “reliance” on surveys as such.
The second study, published in a Massachusetts Institute of Technology discussion paper, suggests that teaching via the flipped classroom method fails to boost student performance. The results also find that this approach – in which students are introduced to learning material before a taught session and then spend class time engaging in problem-solving and discussion – may also exacerbate achievement gaps between groups of learners.
Whilst I am not an expert in the flipped classroom, I do have an informed view that universities – through capturing the student voice more effectively – can ascertain more quickly which approaches to learning do make a difference. It is also common sense to suggest that students who are engaged in lectures, seminars and workshops are less likely to drop out of their course, more likely to achieve good outcomes, and more likely to be satisfied – which is so important, of course, for NSS results.
In our report, The Student Voice, there is a whole chapter devoted to institutional improvement and how universities need to better understand which approaches to teaching generate the highest levels of engagement, and to plan for the future based on evidence. Data analytics play a big part, not least from module evaluation surveys.
These surveys are recognised by senior leaders as playing a strategically important role in the ‘student voice’, providing institutions with the opportunity to respond to any issues and concerns before it is too late. They also enable individuals, departments, faculties and universities as a whole to reflect on their teaching practice and the wider student experience.
John Atherton is Higher Education Director (UK & Ireland) at Explorance