
Can Universities do More with the Data they Receive from Module Surveys?

Written by John Atherton, Higher Education Director (UK & Ireland).

One of the most common criticisms aimed at universities by students is that they do not act on the feedback received through module evaluation surveys.

While this is undoubtedly a frustration and an issue that needs to be addressed – not least because surveys traditionally take place when modules are nearing completion, meaning any impact on that cohort of students is practically impossible – there is a bigger picture.

The real challenge for most universities in the UK and beyond is developing a wider system that allows them to gather insight into students’ learning experiences and then use it for quality assurance and quality enhancement purposes.

Our research report, The Student Voice: How can UK universities ensure that module evaluation feedback leads to continuous improvement across their institution?, reveals that while some institutions are making advances in this area, others are at the start of their journey and restricted by the absence of consistent, institutional approaches to module evaluation.

“Academic schools know that this is part of building trust with students and often evaluation is more important for how it is used than what it says,” explained Professor Sarah Speight, Associate Pro-Vice-Chancellor for Teaching and Learning at the University of Nottingham. “We know the data is being used and acted upon: we know there are action plans for how the results are being used, and we have an annual monitoring process and review of the schools every three years that enables conversations about the use of our surveys. Importantly students are aware of how the data is being used.”

Other senior leaders highlighted the need to incorporate and directly link module evaluation data into their wider institutional planning. Professor Wyn Morgan, Vice-President for Education at the University of Sheffield, said: “Module evaluation is not done in pure isolation. We have an annual reflection process, analysing what students are saying, how we respond to it, and the impact of any changes that have come about from responding to prior feedback.”

Coventry University’s Deputy Vice-Chancellor (Student Experience) Ian Dunn added: “We act on the data at multiple levels: Faculty, School and course, but also institutionally through trends on teaching and learning. Course reports are built around module evaluation surveys and are also mapped into annual reporting.”

However, this agenda is still a challenge for most institutions. “We are striving for a more consistent approach to module evaluation and benchmarking across the University,” said Professor Karl Leydecker, who at the time of the interview was Vice-Principal (Learning and Teaching) at the University of Dundee. “We also know there are opportunities to generate live feedback and lighter touch mid-module reviews along with end-of-module surveys which the sector has traditionally relied on, moving towards feedback that can make a difference now rather than for the next cohort of students.”

Professor Sharon Huttly, Pro Vice-Chancellor (Education) at Lancaster University, explained: “Students are undertaking end-of-module evaluations when they have not necessarily completed all their assessments, or have the scope to compare one module to another, which is one reason why we are refreshing our programme evaluation. There are also challenges around evaluating and making changes while a module is running – you can do this to some extent but not root-and-branch changes to delivery, which have to wait until it next runs.”

Dr Becky Schaaf, Vice-Provost for Student Experience at Bath Spa University, added: “We find that mid-module evaluation can be more useful than end-of-module evaluation as it enables within-module changes to be made if necessary. We also know that students perceive that providing mid-module feedback might affect their marks. The problem with a flexible approach is the inability to generate institutional reporting mechanisms – there is no oversight or opportunity for comparison – therefore, we need a central tool delivered consistently.”

Universities have an opportunity to use the data from module evaluation surveys to understand issues and trends among their students – across the student population as a whole as well as within specific demographics – and to analyse the wider student experience.

The variety in approach across the sector stems from how data is gathered and reviewed, and the way the evidence is used at a local or institutional level. There is clearly more that can be done with the data universities have at their disposal.

What is encouraging, however, is that the value of formative as well as summative feedback is generally recognised and, in places, is already being acted upon.



