
Striving for Continuous Improvement: Why Module Evaluation Surveys are a Valuable Source of Data for Universities

Written by John Atherton.

How can universities create a culture of continuous improvement? One that informs immediate staff and student development and innovation in teaching and learning, and that is consistent enough to be benchmarked year-on-year.

This is an issue we explored in detail in our insight report published earlier this year, The Student Voice: How can UK universities ensure that module evaluation feedback leads to continuous improvement across their institution? With universities investing time and money in module evaluation, the expectation is that they will be able to better understand which approaches to teaching generate the highest levels of student engagement – and plan for the future based on evidence.

The integrity of the data, together with effective tools and practices for module performance management, is therefore critical. “An agenda for continuous improvement at institutional level is focused on enhancing the student experience rather than hitting metric targets, although data are vital in supporting this ambition; if the data are patchy, we are not going to get where we need to be,” said Professor Wyn Morgan, Vice-President for Education at the University of Sheffield. “For any teaching activity, we need to ensure we have evidence of the impact it has on students and their learning.”

At Bath Spa University, this evidence comes from the development of an academic reporting tool which encourages programme teams to embed a continual focus on enhancement. “This brings together a variety of data, including module evaluations, NSS, graduate outcomes, recruitment and retention information,” explained Dr Becky Schaaf, Vice-Provost for Student Experience. “To support this, we are strengthening our business intelligence tools and developing a clearer and simpler reporting system for staff. They will be able to look at a dashboard to see how their module and the wider course is performing against key indicators, including student recruitment, retention and employability. It is all part of a bigger understanding of the evolution of the subject.”

Others reported spin-off benefits from having stronger, more robust module evaluation data. The University of Nottingham, for example, is exploring whether there is consistency in its evaluation data and how survey results relate to student marks. It is undertaking an equality impact assessment to see whether feedback differs for women and men, for international and UK lecturers, and for junior and senior academics, in order to identify any internal biases in the questions set and the answers given.

Professor Sarah Speight, Associate Pro-Vice Chancellor for Teaching and Learning, added: “Through a baseline of average scores across the institution drawn from our surveys and other data, we can identify individual tutors who may be struggling. For example, they may be new lecturers who need more support, they may be teaching in their second language or delivering a traditionally difficult module. We can then put in place a programme of support for the individual academic, which will then be followed up to see if evaluation data shows an increase in student satisfaction. Equally, if an individual tutor is clear why they are doing so well, we will seek ways to share that practice more widely.”

However, challenges around fostering institutional improvement through data analytics remain, according to Professor Sharon Huttly, Pro Vice-Chancellor (Education) at Lancaster University. “Benchmarking in module evaluation is a particular challenge”, she admitted. “Internally, we can benchmark by comparing and contrasting module data, but I also encourage the longitudinal picture because it is difficult to rely on single-year data. We also need to triangulate and not see module evaluation surveys as a single source of data.”

In summary, and returning to our opening question, many universities did express an underlying commitment to creating a culture of continuous improvement, with an enhanced focus on data analytics and with the objectives of improving teaching and learning and developing both students and staff. Yet it’s clear that traditional barriers to student engagement in surveys still need to be acknowledged and confronted.

For too long, student evaluation data has been underutilised. Universities have tended to focus on improving the process, for example by automating it, rather than on using the data for improvement. There has also been too much focus on the scores themselves and on whether an individual score is better or worse than the average. While this is helpful, it does not build an understanding of the issues and trends among students. Thankfully, the discussion is now becoming a more strategic one.

 

John Atherton is Higher Education Director (UK & Ireland) at Explorance.



