
10 Module Evaluation Dilemmas: The Key Decisions Facing Universities in 2020

Written by John Atherton, General Manager, Europe, Explorance.

2019 has been a year of strong growth for Explorance in both the number and variety of universities we work with in the UK. We have run a range of events – in person and online – to share our knowledge and influence the way institutions approach module evaluation. As the year draws to a close, here are my reflections on 10 dilemmas facing universities in 2020.

  1. Data preparation: where is it, and who owns it?

    The biggest challenge in a centralised module evaluation process is getting the right data out of university systems. It is even more difficult when you are evaluating individual staff through module evaluation, which is a growing – not declining – practice in the UK. A key question is who owns the data and where it sits; this can become a bottleneck to getting the right information.

  2. Timing: mid-module, end of module, or continuous?

    In recent years, the trend has been to bring module evaluation into the teaching period, which enables institutions to turn feedback around before the module ends. However, mid-module evaluation, carried out before assessment and feedback, is not the perfect position either. Timing remains a major point of deliberation for individual universities.

  3. Level: module or programme, and frequency?

    There is a move towards programme-level module evaluation, and we are currently working with two institutions on this shift, which sees one survey per programme at the end of term. There is a related question of frequency: should every module in every term be surveyed? Changing the cycle can bring down the number of surveys, but it risks some students never being asked for feedback while others are surveyed repeatedly.

  4. Communication: what influences open rates?

    With online surveys, response rates depend heavily on the technology used to reach students. Some universities – Virginia Commonwealth University and the University of Toronto, for example – have tested email subject lines and studied how students scan and read emails. Getting the communication right, including clear calls to action, improves open rates.

  5. Confidentiality: who sees what?

    There is a lot of debate about whether making module evaluation responses anonymous has an impact on response rates. It does, of course, whether surveys are run face-to-face or online. Universities may, however, still wish to analyse responses by particular demographics in ways that do not breach anonymity.

  6. Questions: NSS/Likert v open?

    Most universities start with National Student Survey (NSS) questions for their module evaluation surveys, but it is common to allow personalisation at module level: typically the module leader adds questions from a question bank, or writes their own. At Liverpool John Moores University, for example, 40% make use of personalisation when implementing surveys. Open questions also provide valuable insight.

  7. Mobile: access and accessibility?

    We know that in-class surveys are stronger in terms of uptake and quality of feedback. We also know that the ability to use mobile is a key factor. Universities which introduce a ‘My Surveys’ app, or similar, to allow immediate access tend to benefit from greater engagement.

  8. Preparing students: do they know how to give constructive feedback?

    McGill University has developed a really strong approach to preparing students to give feedback: when, how and how often. It makes visible what is happening and how feedback is used, demonstrating the student voice in a systematic way. Incentivising students to take part tends to have short-lived effects; closing the loop and demonstrating transparency is more effective.

  9. Reporting: different levels, different needs?

    At a top level, module evaluation tends to link to a range of external surveys such as the NSS, the Postgraduate Taught Experience Survey, the Postgraduate Research Experience Survey and the UK Engagement Survey. Previously there were lots of different approaches; now practice is more consistent. However, the HE sector does little qualitative analysis: it is sitting on a huge amount of data, and there is an opportunity to do far more with reporting.

  10. Module evaluation: why should we do it?

    Recognition of why module evaluation matters for institutional improvement – with links to the NSS and TEF – is now widespread. Understanding teaching and learning performance requires many factors to be considered, not just one, but module evaluation is now seen as core by most institutions. Practice is mixed, but the will is there.

John Atherton is the General Manager, Europe at Explorance.


