
Utilising Module Evaluation Surveys for Institutional Improvement

Written by Dr Tim Linsey.

Across the higher education sector, approaches to student engagement are multi-dimensional: partnership working, representation on committees and other formal groups, and co-creation, where students work with staff to create learning and research materials.

An example of how this close, creative relationship with students can work in practice is Kingston University’s Student Academic Development Research Associate Scheme (SADRAS). SADRAS involves students and staff working together to design and carry out small research projects specifically focused on enhancing the academic experience of under-represented groups of students at the University.

These projects sometimes evolve from other types of student engagement activity, such as staff/student consultative committees or course evaluation. The use of student feedback is part of an institution’s quality assurance and quality enhancement processes. A key area of discussion is how this feedback is used to support course teams through a process of continual improvement: delivering effective course design and enhancing students’ experience of learning, teaching and assessment.

Module Evaluation Questionnaires (MEQs) are an important component of an approach designed to be consistent, to ensure that development and decision-making are guided by evidence, and that good practice is recognised and shared. We use 12 standard questions, including two text response questions, University-wide to ensure consistency for students across multiple modules and to track change over time. Blue, Explorance’s course evaluation platform, is used as the MEQ environment.

Students have flexible access to the surveys and can complete them in their own time using their personal devices (47% of our 2018-19 surveys were completed on mobile devices). Within hours of a survey closing, reports summarising the feedback are automatically generated and published in our Virtual Learning Environment (VLE), with tailored versions of the report available for both staff and students. The scheme is also supported by trained course representatives, and an explanatory video has been created by our Students’ Union.

Timing is crucial to our process. MEQ outcome reports are published to students and staff while the module is still running, which allows the outcomes to be considered in class. This discussion adds value and generates further insight. The automatic scheduling and publishing of surveys to students by email and through our VLE helps to facilitate this process.

We also recognise that making high-level decisions on the outcomes of a single MEQ, or a small number of MEQs, would be problematic, particularly in the absence of ‘local’ module-level knowledge. We therefore ensure our decision-making is informed by multiple sources of evidence and metrics, including MEQ outcomes over time, module information such as progression rates, and additional sources such as external examiner reports.

We also take care with how we present module data in summary reports. For instance, comparisons and rankings provided to senior University committees compare like for like modules (e.g. by level and Faculty) and include an additional indicator of statistical significance based on the response rate and sample size.
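To illustrate how such an indicator might work (the specific calculation we use is not described here, so the method and the 10% threshold below are assumptions for the sketch), a robustness flag can be derived from the number of responses and the cohort size using a margin of error with a finite population correction:

```python
import math

def moe_indicator(responses: int, enrolled: int,
                  z: float = 1.96, threshold: float = 0.10) -> dict:
    """Flag whether a module's MEQ results are statistically robust.

    Computes the worst-case margin of error for a proportion at 95%
    confidence, with a finite population correction (FPC) because
    respondents are drawn from a small, known module cohort.
    """
    if responses == 0 or enrolled == 0:
        return {"response_rate": 0.0, "margin_of_error": None, "robust": False}
    p = 0.5  # worst-case variance for a proportion
    moe = z * math.sqrt(p * (1 - p) / responses)
    if enrolled > 1:
        # FPC shrinks the error when most of the cohort has responded
        moe *= math.sqrt((enrolled - responses) / (enrolled - 1))
    return {
        "response_rate": responses / enrolled,
        "margin_of_error": round(moe, 3),
        "robust": moe <= threshold,
    }

# e.g. 28 responses from a module cohort of 40
print(moe_indicator(28, 40))
# {'response_rate': 0.7, 'margin_of_error': 0.103, 'robust': False}
```

An indicator along these lines makes clear why the same mean score should be read differently in a cohort of 15 than in a cohort of 200.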

To ensure that student feedback is effectively embedded in our quality assurance and enhancement processes, the MEQ data, along with other institutional surveys such as our Level 5 University student survey and the National Student Survey (NSS), is also automatically published in University dashboards which are available to all staff. The MEQ dashboard allows members of staff to view the quantitative data at both module and aggregate levels and to compare it with previous years’ data.
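As a hypothetical sketch of the kind of aggregation such a dashboard performs (the data layout and module codes are invented for illustration), assume MEQ responses are stored as one row per scaled-question answer:

```python
import pandas as pd

# Invented MEQ records: one row per student response to a scaled question
meq = pd.DataFrame({
    "year":     ["2017-18", "2017-18", "2018-19", "2018-19", "2018-19"],
    "module":   ["CS1001",  "CS1001",  "CS1001",  "CS1001",  "CS2002"],
    "question": ["Q1",      "Q1",      "Q1",      "Q1",      "Q1"],
    "score":    [4, 5, 3, 4, 5],  # e.g. a 1-5 agreement scale
})

# Module-level view: mean score per question, with years side by side
module_view = (meq.groupby(["module", "question", "year"])["score"]
                  .mean()
                  .unstack("year"))

# Aggregate view across all modules, for year-on-year comparison
aggregate_view = meq.groupby(["year", "question"])["score"].mean()

print(module_view)
print(aggregate_view)
```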

Additional dashboards summarise other key module and course metrics, such as progression rates (particularly at the first attempt), completion rates and value-added scores. This ready access to metrics and student feedback significantly supports the reflection, planning and decision-making carried out by module and course teams and at a strategic level within the University.

This process is further enhanced through the automatic pre-population of data in the academic monitoring and enhancement process plans completed at module and course level. These plans are automatically populated with MEQ quantitative responses and progression data, for example. The module leader then completes the plan by reflecting on the module’s performance and identifying actions for the next year, informed by this data and other sources of evidence.
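To make this concrete, here is a minimal sketch of pre-populating a module-level plan (the data sources, field names and template are invented for illustration, not our actual schema):

```python
# Invented summary data that an institutional pipeline might supply
meq_summary = {"CS1001": {"mean_score": 4.1, "response_rate": 0.70}}
progression = {"CS1001": {"first_attempt_pass_rate": 0.86}}

PLAN_TEMPLATE = """Module: {module}
MEQ mean score: {mean_score:.1f} (response rate {response_rate:.0%})
First-attempt progression: {first_attempt_pass_rate:.0%}
Module leader reflection: [to be completed]
Actions for next year: [to be completed]
"""

def prepopulate_plan(module: str) -> str:
    """Fill the plan template with the evidence the module leader will reflect on."""
    return PLAN_TEMPLATE.format(module=module,
                                **meq_summary[module],
                                **progression[module])

print(prepopulate_plan("CS1001"))
```

The module leader’s work then starts from the evidence rather than from a blank form, which is the point of the pre-population.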

Our course enhancement plan is also pre-populated with NSS data, Kingston Student Survey data and other metrics. Our course leaders then develop action plans citing the evidence used, including the individual module plans. In this way we use student feedback and other key metrics to address student retention and to ensure all our students achieve their full potential.

Dr Tim Linsey is Head of Academic Systems & Evaluation in Kingston University’s Directorate for Student Achievement. Dr Linsey is speaking at the Universities UK conference on Improving Student Retention on 13th June.


