At Liverpool John Moores University (LJMU), module evaluation is a key survey for quality assurance and enhancement. It is also one of the most challenging surveys to run, owing to its scale and the complexity of its implementation: thousands of modules must be evaluated, including those with non-standard delivery times. Students evaluate several modules simultaneously, and the repeated use of a single instrument across numerous modules often leads to ‘survey fatigue’.
LJMU has a long history of centralised module evaluation. The first university-wide module evaluation survey was introduced here in 1999, initially as a paper-based questionnaire; from 2002 it was administered in electronic form. Anecdotal evidence suggests that the University was one of the first institutions in the UK to introduce a standardised questionnaire for module evaluation. It became evident, however, that the central institutional survey was perceived by academics as a top-down, management-driven approach that did not necessarily meet the needs of specific subject areas, or staff’s own needs as reflective practitioners. A low sense of ownership was one of the main contributory factors to limited staff engagement and, as a result, low student engagement with the survey.
For a long time, leaders of multi-programme modules had wanted to see how students from different programmes responded to the module, as combined scores and comments made it difficult to address programme-specific feedback. Furthermore, modules with non-standard delivery patterns (e.g. those running over summer or with multiple start dates) were difficult to capture. With these and other requirements in mind, LJMU sought a partner that could provide a versatile platform and address institutional needs going forward.
Following an institutional pilot of two survey platforms in 2014-15 and an extensive evaluation of both, we decided to adopt Blue as our institutional platform for module evaluation. Blue achieves the right balance between central administrative control and delegated responsibility to devise module-specific questions, delivering both institutional oversight and academic ownership. For staff, it provides the opportunity to reflect on their teaching practice based on more specific feedback; and for students, it leads to a more tailored learning experience.
Blue’s flexible interface and integration with the Virtual Learning Environment (VLE) – initially Blackboard and now Canvas – resulted in greater engagement in the process by academics. The availability of reports via the VLE, along with their quality and professional appearance, was also well received by staff. We therefore moved from piloting Blue to administering it institution-wide, with approximately 80,000 evaluations across 2,000 modules. Blue’s response rate dashboard delivers visibility into module evaluations that the University did not have before, and staff can monitor their own module response rates in real time. Additional reporting features, such as the ability to collate School-level summary reports and break data down by demographic variables, are widely utilised across the institution.
One of Blue’s most attractive features is the option to add module-specific questions – Question Personalisation (QP) – offering the opportunity to explore different evaluation perspectives and giving academics more ownership of the process. In 2015, the ability to add questions via QP became available to all module leaders. Initially, colleagues could add up to two additional questions: two scale-based, or one scale-based and one free-text question. The additional questions were written by academic staff themselves or taken from the question banks. In 2018-19, the number of additional questions was extended to five. The request to increase the number of optional, self-written questions was driven by module leaders on courses accredited by professional bodies, such as law and accounting, to cover, for example, the evaluation of required competences and experience, and standards of professional ethics.
Results for QP questions are available in a separate module evaluation report. They are not part of official institutional quality assurance reporting and are intended for enhancement purposes only; however, if desired, the results of the module-specific questions can be used in continuous monitoring and enhancement report narratives, module reviews, or programme validation. Since the introduction of QP, between a quarter and a third of LJMU’s module leaders have engaged with the process each year. The level of engagement varies across Faculties: in the Faculty of Science, for example, up to 40% of module leaders take the opportunity to customise their questionnaires.
Since 2018, module-level reports (scores only) have been shared with students via Canvas to further enhance student engagement with module evaluation and close the feedback loop. We are currently in the process of implementing a student-led question bank. We were delighted that LJMU’s project proposal was successful in the Explorance Faculty Research Grant call, which aims to support research projects using course evaluation data to enhance instructional design for teaching and learning.
In summary, LJMU has been using Blue for six years now, and as a result we have been able to advance the institutional module evaluation process and its outcomes. The level of detail enabled by Blue’s reporting allows staff to identify and address the needs of specific groups of students (e.g. those from a ‘minority’ programme). Closing the feedback loop and enhancing student engagement have also reached a new level. The high quality of the information presented in the reports, and its usability, are acknowledged both formally and informally by LJMU academics and senior management.
Dr. Elena Zaitseva is an Academic and Research Development Officer at the Teaching and Learning Academy at Liverpool John Moores University. She is a recipient of the 2020 Explorance Faculty Research Grant.