
Data preparation: how can we make module evaluation data work for you?

Written by John Atherton, General Manager, Europe, Explorance.

Having worked with hundreds of universities globally, we have found that a recurring obstacle to fully automating the module evaluation process is data preparation. When the data behind a module evaluation project is not reliable, there are two outcomes: the automation loop breaks, and the risk of error in the evaluation process increases. No one wants to make decisions with ‘dirty data’ or have a report end up in the wrong hands.

So how can we make your module evaluation data work for you, and ultimately for your university? This is something we examined in our webinar on 1st April. We explored a series of topics related to data preparation and our Blue system: integrating with data sources such as a Student Information System; verifying module evaluation data; adding and removing teachers; splitting and merging modules; and tracking and managing the process. We also made the wider link to our mission of helping HEIs build a culture of continuous improvement through best-practice approaches to module evaluation feedback.

Here are my three key takeaways from the webinar:

  1. Time-savings through data automation
    Universities, in many cases, do not have information on their existing systems that we would deem as fit for purpose in order for them to make the most of Blue. Our Data Integrity Gateway (DIG) solution, which fine tunes existing university data sets, provides a genuine solution. Using DIG is the natural step towards automating the process. It also maintains integrity, expands data and ultimately enriches analytics. Without DIG, we find that module evaluation administrators are often spending weeks personally manually validating data before launching surveys. This cumbersome and error-prone operation requires chasing lots of departmental staff via email to verify data in spreadsheets in a timely manner.
  2. Decentralising overcomes ‘pain points’
    Rather than trying to do everything centrally, a more effective approach to data preparation is to ask schools and departments to validate data prior to evaluation. Based on our experience the most common challenge in preparing data is staff allocation because alignment to who is teaching the module is changeable and volatile. DIG enables individual schools to record who is teaching a module in a particular seminar. As the biggest pain point for institutions that we are working with this is really important. Durham and Loughborough universities, for example, are using DIG to tackle the issue of having the right teachers assigned to the right courses. Variable evaluation dates are another common challenge.
  3. Opportunities for even greater analysis
    Our systems give institutions the platform to analysis and compare module evaluation feedback from the full range of student demographics. For example, local v international, undergraduate v postgraduate and full-time v part-time, are all brought into university reporting. There is also the ability to ask particular questions of different groups of students, for example mature students just returning from a gap year. DIG is your opportunity to drive the questions, and compare responses across schools, departments and institution-wide. This point is also relevant to our next webinar on 8th April: “Module Evaluation Analytics: Delivering the right reports for the right people at the right time”.



 

