Has Australia ‘cracked’ module evaluation?
Written by John Atherton, General Manager, Europe, Explorance.
I have been supporting universities with course and module evaluation for the past eight years. Having joined Explorance 18 months ago, I’ve found that what UK universities really value from us is our insight into how other countries are approaching the issues, challenges and opportunities around capturing student feedback. Working in Australia, Canada, China, Spain, Mexico, the UAE and the USA gives us compelling insight into what ‘good’ student engagement looks like – and what works.
One of the most relevant markets for UK universities to look at is Australia, where we work with 50% of the HE sector, including the University of Melbourne, the University of New South Wales and RMIT University. Australia has 43 universities, eight of which have formed a group, known as the ‘Group of Eight’ or ‘Go8’, in recognition of their status and history. Academic standing, achievements and student entry standards vary across university groups, with the Go8 universities having the highest standing of all.
Australian universities are, by tradition, generally modelled on British universities, and as such Australia looks to the UK for best practice and vice versa. In Australia, module evaluation is typically undertaken towards the end of the semester and is open for a period of up to four weeks. The increasing trend is to combine the unit/module/course evaluation and the teaching staff evaluation in one questionnaire. The general direction of travel is for universities to move from paper or hybrid approaches to fully online evaluation – not least because the sheer size of Australian institutions makes paper-based evaluation less practical. There are, however, several other related drivers, including:
- The ability for all students to participate, not just those who are in class on the day a paper feedback form is handed out.
- Reduced respondent burden by not asking students questions they would reasonably expect the university to already know.
- High rates of mobile phone/device usage at institutions. The convenience of being able to access the evaluation anywhere at any time – particularly when it is embedded in the Learning Management System or another portal students already frequent – is an attractive way to increase engagement.
- The capability to understand response patterns, e.g. the day and time when students are likely to respond, the type of browser and device used, whether the response is received on or off campus, and the way in which this varies by student cohort. Each of these is important for understanding how to engage students in the feedback process and how/when to target follow-up notifications.
- Efficient processing of student feedback for sharing with staff and students, which in turn informs timely improvement initiatives.
So has Australia ‘cracked’ module evaluation? What we have witnessed is an increase in the volume of qualitative feedback received as a result of the move to online evaluation. Students are no longer asked to give their ‘best aspects’ and ‘needs improvement’ feedback in a small text box that can only fit a sentence or so. Universities now have richer data on which to base informed decisions about the action to be taken as a result of the feedback received.
We can also be more confident of producing accurate response rates. So often we have heard the statement ‘we had high response rates when we used paper’, only to find that some institutions also had duplicate responses, meaning their response rate was in excess of 100%. Some universities that want to retain the control they feel they had with paper evaluation continue to conduct the evaluation in class.
Additionally, we can take comfort in knowing that all students enrolled in a course have an equal opportunity to provide their feedback, irrespective of whether they attended the class when the evaluation was announced or whether they have a disability whose barriers online technologies can remove. So we are often reaching a broader range of respondents than before the transition to online evaluation – a huge plus.
Timely processing of results and early intervention – be it by identifying areas of best practice or, where relevant, applying that best practice in courses that are struggling – are vital for recurring engagement in the feedback process. And, let’s face it, in a demand-driven market, understanding the student experience and being nimble in effecting change are critical for maintaining a competitive advantage.
Many Australian universities have been early adopters of our Bluepulse platform because they see value in speeding up the improvement cycle. Teaching staff and planning and quality assurance teams are eager to understand what initiatives have been implemented as a result of student feedback and how effective those initiatives are. As student evaluation is typically administered by a central unit, Bluepulse is welcomed as a way of giving something back to teaching staff: they can ask students the questions they feel are timely and pertinent for improving the student experience and teaching practice.
However, one of the major challenges that remains for Australian universities (as for UK universities) is how to engage students in a feedback process such that they feel they are effecting change. Some universities recognise that end-of-term/semester evaluation is often too late to implement change in the course, or to tell students what action has been taken to improve their current module experience. As a result, some have introduced mid-term/mid-semester evaluations, but students often perceive these as simply another survey, which may add to survey fatigue.
This is where we have witnessed that more flexible and timely interventions such as our Bluepulse platform demonstrate tangible benefit – the student provides regular feedback at a time that suits them and teaching staff implement timely improvements while including the students in this process. By the time the end-of-term/semester evaluation arrives, students are more likely to be engaged with the feedback process and feel more valued as a result of actively participating throughout the semester.
In the next five years, we expect to see more ‘formative evaluation’ being used by academics in Australia. Students are demanding the ability to provide feedback anywhere and at any point in time, not just in a pre-determined window. They expect the results of their feedback and the resulting initiatives to be shared, and institutions will also want to understand how effective the initiatives are.
UK universities need to consider formative evaluation as part of their module evaluation mix. However, what is perhaps most telling from the Australian ‘case study’ is the importance and value of moving from paper-based evaluation to online. Because of their size, Australian universities have been almost forced to make online evaluation work, and there are certainly lessons here that comparable UK universities should take on board.