Capturing the Student Voice at the University of Bristol: Pro Vice-Chancellor Perspectives
Written by Explorance.
In the second of a regular series of interviews with Pro Vice-Chancellors (PVC) at Explorance Blue customer universities, we met leaders at the University of Bristol. In this article, PVC Education Professor Tansy Jessop and Associate PVC (Learning and Teaching) Dr. Mark Allinson discuss their institution’s approach to capturing the student voice.
On institutional strategy for capturing the student voice
Explorance: The student voice underpins good teaching. A lot of literature suggests that mid-unit feedback and closing the feedback loop are ways of activating transformation, enhancement, and change. What is the University of Bristol’s current approach to capturing the student voice?
Tansy: We sought a consistent method for capturing unit/module reviews across the University, both midway through units and at the end of them, hence the work with Explorance Blue. Feedback is a critical aspect of enhancing education, particularly at mid-unit. Students can see the benefit of changes that you are making to support their cohort.
While the NSS and its Assessment and Feedback and Student Voice questions are important, we have been developing our practice in this area for over five years, following a QAA review that highlighted the need for unit evaluation. Mark and the team have driven this project, advising on what we need and leading both the design of the questions we ask and the university-wide rollout.
Mark: As part of our strategy, we were looking for a consistent means of eliciting and monitoring unit-level feedback and responses at an institutional level. With Tansy, we have a firm strategic lead and the underpinning theory for why we are taking this approach. A key driver is our new Bristol Futures Curriculum Framework. We require good insight into what students think and say about our provision to inform that framework, and for students to reflect on their learning as part of our overall approach to teaching and learning.
The place of the student voice in unit evaluation is absolutely pivotal. We have consolidated our setup in mid-unit and end-of-unit evaluations and have a range of questions developed by a working group consisting of Faculty/School education directors, Heads of Schools, and students. We had intense debate on those questions and discussed what people wanted from them. It was about finding a middle path through all of that.
On staff engagement…
Explorance: What methods have you used to get faculty engaged in the evaluation process at the University? What were some of their initial concerns, and how did you overcome that barrier?
Mark: We have an institutional policy on unit evaluations, mostly in terms of using Explorance Blue. Everyone does evaluations, and closing the feedback loop is designed to provide transparency, over time, in terms of actions taken.
Previously, we had a diverse approach to collecting student feedback. Some schools were paper-based, and others had their own electronic system. Initially, there were some objections to the idea of evaluation uniformity across schools. In addition to the working group mentioned above, we have had a variety of user groups and joint forums with staff and students, so their voice is being heard in all phases of operation. Whilst we are at an early stage across the institution (2022-23 being the second year), staff appreciate the Explorance Blue system’s flexibility on evaluations for different types of teaching.
The number of staff closing the feedback loop has also grown in the second year of using Explorance Blue. Not everybody does, but the majority of staff do, and students are seeing the benefit. We want to bring as many people as possible with us.
Tansy: In general, staff see the benefit of having the choice of question sets, both as an early warning system for the NSS and as a developmental activity. Our first run of Explorance Blue was wholly about NSS scores, and now it is more holistic: essentially, how students are getting on with a unit. The questions we have at mid-unit are a useful reference for staff on where they are, and for students themselves, they are a chance to take stock of their progress.
There was some initial mistrust of what the unit/module reviews were for. Some felt it was a way of monitoring staff performance, but others in the working group saw it as an opportunity to support the development of individual members of staff. Preparing staff in advance for how to respond to the feedback they get, and helping them be ready to hear feedback they may not like, is part of the development journey. Staff were worried about potentially abusive feedback, and whilst we stress the importance of constructive and respectful feedback, giving them strategies to respond to different types of feedback is an ongoing activity. It is also about how we build a culture where students give feedback, and everyone is familiar with the concept.
On student engagement…
Explorance: The importance of the student voice has increased over the years, but the pandemic inspired Higher Education institutions to listen harder and more frequently. What were some of your strategies to get and keep students engaged in feedback?
Tansy: Student engagement is well embedded in how we work. During the digital pivot, pulse surveys were an important development, as we had not used them before. They told us what went well and what did not work for our students, and they have informed how we work with the student voice and with students directly.
Our approach is to ‘Stop. Start. Go.’, where lecturers take 15 minutes in class to hear students’ thoughts and have these presented on post-it notes. They then read, summarize, and give feedback on these items in the next class. This has worked well. The work we did on the mid-unit question set was particularly important. Not being afraid to move from deploying objective, neutral questions to ones that spark emotions is a good move, because education is about head and heart.
Mark: Even before the procurement process (which led to the appointment of Explorance Blue), we ran a hackathon where students were asked to design a unit evaluation tool – the concept, not the technical detail behind it. What they came up with was remarkably like what we now have with Explorance Blue: one place to go to, with all unit evaluations together and available at the same time. We are still rolling it out and driving participation in mid-unit evaluation through the Virtual Learning Environment (VLE).
Sending automated emails supports awareness, but we also know that few students act on these. Getting staff to dedicate time in class for the evaluations makes a huge difference in engagement. We also want to take these principles beyond evaluations and bring the student voice into the pre-planning stage of program design.
On increasing response rates…
Explorance: One of the biggest challenges to online evaluations is low response rates when compared to paper-based evaluations. What has the University’s experience with response rates been since implementing Explorance Blue?
Mark: In-class feedback has definitely supported response rates. For one of my classes, for example, we increased response rates from 7% to 78% by allowing 10 minutes after class for students to complete them. This supports a wider move we are taking as a university to reduce the amount of content delivered in class and have more dialogic experiences to reflect on learning.
Like much of the sector, we are also finding that students have a different approach to in-person attendance, and that is driving some of our approach to how we use in-person classes. This has an implication for unit evaluations too. If students are not in class, we may not hear from them. We are also excited because we are hearing from students who do not usually engage with evaluations. Feedback needs to be representative in order for it to be truly reflective of a cohort. So, in terms of participation rates, we are looking at both quantitative and qualitative measures.
Tansy: We had an initial target of 30% response rates, and we are generally there (32% for the last mid-unit evaluations), but we will continually review those targets and strive for improvement. Ultimately, we want to better understand how things are working for students, create a culture around listening and acting on the student voice, and close the feedback loop.
I love what Mark and the team have done with institutional reporting and feedback, the conversations in class in particular. I rarely hear about survey fatigue from students, though it is something that is raised by staff from time to time. My advice to other universities is to focus on the question sets, look at how much we are asking students and when, and find that equilibrium.
On institutional impact…
Explorance: Where is the University today in terms of its evaluation process, and where would it like to be in the next 3-5 years?
Tansy: The impact to date has been threefold: commissioning a new unit evaluation system, the process of developing question sets, and broad acceptance by staff and students that has led to increased response rates and a better approach to closing the feedback loop.
In three to five years’ time, we would like this to be evidenced in our NSS scores and through response rates. But ultimately, we want it to show that we are teaching better across the University. Institution-wide, we hope to establish a sense of partnership, with more value being placed on the student voice and an associated culture of acting on feedback. We are well on the way, and that is very exciting.
Mark: I want us to be in a place where this approach is so well embedded, we do not think about it anymore. That we are using data intelligently and applying it in a more nuanced way to improve our teaching. And that the provision of timely and focused student feedback is seen as a good news story for the University, which it undoubtedly is. Our experience with Explorance Blue has been very good overall, and we want to grow this offer further.
Blue • Course evaluations • Higher education • Student experience • Student feedback • Teaching effectiveness