Case study

The American University in Cairo: Improving Teaching Quality with Better Evaluation and Documentation Processes

Institution: The American University in Cairo

Location: Cairo, Egypt

Details: 6,000–7,000 students

Solution: Blue, The People Insight Platform

Challenge:

In 2017, the AUC Provost assembled a task force to appraise the quality of undergraduate education at the university. After the task force put together an expansive set of recommendations and advanced evaluation initiatives, it quickly became clear that AUC’s homegrown course evaluation solution was too rigid and lacked the capabilities to meet the institution’s goals.


Key benefits:

  • Increased response rates by nearly 10 percentage points post-implementation
  • Significantly reduced system maintenance requirements
  • Provided a better view of teaching quality through both student evaluations and peer review
  • Delivered in-depth, timely reports on both qualitative and quantitative data for faculty and administrators

The University offers 40 undergraduate, 50 master’s and two PhD programs rooted in a liberal arts education that encourages students to think critically and find creative solutions to conflicts and challenges facing both the region and the world. It also offers a globally recognized, professional, community-based education in English to meet the needs of the dynamic Egyptian and broader regional economies.

Task Force on Quality of Undergraduate Education

In 2017, the AUC Provost assembled a task force to appraise the quality of undergraduate education at the university. The Task Force on Quality of Undergraduate Education included distinguished faculty and administrators from various disciplines, as well as two students.

The task force had the following mandate: a) to appraise the quality of education at AUC, b) to devise mechanisms that would enhance good teaching practices across departments, and c) to develop a comprehensive, multi-faceted teaching evaluation process.

Results of the study reinforced the notion that students and parents largely attribute the quality of education to the quality of teaching, and that faculty likewise consider quality of teaching one of the most influential factors in the quality of education.

After reviewing their findings, the task force put together an expansive set of recommendations.

For the IT team, this meant implementing better feedback gathering and reporting at several levels, including:

  • Improved end-of-term student evaluations. Questionnaires needed to be enhanced and, if possible, customized with questions specific to each discipline/department.
  • In-depth peer observation and review. They needed to establish a quality peer review process that could include both formative and summative teaching assessments.
  • In-depth reports and summaries. They needed both individual and aggregated reports to be used for assessing faculty tenure and mentorship.
  • A comprehensive evaluation system for faculty, comprising student evaluations, faculty peer reviews, professional development plans, faculty self-evaluations, and faculty portfolios.

“After extensive research, they found that for both students and faculty, quality of teaching was considered one of the most influential factors in quality of education.” – Nouran Maher, Director of Technology Solutions at AUC

Homegrown system too rigid to meet demands

For Ms. Maher and her team, there was a clear imperative to find the right technology to support so many advanced evaluation initiatives. “It quickly became clear our old technology was not going to be able to do it,” she said.

In 2005, AUC had switched their course evaluations from paper-based to an online homegrown solution. “Homegrown solutions have their pros and cons,” she said. “It was robust and it was good, but it had a rigid structure. We were constantly managing security compliance issues, and we could get only basic reports. Aggregated data reports for key decision-making required a lot of manual intervention.”

In 2018, the IT team ran an exhaustive search of what course evaluation systems were available. “We wanted a system that had easier administration. We didn’t want to run unit testing for each part of the system every time a change was requested. We also wanted to improve our response rates and provide students with a better experience. And we wanted a cloud-based system, to relieve us from the daily burden of infrastructure, servers, storage, compliance, and so on,” she said.

Blue Course Evaluations

AUC selected Blue from Explorance, a best-in-class course evaluation platform that integrates with other systems to automate processes and enable greater data analysis across institutions.

They implemented Blue in 2018 and launched the new course evaluation system in 2019. The university uses Blue for both academic and non-academic evaluations.

“The functional owner of the system is our Associate Provost for Transformative Learning. The system is managed internally by IT, by myself and one full-time staff member. We work very closely with the Center of Learning and Teaching (CLT) and the Office of Strategy Management and Institutional Effectiveness (SMIE), especially for the course and teacher evaluations,” she said.

Increasing response rates

AUC keeps the online surveys open for three weeks prior to the last day of classes. During the open period, surveys are available online via email and the learning management system, which students access daily.

With the new system, response rates increased by nearly 10 percentage points, from 41.8% the year prior to 51.4% in 2019. “We attribute this success to several factors,” said Maher. “We send three to four email reminders during the survey period, which is very helpful. We have pop-up reminders from Blackboard. We also reinforce the message that these responses are completely confidential and that the students’ voices really matter. We worked with the student union and teachers to emphasize this message and encourage participation.”

Better quality reporting for key stakeholders

From Blue, the IT team provides summative and formative reports for instructors. The summative reports are produced at the end of every semester, while formative reports are sent out during the semester so that instructors can adjust before the end of term if they notice trends in the feedback. The summative reports include:

  • Overall Instructor Feedback (grouped by faculty)
  • Course Results (by individual course)
  • Departmental Results (for Chairs)
  • School Results (for Deans)
  • AUC Results for all schools (for Provost and senior administrators)
  • TA Results (for Dean of Graduate Studies, Instructors and TAs)

The new reports reflect both quantitative and qualitative data on the instructor and the course. Most importantly, they include a section for students to add their own comments on self-reflection and self-evaluation. “These sections are not scored; however, different offices do use this data to conduct a comparative analysis against the student evaluations,” she said.

Institutional research conducts in-depth analysis

The institutional research arm, the Office of Strategy Management and Institutional Effectiveness (SMIE), analyzes the data further. “In addition to the reports from Blue, we also feed the data into our data warehouse and business intelligence platform. We’re creating in-depth reports such as trend analyses and regression analyses, and building models to understand the different correlations,” she said.

In addition to the aggregated statistical calculations, like the mean, standard deviation, and median, AUC is also capturing qualitative data through open-ended questions and free-form comments. Instructors can view these comments on their individual reports, but the university has also started using this data at an institutional level. “Institutional research is taking the data from the free-form questions and grouping them into themes. They then produce summary reports for leadership highlighting areas of strength and areas that need improvement.”
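As a rough illustration of the kind of aggregation described above, the minimal Python sketch below computes the mean, standard deviation, and median of ratings per course and groups free-form comments into simple keyword themes. The survey data, course codes, and theme keywords are invented for the example; the case study does not describe SMIE’s actual tooling or pipeline, and in practice the theme grouping would be done by analysts or more sophisticated text analysis rather than fixed keyword lists.

```python
# Illustrative sketch only: invented data standing in for a course evaluation extract.
from collections import defaultdict
from statistics import mean, median, stdev

# Hypothetical responses: (course, rating on a 1-5 scale, free-form comment).
responses = [
    ("RHET 1010", 5, "Clear grading criteria and great feedback on drafts"),
    ("RHET 1010", 4, "Heavy workload, but the instructor was supportive"),
    ("RHET 1010", 3, "Feedback on assignments often arrived late"),
    ("CSCE 2301", 5, "Lectures were engaging and well organized"),
    ("CSCE 2301", 2, "Grading felt inconsistent across assignments"),
]

# Hypothetical keyword buckets standing in for the "themes" mentioned above.
themes = {
    "feedback": ["feedback"],
    "workload": ["workload", "heavy"],
    "grading": ["grading", "grade"],
    "engagement": ["engaging", "organized"],
}

ratings_by_course = defaultdict(list)
comments_by_theme = defaultdict(list)

for course, rating, comment in responses:
    ratings_by_course[course].append(rating)
    lowered = comment.lower()
    for theme, keywords in themes.items():
        if any(word in lowered for word in keywords):
            comments_by_theme[theme].append((course, comment))

# Per-course quantitative summary: mean, standard deviation, median.
for course, ratings in ratings_by_course.items():
    spread = stdev(ratings) if len(ratings) > 1 else 0.0
    print(f"{course}: mean={mean(ratings):.2f} sd={spread:.2f} median={median(ratings):.1f}")

# Qualitative comments grouped into themes for a leadership summary.
for theme, items in comments_by_theme.items():
    print(f"{theme}: {len(items)} comment(s)")
```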

Following the task force recommendations, AUC has also experimented with Blue to implement a peer review process for the English Language Institute.

“We have just started using Blue for faculty peer review at the English Language Institute,” she said. “We generate the same reports as for student evaluations, but using input from faculty and peers instead of students.”

They will also work with the academic departments to develop a bank of customized questions for each of them, another of the key recommendations from the task force.

With Blue, AUC is now able to gather meaningful information and share data insights to support the vision of improving teaching quality. “This is really changing the way we approach how we measure the quality of teaching. Our leadership uses all this data to create mentoring programs for the faculty where they can create a better environment for learning in the classroom, and foster creativity and so on,” she concludes. “By having the right processes and the right technology to support them, we are able to execute on this vision for the university.”


