
Analyzing and Using Feedback from Multi-Session Events

Written by Rachel Finney, Customer Success Manager – Enterprise | Explorance.

Multi-session events give organizations the opportunity to deliver a wide range of training modalities that help participants work toward a specific goal or objective. When evaluating multi-session events, it is always important to keep the end in mind. What is the end goal? Who are the key stakeholders who need access to this information? And most importantly, what do we do with this information once we have it?

Tools for Analysis

The hard part is over: you’ve collected the responses you need from participants who attended the program at hand. Metrics That Matter has several key tools that let you sift through the data collected to identify areas of success as well as opportunities for improvement.

Premium dashboards let you view your data at a high level, focusing on metrics such as how the program was received overall and how sessions perform against each other. When you use standard Metrics That Matter Smartsheets™, you can also compare your program against other learning organizations’ programs for an industry-standard perspective.

Data explorer is an ad-hoc analysis tool that provides a deeper level of evaluation. For example, you can compare each session against the industry-standard benchmark for key metrics such as Scrap Learning or Net Promoter Score. The same comparison can be made for other data sets, such as demographics or session instructors, which adds another level of clarity from a quantitative perspective.
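To make that comparison concrete, here is a minimal, generic sketch in Python of how per-session Net Promoter Score could be calculated from raw 0–10 recommendation ratings and set against a benchmark. This is not Metrics That Matter’s API; the column names, sample scores, and benchmark value are all invented for illustration.

```python
import pandas as pd

# Hypothetical raw responses: one row per learner per session, with a
# 0-10 "How likely are you to recommend this session?" rating.
responses = pd.DataFrame({
    "session": ["Webinar 1", "Webinar 1", "E-learning", "E-learning", "Focus group"],
    "recommend_score": [9, 7, 6, 10, 9],
})

# Placeholder benchmark value, not a real industry figure.
ASSUMED_BENCHMARK_NPS = 30

def nps(scores: pd.Series) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = (scores >= 9).mean() * 100
    detractors = (scores <= 6).mean() * 100
    return promoters - detractors

# Score each session, then show how far it sits from the assumed benchmark.
per_session = responses.groupby("session")["recommend_score"].apply(nps)
comparison = per_session.to_frame("nps")
comparison["vs_benchmark"] = comparison["nps"] - ASSUMED_BENCHMARK_NPS
print(comparison)
```

The same pattern extends to any grouping, such as demographics or instructors: group the responses, compute the metric, and compare against the benchmark of your choice.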

Ready reports, such as the Learner Comment Analysis, break down the quantitative and qualitative elements of feedback to pinpoint particular areas of impact such as Content Engagement, Instructor Knowledge, Delivery Effectiveness, and Job Application.

By using these tools together, you will have every lens necessary to develop a complete and coherent story of learner feedback.

Breaking Down the Parts

In many multi-session events, the parts are just as vital as the whole.

What does this mean?

Let’s say there is a two-week leadership training program in which learners attend live webinars, complete e-learning modules, and end each week with a focus group. If we were to focus only on how the training program was received overall, we would miss out on key areas of impact. Different learning modalities are often necessary to provide a holistic experience.

Some learners may find a particular delivery style to be challenging or underwhelming.

They may find in-person or virtual focus groups highly impactful and engaging, while finding e-learning modules painfully dull and a waste of time. Viewed at the overall program level, this learner could appear neither satisfied nor dissatisfied with the training. Break down the parts, however, and you have a story that conveys the strengths of instructor-led, or virtual instructor-led, delivery methods while also highlighting the opportunity to increase engagement in independent learning.
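As a rough illustration of why the breakdown matters, here is a small sketch with invented numbers: averaged across the whole program, this learner’s ratings look neutral, but grouping the same ratings by modality reveals the real story.

```python
import pandas as pd

# Invented ratings from a single learner, on a 1-5 satisfaction scale.
feedback = pd.DataFrame({
    "modality": ["Live webinar", "Live webinar", "E-learning", "E-learning", "Focus group"],
    "rating": [4, 5, 1, 2, 5],
})

overall = feedback["rating"].mean()                      # 3.4 -- reads as "neutral"
by_modality = feedback.groupby("modality")["rating"].mean()

print(f"Overall average: {overall:.1f}")
print(by_modality)  # E-learning 1.5, Focus group 5.0, Live webinar 4.5
```

Either view is trivial to produce; the point is that only the per-modality view turns a flat overall score into an actionable finding.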

Putting the Pieces Together and Communicating Results

We’ve talked about the power of the parts, but how do we communicate the effectiveness of the event as a whole?

After identifying several areas of impact for the various modalities and separate events within the training experience, consider what overarching themes are beginning to emerge.

Was the program an overall success? At the end of the day, would participants recommend others attend this training? Did the right people attend the right training at the right time?

These are some of the questions stakeholders will inevitably pose. Being able to answer them with specific, key evidence to back up your findings saves stakeholders the hassle of digging and lets them focus on what’s next.

Now What? Encouraging Continuous Improvement

Speaking of what’s next, what do we do with all of this data we’ve collected and shared? Be ready to propose specific recommendations or next steps based on the overall multi-event analysis. Concrete, actionable items take the guesswork out for involved stakeholders, such as curriculum developers or instructor managers, who may otherwise not know what to do with the information provided.

If appropriate, follow up a few weeks or months after the initial discussion to check in on how the data is being used and what steps have been taken. Use these check-ins as an opportunity to build a process of continuous improvement and to share success stories as you collect them.

At the end of the day, learning and development will always be a vital pulse point for any organization, and promoting a consistent, credible process of gathering feedback and sharing its impact opens up a wealth of opportunities for future training endeavors. This process allows learners to feel heard and encourages open feedback, gives stakeholders confidence in providing valuable training opportunities, and builds a strong framework for analytics and development for years to come.

 

Get the Multi-Session Learning Experience Measurement brochure!


Employee insight solutions | L&D effectiveness | Metrics That Matter

