
Best Practices to Enhance Your Learning Measurement Strategy

Written by Steve Lange, Principal Consultant, Explorance.

What’s holding you back?

I recently participated in a half-day workshop on Reinventing Learning Operations, hosted by Cognota.

One of the breakout sessions featured “challenges with measurement.” Here is a summary of what the group stated:

  • Managers can’t answer the question of what they would like to see changed
  • It takes too much time to set up data collection and analysis
  • Soft skills are tough to measure beyond satisfaction surveys
  • No access to metrics
  • There is so much change in the business that there are no performance models to base success on
  • No one seems to be asking for data; people are just happy to get training

I get it. L&D has been scrambling since the pandemic to get training out the door to the masses in the most efficient way possible. Businesses have changed dramatically in the last four years (and continue to), with reorganizations and shifting market conditions. L&D, like other groups, is trying to keep pace, and if no one is asking for things like learning data and metrics, well then, there are other priorities.

However, measurement is part of L&D operations. Measurement is part of business operations. Without performing even the most basic measurement and evaluation activities, there is no evidence that what L&D is putting out into the business is working.

Yes, the business wants “impact” and “ROI” and “lots of evidence.” Okay, you can get to all that, but you must start somewhere. Let’s look at what you need and how to get started. This is not an exhaustive guide or workbook. These are best bets and strong considerations to make and put in place. Crawl, walk, run, as we like to say. Or better yet: Do, Learn, Improve, and Endure.

What do you need?

Why are you doing this? (Why you should do this!)

Take a moment to consider why we measure anything, taking training completely out of the picture. I’m an avid griller and use a pellet smoker for low-temperature, slow cooks over several hours. I’m constantly monitoring the temperature of the grill and whatever I’m cooking. I want to time my cooking to when folks want to eat and, of course, cook to a safe and desired temperature. If my grill’s temperature suddenly falls or rises, I need to adjust it to deliver more or less fuel or airflow. If what I’m cooking is getting done too quickly (or too slowly), I need to either adjust expectations with my audience or, again, adjust the fuel. Of course, the ultimate measure lies with the consumer. Was it done to their liking? How did it taste? Why did they like it or not like it? How could I do better next time?

Measuring training is similar. L&D is “cooking” up various pieces of learning content, hoping it’s “done” to meet expectations, and then serving it up to the masses. Of course, in this case, the consumers are both the learners and the business. If there is no measurement, no alignment on what “good” looks like and no data collected against expectations, then L&D has no evidence that what they are serving is working as intended. There are no metrics to show “a job well done” or if the training is meeting the goals of the business.

Let’s shift from the kitchen and look at some of the important pieces to get started on a measurement journey. The topics are not necessarily in order, and you can of course be working on a few of them in parallel.

Sponsorship Commitment and Advocacy

Whenever embarking on something new and potentially impactful across the organization, having a project sponsor at the leadership level is critical: someone who can lead the charge and be a strong and steady guide, providing stewardship and helping to identify and remove barriers along the way.

A good choice here is a strong communicator, a leader with demonstrated influence in the organization, someone committed to continuous feedback who will listen and provide different points of view when needed. Having good business acumen and being able to speak the language of the business with stakeholders is a plus. Ideally, the sponsor will be someone who continues the relationship with measurement after initial deployment and does not immediately move on to something bright and shiny; hence, a committed advocate.

The sponsor does not necessarily have to be at the highest level, such as a CLO or CHRO. Certainly, you want backing and involvement at the C-suite level; however, the charge is usually led by someone closer to the inner workings of L&D or the talent function.

The What, How, and When

This is often where the first roadblocks occur. Deciding what to measure, determining key metrics, working out how to collect the data, and, most importantly, how to report, analyze, and communicate results can be overwhelming.

What to Measure

The best bet is to start small and focus on what is most important to the business. Find the courses or programs that are both connected to business goals and have a good amount of participation from learners. The idea is to pick a few programs to start, not the entire catalog on your LMS.

Another way to narrow the list is to identify any “friendly” business partners that sponsor training. Friendly meaning L&D has an established, strong, positive relationship: a business group or partner that wants to prove learning effectiveness, identify gaps, and hold people accountable for improvement. It could be a small group willing to “fail and learn fast” and be out in front. Maybe the sales or marketing organization? Perhaps a small team within operations? Any group where communication and trust are strong is a good place to start.

Determining Key Metrics

The first thing to remember is that while all Key Performance Indicators (KPIs) are metrics, not all metrics are KPIs. What this means is you don’t want a list of 30 KPIs. You might have 15 or 20 metrics, and only a handful should be considered “key.” Focus first on the Effectiveness metrics – categories that can answer questions about:

  • Instructors (are they engaging, and do they help translate content to on-the-job application?)
  • Courseware (is the material relevant, up to date, accurate, and engaging?)
  • Knowledge/Skills (is the target audience learning something new?)
  • Job Application (will learners be able to apply what they learned back on the job?)
  • Performance (do learners think their job performance will improve?)
  • Business Results (what types of business outcomes do learners predict will occur from applying the learning back on the job?)

There are other options, of course, such as Net Promoter Score (NPS) and open-ended questions. Be sure to shop the metrics around to get buy-in. Make sure stakeholders understand what they mean, how they are calculated, and the story you can tell with them. There is no sense asking something like NPS if the organization does not put any stock in that type of metric.
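To make “how they are calculated” concrete, here is a minimal sketch of the standard NPS calculation: respondents rate likelihood to recommend on a 0–10 scale, promoters score 9–10, detractors score 0–6, and NPS is the percentage of promoters minus the percentage of detractors. The Python below is purely illustrative, and the ratings are made up.

    def net_promoter_score(ratings):
        """NPS: % of promoters (9-10) minus % of detractors (0-6) on a 0-10 scale."""
        if not ratings:
            raise ValueError("no ratings provided")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    # Hypothetical responses to "How likely are you to recommend this course?"
    ratings = [10, 9, 9, 8, 8, 7, 6, 10, 5, 9]
    print(net_promoter_score(ratings))  # 5 promoters, 2 detractors, 10 responses -> 30.0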

Collecting the data

There are many tools to collect learning effectiveness data. Many organizations use their LMS or choose from the plethora of tools available. Explorance, for example, offers the best-in-class Metrics That Matter tool, with a built-in predictive learning impact methodology, automation features that save hundreds of hours of administration time, and the largest suite of benchmarks in the industry.

When choosing a technology, the goal is to scale it across the organization without adding burdensome time on the administration and data collection side of the equation. If you find yourself spending an over-the-top amount of time on the administration of a tool and much less on data analysis, insights, and communicating results, it is time to consider a new piece of technology. After all, technology is merely a tool to help enable a strategy or process. How you use the tool makes all the difference.

Reporting and communicating results

Collecting and analyzing data is useless if the insights gleaned are not shared. And not just within L&D. Business stakeholders and partners need to know how their investments in talent development are being used and what is working (or not). The business looks to L&D as the experts to provide guidance and recommendations. Having solid data to demonstrate what’s working and where there might be opportunities helps guide the conversation and shows that decisions are driven by data rather than assumptions. Set up a regular cadence of sharing reports and analysis. The cadence will differ based on the different roles in the organization. For example:

  • Instructors want to see how they did immediately after delivery of content.
  • Course owners and designers want to see more details of the data, probably with some aggregation (monthly usually works well).
  • L&D executives and business stakeholders might want something more aggregated at a quarterly level.

Identify the stakeholders that need the data and map out what they need, when they need it, and how they want it communicated. Instructors, on one hand, could get a report sent automatically from a system, whereas execs might want a 30-minute show-and-tell meeting once a quarter.
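A stakeholder map like this can stay lightweight. As a purely illustrative sketch (the roles, cadences, and formats are hypothetical, drawn from the examples above), it might be captured in a simple lookup structure:

    # Hypothetical stakeholder reporting map; adapt the roles, cadences,
    # and formats to your own organization.
    reporting_map = {
        "instructor":        {"cadence": "after each delivery", "format": "automated system report"},
        "course_owner":      {"cadence": "monthly", "format": "detailed, lightly aggregated dashboard"},
        "l_and_d_executive": {"cadence": "quarterly", "format": "aggregated summary"},
        "business_sponsor":  {"cadence": "quarterly", "format": "30-minute show-and-tell meeting"},
    }

    # Print the communication plan for each role.
    for role, plan in reporting_map.items():
        print(f"{role}: {plan['cadence']} -> {plan['format']}")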

Starting is the hardest thing

While it may feel daunting, it can all be accomplished in a short amount of time with concentrated focus and effort. The biggest hurdle, as with most things, is taking that first step. Identify some key metrics. Find that friendly group to work with, one that has a good portfolio of courses. Communicate early and often with stakeholders, including getting their feedback. Be flexible, tweak and adapt the strategy as you grow capabilities, and sooner rather than later others will be knocking on your door.

