
Proving the ROI of Learning at a Global Accounting Firm

Written by Bob Wagner, Learning Program Leader at Crowe LLP.

All of us in the learning & development (L&D) field are constantly innovating ways to measure and demonstrate the ROI of learning at our organizations. At Crowe LLP (a global public accounting, consulting, and technology firm with 4,000+ employees), our L&D team recently shifted from tracking volume metrics (e.g., learner satisfaction, completion rates) to also measuring actual behavior change on the job. In particular, we piloted a project with the vendor Metrics That Matter (MTM) to enhance how we track and prove the ROI of learning.

As a Udemy for Business customer, I had the opportunity to share some of our lessons learned from this recent pilot project with MTM in a webinar. Watch the on-demand “Proving the ROI of Learning” webinar with Crowe LLP here. Here are highlights from the webinar and my responses to some of the great follow-up questions.

Redefining our learning & development strategy

We reinvented our learning & development strategy at Crowe LLP by migrating from primarily instructor-led training to a much more blended approach that included webcasts and online learning through Udemy for Business. Leveraging technology was an important pillar in implementing the new strategy; Udemy for Business, a provider of curated, quality content that can be accessed anytime and anywhere, was a key part of it. To demonstrate the effectiveness of this new blended learning approach to our stakeholders, we embarked on a more robust learning metrics process to determine our ROI.

Measuring behavior change on the job

Previously, we tracked completion rates and learner satisfaction (mostly Kirkpatrick Model Level 1), but we wanted to enhance how we tracked the ROI of our training by measuring actual behavior change on the job (Kirkpatrick Model Level 3). We worked with Metrics That Matter (MTM), leveraging their tool and process to help us gauge whether behavior change had occurred over a six-month pilot covering selected learning programs.

In particular, for the MTM pilot, we asked learners (and their managers) 60–90 days after the training: Are you/they doing anything differently on the job? If so, what percentage of this change can you attribute to the learning? The MTM tool automated much of the process for us. All of the calculations were built into the tool on the back end, and the survey was conducted electronically with automated reminders. This took low-value tasks like manual data input off our plate and enabled the L&D team to spend more time on higher-value work, such as in-depth analysis and more data-driven conversations with key stakeholders.

Using the Phillips ROI methodology

The MTM reporting dashboard also presented the data in digestible graphics, using metrics that were meaningful to our stakeholders. For example, at the 60-day mark, we found that 84% of the surveyed learners in the pilot had applied the new knowledge on the job. MTM also benchmarked our data against an industry average drawn from their large dataset, highlighting that we were outperforming our peers in many areas. We also looked at any learning that was not being applied on the job, known as “scrap learning,” so we could determine whether any elements of our training required a deeper analysis to either rework or remove from the curriculum.

Through the MTM pilot, we also used the Phillips ROI methodology, entering our average salary and the average amount spent on learning to arrive at an ROI figure. We were encouraged when the pilot results showed that for every dollar spent on learning, we received $6.00 back. This was much higher than the MTM industry-average benchmark of $3.50.
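
For readers unfamiliar with the Phillips ROI methodology, a “dollars back per dollar spent” figure corresponds to the benefit-cost ratio, while ROI is expressed as net benefits over costs. The short Python sketch below walks through that arithmetic with purely hypothetical numbers; it is not the MTM calculation, just an illustration of how the two figures relate.

    # Illustrative only: hypothetical numbers, not Crowe's actual pilot data.
    monetized_benefit = 3_000   # estimated dollar benefit of the training per learner
    program_cost = 500          # fully loaded training cost per learner

    benefit_cost_ratio = monetized_benefit / program_cost                   # dollars back per dollar spent
    roi_percent = (monetized_benefit - program_cost) / program_cost * 100   # Phillips ROI formula

    print(f"${benefit_cost_ratio:.2f} returned per $1 spent")  # $6.00
    print(f"Phillips ROI: {roi_percent:.0f}%")                 # 500%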

When we shared our pilot project on the Udemy for Business webinar, attendees had some great questions which we didn’t have time to address during the webinar. I have provided my responses in this guest blog.

  1. How long did it take you to work with the MTM team to set up your training evaluation strategy? What investment was made for this pilot?

    We first defined our learning & development metrics strategy before embarking on our pilot with Metrics That Matter (MTM). Then we measured the progress of that strategy by leveraging MTM’s technology to gather and report the data. Keep in mind that we have been developing our metrics strategy for the past three years; as a result, the MTM evaluations and processes made it simple to determine our approach during the pilot. Our investment was specific to the parameters we negotiated with MTM. I recommend having a clear strategy and destination in mind before making a significant investment in a super-tool like MTM.

  2. What are the right survey questions to ask to get at that behavior change?

    We cannot reveal the specific questions we used because those are proprietary to the MTM vendor. Generally, though, questions about changes in productivity, customer satisfaction, and sales attributable to training are the types of questions that show evidence of behavior change. However, survey questions alone are not enough. A Kirkpatrick Level 3 process demands feedback from both learners (“what are you doing differently?”) and their managers (“my employee is doing…”). I also recommend layering that feedback with more tangible information like speed-to-market, promotions, and client satisfaction scores. With all of this data, you can then start to get at behavior change on the job. Also, the Center for Talent Reporting (CTR) gives you access to its library of ROI metrics and accompanying survey questions (literally thousands!).

  3. Were you seeking any Level 3 feedback from learners’ managers before? How did you get them to take the time? Did they see it as duplicative to other talent/performance expectations?

    In the past, we only sought Kirkpatrick Level 3 behavior change feedback from managers on a very limited basis and not in any systematic, consistent way. The new survey was not seen as duplicative by managers, but sentiment about the time it took to complete varied by individual: those who were interested in providing feedback did so, while those who were not tended to ignore the Level 3 requests. As mentioned in the webinar, one of our lessons learned was that we didn’t win over manager support early in the process. Launching the pilot with a change management plan to communicate our new metrics system would have helped build manager buy-in. In retrospect, we felt we could have done more to drive Level 3 feedback from managers.

  4. Does Metrics That Matter take into account the confidence level of survey respondents?

    Yes, the MTM methodology does factor in survey bias, based on millions of data points gathered over many years.

  5. Can you explain what you mean by “scrap learning” and how you would go about measuring it?

    “Scrap” is the percentage of learning that is not applied back on the job. The questions that lead to this measure are about applicability. You can ask learners to rate “the content is applicable to __% of my job,” or to rate on a scale of 1-5 “I will apply what I learned back on the job.” You can then follow up later and ask learners whether or not they applied the learning to their jobs. For example, if learners report that 80% of the content applies to their jobs, the remaining 20% is scrap. These responses are self-reported due to the time and cost required to perform a more detailed analysis. Scrap is a way to engage stakeholders in discussions about what their employees felt did not support their performance. Over time, if people are reporting wasted learning, that should lead to some action.

  6. Can you go over in more detail how you actually got to the ROI number of $6? Can you provide a high-level answer on how you converted the Level 3 data responses into the $6 ROI number (Phillips’ methodology)?

    This type of calculation is an algorithm that is somewhat proprietary to MTM. It takes into consideration the estimated performance increase over time related to the training (as a percent), the expected improvement attributed to the training (as a percent), the percent of content applicable to the job, and an adjustment for bias (as a percent), applied against an average salary and an estimated training cost per employee. You can find more information on the Phillips ROI Methodology online. A simplified, hypothetical sketch of this kind of calculation follows the Q&A below.

  7. Can you share the reaction of your executives when they saw the ROI ($6.00, which is great)? As a result, have you seen higher levels of adoption in other programs?

    Our pilot was only six months, which was not enough time to leverage the ROI calculation completely. ROI is also only part of the story; our stakeholders asked a lot of other questions about benchmarks, response rates, reliability of the process, and so on. Stakeholders with an organization-wide view were intrigued by the measure and asked what the next step in the metrics strategy would be. Stakeholders with a divisional view wanted to know about their specific division and how it differed from the overall ROI. Also, bear in mind the sampling of learning solutions within the pilot: we did not measure everything. Instead, we carefully selected programming that lent itself well to a more intense evaluation. The ROI figure is something to re-evaluate over time across a broader set of learning solutions.

  8. Were you able to see a significant correlation with actual business metrics (i.e., did higher MTM results correspond to higher KPIs)? Or were you challenged by executives saying you report a $6 ROI but the KPIs are not improving?

    The pilot cycle time did not lend itself well to this type of analysis and discussion, and during the pilot the results were not reported in this manner to our business sponsors and stakeholders. We also didn’t have any historical data to compare with our new ROI data. Our next steps involve integrating data from other systems; with that data, it will be possible to look at this relationship in the future.

  9. Some MTM questions are subjective and based on employee perception (e.g., increased productivity). How were these types of results received by your stakeholders?

    The power of a robust methodology is the ability to look at change over time. Stakeholders did challenge our results based on many things, including learner perception. Our mission is to determine whether a systematic, robust capturing of such data over time provides meaningful results. Patterns over time may tell a different story.
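
To make the answer to question 6 (and the scrap measure from question 5) more tangible, here is a minimal Python sketch of how inputs like those could be combined under a Phillips-style approach. Every variable name and number below is invented for illustration, and the figures are tuned only so the output lands near the $6.00 discussed above; MTM’s actual algorithm is proprietary and more sophisticated.

    # Hypothetical sketch of a Phillips-style, isolation-adjusted benefit estimate.
    # All figures are made up for illustration; MTM's real algorithm is proprietary.
    average_salary = 80_000            # average annual salary
    performance_increase = 0.10        # estimated performance improvement after training
    attributed_to_training = 0.50      # share of that improvement attributed to the training
    content_applicability = 0.80       # share of content applied on the job (scrap = 1 - 0.80 = 20%)
    bias_adjustment = 0.75             # discount for self-report estimation bias
    training_cost_per_employee = 400   # estimated training cost per employee

    annual_benefit = (average_salary * performance_increase * attributed_to_training
                      * content_applicability * bias_adjustment)

    benefit_cost_ratio = annual_benefit / training_cost_per_employee
    roi_percent = (annual_benefit - training_cost_per_employee) / training_cost_per_employee * 100

    print(f"Estimated annual benefit per learner: ${annual_benefit:,.0f}")  # $2,400
    print(f"Returned per $1 spent: ${benefit_cost_ratio:.2f}")              # $6.00
    print(f"Phillips ROI: {roi_percent:.0f}%")                              # 500%

In practice, the attribution, applicability, and bias figures come from the learner and manager surveys described earlier, which is why the quality of those Level 3 responses matters so much.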

I hope this discussion helps answer some of your questions. We learned a lot during our pilot to prove the ROI of learning, and we plan to keep improving our process. I wish you the best of luck on your ROI journey!

To watch the full “Proving the ROI of Learning” webinar on-demand with Crowe LLP, click here.

Bob Wagner is Learning Program Leader at Crowe LLP; he was formerly Learning Director at Grant Thornton and previously worked at Deloitte Tax and Arthur Andersen. Bob started his professional career as an audit and tax professional before switching his focus to people development and learning. Over the past 16 years, his focus has been aligning organizational business needs with learning and development solutions. Recently, he has turned his attention to better understanding the impact these solutions are making on the business.

This article was originally published on Udemy for Business. Click here to view it on their site.

