Earlier this year we started a blog series that discusses our perspective on program impact – how to facilitate it, how to measure it, and why it’s a major driver behind what we do at Tuck Executive Education.
As part of this series, we wanted to learn more about the ways in which our Learning & Development colleagues are measuring program effectiveness, so we asked them directly in a survey.
In our survey, we asked what methods are being used to evaluate the effectiveness of executive education programs, and here's what respondents said (multiple answers allowed):
- 85% use program evaluations completed by participants
- 70% use post-program follow-up surveys
- 35% conduct post-program interviews
- 30% track business impact of an action-learning project
When asked what other methods are being used, respondents gave a range of answers including:
- Participant behavioral changes as reported by their manager
- Even more in-depth feedback with a 180 or 360 review
- Net Promoter Score in Level 1 surveys
- Tracking lagging indicators
As you might recall from Donald Kirkpatrick's classic four levels of measurement, the higher up the ladder we go, the more challenging and resource-intensive it becomes to isolate and measure the impact of learning.
Here’s a refresher of Kirkpatrick’s classic four levels, first published over 50 years ago and still the bedrock of most discussions about program evaluation:
Level 1 Reaction—What was the participant's level of satisfaction with the program experience?
Level 2 Learning—What did they learn? Did they increase their knowledge, skills, or capabilities?
Level 3 Behavior—What changes in behavior resulted from application of the learning?
Level 4 Results—What was the impact on the business of their performance back on the job?
Not surprisingly, the vast majority of respondents are using program evaluations completed by the participant and post-program surveys—Level 1 measurement. A few respondents cited Level 3 and Level 4 measurement tactics, such as asking a participant's manager for feedback on behavioral change and conducting 360 reviews. One method that some of our custom clients use is measuring the business impact of action-learning projects that are embedded into the program during the design process. It's a great way to promote application of learning.
For example, in the Tuck Global Leadership 2030 Consortium (GL2030), team-based, hands-on, action-learning projects are at the heart of the learning experience. These projects require company teams to develop innovative approaches to their own global challenges through cross-functional, cross-business, and cross-border collaboration. This process is one of the most valuable program components, enhancing participants’ ability to build a stronger, more team-focused organization while applying their learning to a company-specific challenge or opportunity.
Another example, cited in the first blog in this series on program impact, comes from ING Americas. Two programs that Tuck co-designed with ING Americas were each organized around a single action-learning project for a group of high-potential senior managers. The company was able to see directly the impact of the work on business performance and process improvements. Based on the efforts of one program's cohorts, a multicultural sales effort in the U.S. insurance business tripled the division's overall sales in its first year, and it continued to outperform other areas, even in challenging times.
When organizations measure the impact of these specific projects, it often reveals a strong ROI on the learning initiative. The point here is that planning for strong outcomes begins on the front end, built into the program design itself. This is key to the work that the Tuck custom team does with its partners to co-create learning initiatives that drive business impact. If you'd like to explore custom programs, please contact us.