Tuck’s Perspective on Evaluating Program Impact

By Carolyn Clinton

24 January 2014

Nothing matters more to us and our partners than impact—the impact our executive education programs have on the individuals who attend and the organizations they lead. Recently we have been talking about how we assess program effectiveness and what steps we take in program design to foster individual and organizational transformation, and we wanted to share some of what came out of those conversations.

We began with Donald Kirkpatrick’s classic four levels, first published over 50 years ago and still the bedrock of most discussions about program evaluation:

  • Reaction—What was the participants’ level of satisfaction with the program experience?
  • Learning—What did they learn? Did they increase their knowledge, skills, or capabilities?
  • Behavior—What changes in behavior resulted from application of the learning?
  • Results—What impact did their performance back on the job have on the business?

Most program evaluation remains focused on Level 1. Participants complete questionnaires assessing the program they have just completed—the content, presentation, learning activities, key takeaways, etc.—and offer suggestions for improvement. Though often disparaged as “smile sheets,” this kind of feedback is genuinely useful, and no one—including Tuck—would suggest doing away with it.

As you go further up the levels, it becomes more difficult to measure how effective a program has been, especially for executive education. Education—as opposed to training—is focused on higher-order thinking skills, including thinking about how to apply the frameworks and concepts being explored to the business challenges executive participants are facing. The curriculum is less focused on specific skills and tactics than in training programs, and outcomes are less easily measured. In the cases where Tuck clients have wanted to measure increases in participant knowledge and understanding through faculty-created competency exams or written responses to questions about their reading, the participants have resented being required to do what felt like “busy work,” and the results have had limited value.

For the higher levels of evaluation, measurement becomes even more problematic. So many variables are at play that it is difficult to assume the education experience alone is responsible for behavior change or impact on business performance. Given the elapsed time between the educational experience and the on-the-job application, the learning that comes from day-to-day business and team assignments, and many other inputs, it is virtually impossible to attribute change to learning from a specific program alone. Furthermore, many top companies use a variety of approaches to develop their executives, including mentoring, coaching, on-the-job training, and rotation assignments; executive education is typically only a small part of the equation. Rigorous research studies can attempt to control for the variables and offer insights, but they are expensive, resource-intensive, and often seem to contribute more to theory than to practice, at least in relation to executive learning initiatives.

So what have Tuck and its corporate partners done to gauge application of learning and business impact? Custom clients have used the Success Case Methodology to establish the business benefit of the global executive programs they co-created with Tuck and to identify systemic enablers of, or barriers to, realizing impact. Following that model, Thomson Reuters administered a survey to program participants six months after each program and then conducted follow-up interviews with a sample of respondents. Survey questions focused on such topics as whether the participant had used one or more of their learnings on the job, and whether they could identify how they used those learnings to create a positive impact. Specific examples were identified, as were factors that impeded or enhanced these outcomes. Examples of how newly formed cross-company relationships produced business benefits were of particular importance for a company created when two large organizations were joined together, a combination that brought a variety of global and cultural integration challenges.

When action-learning projects (ALPs) have been integrated into the custom program designs, our client partners are typically able to identify the business impact even more directly. Over the many years Tuck has worked with Hasbro, the work of ALP teams has had a major impact on a number of the company’s strategic initiatives. Examples of results:

  • A highly successful new direct-to-consumer channel was launched.
  • Core brand guidelines were adopted.
  • An expansion into non-traditional retail channels was realized.
  • Important input was integrated into a major corporate strategy project.

Two programs that Tuck delivered with ING Americas were each designed around a single action-learning project for a group of high-potential senior managers. The company could see the impact of the participants’ work on business performance and process improvements directly. Based on one program cohort’s efforts, the initial multicultural sales effort in the U.S. insurance business grew at triple the rate of the division’s overall double-digit sales growth in its first year, and continued to outperform other areas, even in challenging times; in addition, the multicultural sales enterprise quickly expanded to other business lines.

These examples of success in measuring program impact all come from custom engagements. The impact of open programs, whose participants come from many different companies, is harder to measure without a formal evaluation research project, something that has not yet been done. What we do focus on, in both our open and custom programs, are design features that foster application and impact. We’ll discuss those in our next blog post on increasing program effectiveness.

Learn more about Tuck Executive Education