Article By

Wendy Shepherd
Wendy Shepherd is an expert in learning design and impact, with responsibility for the design and impact of executive development programmes delivered by Cranfield School of Management. She is the winner of the 2021 AMBA and BGA award for the impact of her doctoral thesis. The approach presented in this article has been field tested at Cranfield School of Management, using a toolkit to support each stage of the value chain; for more information, please contact her at Cranfield School of Management. https://www.linkedin.com/in/dr-wendy-shepherdcranfield/


I recently read a paper that described return on investment (ROI) as the Holy Grail of measuring leadership development impact. It sat atop a pyramid of five levels, the first four of which were taken from Kirkpatrick’s model of training evaluation.

The paper went on to suggest how this elusive figure could be calculated.

The Holy Grail is a mythical object that first featured in a medieval romance by Chrétien de Troyes. I would argue that the calculation of ROI from leadership development is on occasion (if not always) also a great work of fiction.

The problem starts with the misconception that objective measures should take precedence over subjective measures. If executive development can even loosely be linked to an objective measure, that measure is often assumed to be more reliable than the firsthand account of benefits provided by those who attended the development.

The second problem is that the Kirkpatrick Model, with Phillips’ ROI grafted on at the top, is often interpreted as a value chain that goes something like this:

Reaction → Learning → Behavior → Results → ROI

If we examine each of these links in the impact chain, we reveal challenges that make the model and any calculation of ROI unsound:

Assumption 1: A positive reaction to development is the first stage in the impact chain

Without a doubt, we don’t want those who attend executive development to have a torrid time. There is no greater pleasure than collecting happy sheets that describe the development as favorable, engaging, and relevant.

But are high satisfaction scores really the first step in a chain of impactful development?

First produced to explain reactions to grief and loss, and more recently employed to explain the range of emotions associated with organizational change, the Kübler-Ross Change Curve suggests that meaningful change can emerge from initial feelings of shock, denial, anger, frustration, and uncertainty.

Negative emotions during or immediately after a development process may occur where aspects of the development conflict with specific participants’ worldviews. These participants may eventually gain more from their development than those who find it agreeable and give it little extra thought.

Assumption 2: For impact to occur there must first be learning

Learning is one source of development impact. It is, however, not the only one. Equally beneficial are the relationships formed, the conversations had, the feelings of self-worth generated by investment, and the chance to step back from the day-to-day and reflect on what is happening and what might lie ahead.

When applied to leadership development, the Kirkpatrick model encourages an oversimplified perspective of leadership as a universally relevant set of actions and capabilities that can be improved through training, rather than a more contemporary perspective of leadership as a socially complex, situationally emergent phenomenon that is not trainable.

What good leadership development does is stimulate participants to generate new reflections on their own experiences, broaden their thought/action repertoires, and alter their commitment to different courses of action. The value of learning is therefore not so much in how to use models and frameworks, but in what is discovered through their application: the train of thought or discussion that emerges and where it leads.

Assumption 3: When learning is applied it has an impact on the leader’s behavior

The assumption that learning is evidenced through changes in behavior is a hangover from the temptation to reduce leadership to a trainable quality. There may well be changes in behavior that can be witnessed immediately after leadership development, and there may be value associated with these changes. But there may also be value in things that are not observable: how, for example, do you observe a course of action that would have been taken but was not, because of the application of learning? And how do you separate changes in behavior driven by other environmental factors from those initiated by the development?

Often when it comes to measuring behaviors, we reach for a 360° tool that we see as objective. But the 360 is based on the opinions of others and the attribution of their personal values and sense of what good looks like. It therefore lacks the objectivity that the presence of a Likert scale may suggest.

Assumption 4: Results

At stage four we look for tangible results that we can link to development. Suggestions have been made that metrics associated with employee engagement, staff retention, and client satisfaction are all potentially viable. The greatest flaw in the desire to link organizational metrics to leadership development is that there is no proof of causality: the impact of development cannot be isolated from the other factors that affect these measures. This can lead to overclaiming impact when the measures improve, and to explaining away their relevance when things take a turn for the worse.

Assumption 5: ROI

Putting a financial figure on the improvements witnessed is the last link in a chain of flawed assumptions about leadership development and the impact it has. Those who seek to put a financial figure on development often do so by monetizing the results witnessed in Step 4.
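Phillips’ calculation, as it is usually cited, divides net programme benefits by fully loaded programme costs:

```latex
\mathrm{ROI}\,(\%) = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
```

Every figure on the benefits side of this ratio has to be estimated, monetized, and attributed to the development before the division is even performed, which is where the flaws described above compound.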

The problem here is that the value generated by a change in these metrics is almost never down to the actions of one leader; furthermore, the actions of a leader are not going to be entirely due to what they have learned during development. Monetizing in this way presupposes that they knew nothing prior to the development intervention.

Calculating ROI can be expensive because of the amount of data that has to be collected. Even when there is a figure, and even if the figure were a close representation of the benefits, the results cannot be assumed to be ‘fixed’ benefits that will be achieved every time the development is run.

The nature of leadership development is that the benefits are emergent, varying with the changing context of the organization, the makeup of the participants, and what they carry into the room.

Conclusion

If we are to move ahead in our understanding of the impact of leadership development, we need to be brave and walk away from the long-held assumptions that proliferate through the use of models such as Kirkpatrick and Phillips ROI. This requires us to develop new models of evaluation suited to the contemporary requirements of leadership within specific contexts.

When it comes to leadership development, nothing will work in all contexts for all participants. A positive impact from leadership development starts with a clarity of purpose linked to the specific participants’ context. This can inform the design; help manage the progress of participants; and, finally, be used for evaluation. The impact chain recommended by Kirkpatrick and Phillips is reactive, beginning once development has been completed, the money has been spent, and the opportunity has been lost. The impact chain suggested here is more proactive, addressing not only proof of impact but also the design and management of impact.

If the design of an intervention starts with the creation of an impact model that defines expectations of where and how the change will occur, it will be possible to track progress and steer development toward these context-specific aims. Furthermore, if aims are defined and progress tracked using a consistent categorization of expectations across leadership development designs, it will be possible, over time, to develop and share new insights, such as how online development impacts the opportunity for building networks and relationships.

The table that follows suggests a new categorization of outcomes. This is based on recent research, including a review of leadership development case studies and interviews with practitioners. Five categories have been identified but there may be more.

The categories recommended represent initial changes in leadership activity rather than long-term outcomes. Changes in these categories can be identified by questionnaire within three months of participants completing their development, so the approach has the potential to be a cost-effective solution to the challenge of evaluation. Because the evaluation takes place within a short period and focuses on what the participants have done as a consequence of their development, we can be confident of a causal link. In the longer term, it may be possible to attach a financial value to the outcome of these changes, but this should not be the primary goal. The aim should be to identify grassroots changes in leadership activity associated with a specific development design and context, so that we might learn what has worked and, perhaps more importantly, what has not worked so well.
