Evaluation – an ongoing Learning and Development challenge

How best, and when, to evaluate management development programmes

Understanding how best, and when, to evaluate management development programmes can be tricky for even the most seasoned of L&D professionals. Our Head of Custom Programmes, Margi Gordon, looks at the options.


Management development – an investment in time and money

Developing managers can be an expensive investment. Whether it’s to keep up with the latest thinking, to ensure their people are as well qualified as, if not better qualified than, the competition, or to develop execution capabilities, organisations need to develop their employees. And for good governance, it is important not just to account for the results achieved from the investment, but also to ensure that time spent out of the business produces a significant impact.

Most HR departments are interested in understanding the impact of programmes they have commissioned, especially for their most senior managers. But often the more senior the manager, the greater the variance in their backgrounds, roles and responsibilities, making it a challenge to design a stretching programme that benefits all participants. Evaluating the impact of such a programme can be equally difficult. 

Individual learning journeys require measurement beyond the quantitative

At London Business School, we recognise that challenge. Whilst offering the best teaching from globally acknowledged thought leaders, we also build into our programmes opportunities for individual coaching, personal leadership experiments, discovery learning and personal reflection.

Based on best practice principles of adult learning, this approach ensures that every participant has an individual learning journey. The subsequent evaluation requires more than a simple quantitative assessment of satisfaction.

The most effective evaluation measures are agreed up-front

Evaluation measures need to be agreed at the start of the programme design. We hold discussions with key stakeholders about what needs to change and why. Together, we then build a set of success criteria and agree potential measures. A detailed needs analysis informs not only the programme content, but also the desired outcomes.

Beyond the ‘happy sheet’

Of course there are additional benefits to evaluating development. Evaluation of learning gives participants a voice, allowing them to feed back to those providing and commissioning the training. This ensures relevance of content, quality of delivery and consistency of message, and gives scope for improvement, enhancing the experience of future participants.

Such participant feedback is usually gathered by the traditional end-of-programme questionnaire, often known somewhat disparagingly as a ‘happy sheet’.

This is used to gauge the immediate reaction of participants, but its results can be influenced by how and when it is completed and to whom it is returned. Depending on the culture of the organisation, and the setting in which it is completed, the ‘happy sheet’ can be influenced by the apparent social desirability of certain responses and may not be an honest reflection of the views of the participants.

Whilst completing the evaluation anonymously and online is likely to produce the most forthright results, delegates' immediate reactions are simply not an accurate tool for assessing the long-term impact of development programmes.

Kirkpatrick – a simple yet limited tool

Kirkpatrick’s work from the 1960s is still used in many evaluation studies today. His four stages (or levels) model from Reaction, through Learning, to Behavioural Change and then Results benefits from its simplicity, but also has limitations at each stage.

The immediate reaction of participants may measure enjoyment more than impact, and the learning derived depends on a number of factors. For example, is it cognitive or affective learning? Were participants offered repetitions of the key lessons?

The most accurate measure of cognitive learning would be a test and re-test, before and after the programme, but this would require experimental and control groups and a reliable test structure to produce effective results.

A goal such as the personal development of authentic leadership requires a different set of measures, which may only be evidenced by reported behaviours back in the workplace.

360-degree feedback on an individual, collected before and some time after the programme, can produce evidence of behaviour change. However, such feedback may be influenced by respondents having increased expectations following a programme.

In our experience, participants often change roles, manage a new team, or report to a different manager, meaning the feedback they receive is only ever a snapshot at a point in time and needs interpretation. Whilst it may provide evidence of change in behaviour and is a valuable learning tool, it will not ‘prove’ the efficacy of a development programme.

For that reason, we see many organisations relying on Kirkpatrick’s levels 1 and 2, and rarely investing in levels 3 and 4, precisely because it is difficult to define the exact cause and effect for a change in behaviour.

Evaluating on the job learning – an additional challenge

If organisations are following Lombardo and Eichinger’s (1996) recommendations and assuming that 70% of learning takes place on the job, then behaviour change may be attributed to work experience rather than a brilliant development programme.

Lombardo and Eichinger suggest that whilst a tough work assignment will provide 70% of the learning required for a manager, 20% will come from discussions with colleagues and 10% from courses and reading.

However, if, like many of our programmes, the development includes action learning and experimentation in the workplace, there is a strong element of learning application to follow up the taught theory.

Transfer of learning back to the workplace is also dependent on a number of factors that are beyond the control of the programme designer (Donovan et al, 2001).

They suggest a total of 14 factors that can affect the learning transfer process. These can roughly be divided into four groups: the ability to apply the new knowledge and expertise; the motivation to use it; the work environment and the level of support experienced (possible rewards and sanctions); and the personal characteristics of the individual learner. These factors can be influenced by programme design and by the expectations (or lack of expectations) within the organisation.

For example, on one senior leadership programme participants rated the support of their line managers very poorly. However, the HR department was sufficiently confident in its relationship with the line managers to challenge them to improve the structure of one-to-one conversations with their direct reports and to discuss their learning. As a result, participants felt more confident in trying out management experiments within the business, with attendant business impact.

Brinkerhoff’s ‘Success Case’ – a useful tool for longer-term impact measurement

Brinkerhoff (2005) suggests the ‘Success Case’ method is a better approach to determining the impact of a development programme.

Participants are first surveyed after a course and asked whether or not they have applied their learning. Follow-up telephone interviews with both successful and unsuccessful participants then provide clear evidence of the related performance improvement, and of how the organisation either supported or impeded their success.

There is no doubt that individual narratives can provide illuminating evidence of organisational blocks, deep attitudinal shifts, and potential return on investment.  Exploratory learning events, such as those that take place on our long-running Leading Edge Programme for Danone, demonstrate that high quality guided reflection, together with an organisational culture that is open to risk taking, enables managers to take bold steps to revise how they run their businesses.

A combination of challenging learning events and an experimental culture, which is openly embraced by the CEO, creates conditions on this programme where senior managers shift their beliefs and behaviour, and go on to achieve extraordinary results.

Shifts in beliefs and values can lead to business breakthroughs

The ultimate goal of anyone involved in learning and development is to provide participants with ‘crucibles’ of learning where they shift not only their expertise and knowledge, but also their beliefs and values.

Having experienced new insights, the learner takes responsibility for trying out a leadership or management experiment. These actions can lead to business breakthroughs, but to create the conditions for innovation, a trusting partnership between the organisation and the development provider, is necessary. At London Business School, our approach centres on creating these conditions.

Stories of impact deliver inspiration beyond statistics

The link between learning and business impact is difficult to evidence, let alone prove, but most of us can remember a moment when we had a significant change in our attitude, usually involving a shift in emotions as we gained new insight.

These moments are best told as stories, rather than illustrated with bar graphs.  As individuals, we can more easily connect with narratives that inspire and challenge organisations to change. We are excited when we know that people have created a better business - the ultimate goal of management development.


References

Brinkerhoff, R.O. (2005) ‘The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training’, Advances in Developing Human Resources, 7, 86, Sage.

Kirkpatrick, D.L. (1998) Evaluating Training Programs: The Four Levels, 2nd edition, San Francisco: Berrett-Koehler.

Lombardo, M.M. and Eichinger, R.W. (1996) The Career Architect Development Planner, 1st edition, Minneapolis: Lominger.
