Making Learning Evaluation Work – Lessons from Marketing: Observations from the Webinar



Did we have some good discussion during the webinar on “Making Learning Evaluation Work – Lessons from Marketing & Elsewhere”! As expected, we did not find any silver bullets, and we probably raised more questions than we answered. There was general agreement that, despite the challenges, learning evaluation could be done better.

Some questions raised by the audience:
  • Shouldn’t evaluation demonstrate that the benefit of the training improves the business above the hurdle rate of cash? Otherwise you are taking the business backwards.
  • I would be interested in more examples where organisations have gone beyond surveys and learner impressions.
  • I think that traditionally in “event”-based L&D we have seen evaluation as quite separate from the learning process, and that has affected the use and value of evaluation techniques. Do you think that the shift away from “event”-based learning to more informal, collaborative learning will mean a different approach to evaluation?
  • What’s the panel’s thoughts on utilising external agencies to conduct formal assessments to evaluate the impact on the job?
Here are the summary thoughts of the panellists.


Melissa Yee – National Survey Lead, National Prescription Service

Learning evaluation works best when it is planned and executed from the very beginning of a program and worked into the relevant touch-points. It need not be too complex as often too much focus on scientific rigour can detract from an evaluation’s true intent. However, it is important to understand sources of bias in your approach and acknowledge external factors that might influence the credibility and validity of your results.

Bob Spence – Director, Bob Spence Consulting

The single most important issue, often neglected, is the ability to articulate “How will you know when this (learning intervention) has been successful?” If the answer to that question is locked down at the outset, with expectations set, agreed by all stakeholders and measured accordingly, much of the uncertainty associated with evaluating learning can be overcome. Stakeholder expectations regarding learning outcomes and results need to be realistic and managed closely, in concert with well-designed communication plans and risk management strategies. If a business wants to see changed behaviour, a specific ROI or, for example, a reduction in component wastage, then plan to evaluate it.

There are many challenges associated with learning evaluation. One is thoroughness when analysing performance problems: learning is often chosen as the intervention when other factors, such as inadequate job design, user interface design or inappropriate recruitment practices, could be the cause or part of it. Quality control is integral across the board, particularly in environments where unplanned internal factors (like budget or resource constraints) and external factors (like market conditions) force the adjustment and re-communication of stakeholder expectations.

Learning designers should aim to produce interventions which demonstrate that the learning can be, and is, applied on the job, with improved performance as the result. This basic mindset immediately places the evaluation task at the Application level (Kirkpatrick’s Level 3), which is a much easier and more business-oriented way to demonstrate the value of learning than mere attendance or completion of a course and the favourable Reaction-level questionnaires commonly used (and sometimes analysed). It also conditions us to tackle the more complex evaluation associated with measuring impact and ROI.