
Who's Afraid of Kirkpatrick's Four Levels?

According to Kirkpatrick, the day of reckoning is here for training professionals. Senior management is demanding better evidence that training is cost-effective and that there is, in fact, a positive payoff for the organization. Senior leaders want to know specifically what their employees learned (Level 2), exactly how they are applying what they learned on the job (Level 3), and the specific results the training produced for the organization (Level 4).

Evaluation defined

Often a hot topic for training departments, evaluation is defined by organizations in different ways. In Qualitative Research and Evaluation Methods, independent evaluation consultant Michael Quinn Patton explains that program evaluation began in the early 1900s under Thorndike’s leadership, with a focus on educational testing.

In Utilization-Focused Evaluation: The New Century Text, Patton defines evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming.”

Evaluation under the first part of this definition, with its focus on outcomes, became known as summative evaluation; evaluation for program improvement became known as formative evaluation. Today both forms of evaluation are important. In Evaluative Inquiry for Learning in Organizations, authors Hallie Preskill and Rosalie Torres add to this definition by emphasizing that evaluation needs to be linked to organizational work practices, including:

  • personnel interest in using evaluation logic
  • personnel involvement in the organization’s evaluation processes
  • a focus on individual growth within the organization.

Benefits of evaluation

In Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change, authors Darlene Russ-Eft and Hallie Preskill emphasize that for organizations to benefit from evaluation, it must be integrated systematically into the organization’s work practices and processes. When this occurs, several benefits accrue:

  • quality is improved
  • workers become more knowledgeable
  • resources can be better prioritized
  • planning and delivery of organizational initiatives are improved
  • workers are held accountable
  • recognition of program effectiveness is enhanced
  • workers with evaluator competencies are in demand in the market.

In Evaluating Training Programs: The Four Levels, Donald Kirkpatrick and James Kirkpatrick propose three primary reasons for evaluating a program: 1) to improve future programs, 2) to decide whether to keep or drop a program, and 3) to justify the existence of the training department.

Meanwhile, Patton compares program evaluation to quality assurance and asserts that in program evaluation:

  • the focus is specifically on program outcomes and process
  • aggregated data is used
  • judgments are goal based
  • the evaluation is intended for use by decision makers.

Quality assurance, on the other hand, focuses on individual outcomes and processes, draws on individual performance data, yields objective results, and is intended for use by individuals and their supervisors. Given the benefits of evaluation, why are many programs not evaluated at all, or evaluated only at the individual quality assurance level?

Methods of evaluation

According to Russ-Eft and Preskill, in the late 1950s Kirkpatrick noticed that most evaluations of training programs could be placed in four categories: 1) reaction, 2) learning, 3) behavior, and 4) results. The model’s simplicity and memorability catapulted it into widespread corporate use.
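
To make the four levels concrete, consider a hypothetical customer-service course (the program and measures here are illustrative only): Level 1 (Reaction) might be assessed with an end-of-course satisfaction survey; Level 2 (Learning) with a pre- and post-test of product knowledge; Level 3 (Behavior) with call monitoring or manager observation 60 to 90 days after training; and Level 4 (Results) with changes in customer-satisfaction scores or first-call resolution rates.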

Indeed, Kirkpatrick’s model enjoyed the position of “state-of-the-art training evaluation” through the mid-1990s, and it has inspired the development of other models that incorporate its “look and feel.” Among the newer models are:

  • Richey’s 1992 Systematic Model of Factors Predicting Training Outcomes
  • Five-Level Model of Evaluation by Kaufman, Keller, and Watkins
  • Navy Civilian Personnel Transfer Model
  • Hamblin’s Five-Level Model
  • Training Efficiency and Effectiveness Model (TEEM)
  • Holton’s HRD Evaluation Research and Measurement Model
  • Foxon’s Stages of Transfer Model
  • Brinkerhoff’s Stage Model
  • Phillips and Phillips’s ROI Model, which adds a fifth, return-on-investment level to Kirkpatrick’s four (illustrated after this list)
  • Garavaglia’s Transfer Design Model.
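
As a rough illustration of that fifth, ROI level, the Phillips approach converts a program’s monetized benefits and costs into a single percentage:

ROI (%) = (net program benefits ÷ program costs) × 100

The figures that follow are hypothetical. If a program costs $60,000 to design and deliver and produces $150,000 in monetized benefits, the net benefits are $90,000, and the ROI is ($90,000 ÷ $60,000) × 100 = 150 percent. The difficult part, of course, is the Level 3 and Level 4 work of isolating and monetizing those benefits in the first place.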

In a qualitative embedded multi-case study of the critical success factors in defining, designing, developing, and delivering online learning for adult professional development in corporations (Armstrong, 2007), all seven of the learning leaders studied (CLOs, SVPs, and VPs in the financial services and construction industries) used Kirkpatrick’s Four Levels of Evaluation for program evaluation. Of interest is that while all seven used Level 1 (Reaction) and Level 2 (Learning), only one of the learning leaders used Level 3 (Behavior) and only one used Level 4 (Results).

Three of the leaders who were not using Level 3 asserted that they were in the process of trying to evaluate behavior change in their programs, or were at least doing so anecdotally. One of the three was also in the process of trying to implement Level 4.

Why is evaluation neglected?

The reasons evaluation is neglected are well documented. From three decades of experience and observation delivering learning programs to learning leaders, Russ-Eft and Preskill offer their top 10 reasons why evaluation is neglected:

  • lack of understanding of the goals and purposes of evaluation
  • fear of the impact of evaluation results
  • lack of evaluator skills in the organization
  • it is considered an add-on task
  • belief that evaluation results will never be utilized
  • view that it is too time consuming and labor intensive
  • the cost is not seen as justified by the benefit
  • leaders think they already know the answer
  • bad prior experiences with evaluation
  • no one requires it. 

In one instance, the anecdotal reviews of a top program for financial advisors were so glowing that the learning leader refused to quantify the Level 3 and Level 4 results for fear that they would not live up to the organization’s beliefs about the program’s success. In many cases there were no internal resources capable of performing Level 3, Level 4, and Phillips’s Level 5 evaluations, and the cost of hiring an outside consulting firm to perform them was not perceived as justified.

Enter evaluator competencies

Evaluation of training programs goes back to the early 1900s. For decades, Kirkpatrick’s model has been espoused as the model of choice for organizations evaluating training. Despite praise for Kirkpatrick’s model and many of its successors, training organizations rarely implement evaluation beyond Level 1 (Reaction) and Level 2 (Learning).

Given that top management and senior leaders are demanding evidence of results and of the ROI associated with training, it is time to develop and standardize evaluator competencies and to train both internal and external personnel on those competencies.

The current list of ibstpi evaluator competencies was designed for both internal staff and external consultants across industries and organizations and includes 14 evaluator competencies in four domains, further explained by 84 performance standards. The competencies were validated through a global sample of 450 practitioners and reflect the knowledge, skills, and attitudes a practitioner needs to be a competent evaluator. For a complete list of evaluator competencies, go to www.ibstpi.org/download-center-2 and select the evaluator competencies download.

Further reading

Armstrong, A.W. (2007). Executive beliefs about the critical success factors in defining, designing, developing, and delivering e-learning for adult professional development in corporations (Doctoral dissertation, Teachers College, Columbia University). ProQuest Dissertations and Theses (862347883). Retrieved from http://search.proquest.com/docview/862347883?accountid=35812

Armstrong, A.W. (2008). Executive beliefs about the critical success factors in defining, designing, developing, and delivering e-learning for adult professional development in corporations. Online submission. Retrieved from http://www.eric.ed.gov/contentdelivery/servlet/ERICServlet?accno=ED501621

ibstpi. (2014). Evaluator competencies. Retrieved from http://www.ibstpi.org/evaluator-competencies/

Kirkpatrick, D.L. (2009). Are you REALLY using the Four Levels? Kirkpatrick Partners. Retrieved from Kirkpatrickpartners.com

Kirkpatrick, D.L., & Kirkpatrick, J.D. (2006). Evaluating training programs: The Four Levels (3rd ed.). San Francisco, CA: Berrett-Koehler.

Patton, M.Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage.

Patton, M.Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Preskill, H., & Torres, R.T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.

Russ-Eft, D., Bober, M.J., de la Teja, I., Foxon, M.J., & Koszalka, T.A. (2008). Evaluator competencies: Standards for the practice of evaluation in organizations. San Francisco, CA: John Wiley & Sons.

Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change. Cambridge, MA: Basic Books.