TD Magazine | April 2018

Simplify Evaluation

Training evaluation doesn't have to be as complicated as you think.

Training professionals around the world have received the mandate: Show us how your training programs contribute to our highest organizational goals. If those words strike fear in your heart, use these straightforward tips to evaluate your training programs.

Define the desired training outcome

When you receive a request for training, it is often a businessperson's interpretation of how to address a specific organizational need or gap. In a well-meaning way, the requester has jumped to what they believe is the solution to the problem. However, many businesspeople do not understand adult learning.

Your goal is to discover and understand the underlying problem that generated the training request so you can recommend a solution. Keep in mind that the purpose of training is to improve on-the-job performance and measurably contribute to key organizational results.

Kirkpatrick's four levels of evaluation provide a simple framework to illustrate this.

  • Level 4 (results): the degree to which targeted program outcomes occur and contribute to the organization's highest-level result
  • Level 3 (behavior): the degree to which participants apply what they learned during a training program when they are back on the job
  • Level 2 (learning): the degree to which participants acquire the intended knowledge, skills, attitude, confidence, and commitment based on their participation in the training program
  • Level 1 (reaction): the degree to which participants find the training program favorable, engaging, and relevant to their jobs

Below are sample questions to ask training requesters or stakeholders to discover their needs and to design a program that will meet those needs and deliver results.

Level 4 results:

  • What outcome do you wish to see after this program? What kinds of outcomes are you seeing today?
  • What would make this program a success in your eyes?
  • What key metrics should improve because of this program?

Level 3 behavior:

  • What exactly do you expect training graduates to do on the job as a result of this program?
  • What would be considered "good performance"? To what degree is this level of performance occurring today?
  • What support and accountability resources are available after training?
  • How will you ensure that training graduates follow through on next steps after training? (You may also ask trusted line managers and supervisors these same questions.)

Level 2 learning:

  • Do you want test scores or other types of data related to the learning accomplished during the program?
  • Are you willing to participate in designing the program or approving the content prior to delivery? Are you willing to participate in the class?

Level 1 reaction:

  • To what degree are you interested in knowing what participants thought about the training program itself?

Typically, businesspeople will be most interested in data about employee and department performance and organizational results (Levels 3 and 4). Focus your evaluation efforts on those areas. You probably also want information about the program itself and how participants received it (Levels 2 and 1). Instead of using the same old postprogram evaluation, list the key information that will be most useful to you, and use it as a guide for the evaluation you conduct and the specific questions you ask.

Build evaluation tools as you build training content

Once you are clear on what information you need to gather at each of the four levels, build your evaluation tools while you build the content. As with deciding what to evaluate, you don't need to overthink your evaluation methods and tools.

Here are a few possible sources of evaluation data that may exist already or that you could build on with a reasonable amount of resources. The key is to first find out what you have available to you and then consider what you may need to build.

Level 4. For the most part, the tools and data for Level 4 evaluation already exist—because if something is important enough to be a Level 4 result, then some area of the organization is already measuring it. See if you can gain access to organizational reports of key metrics that the training solution is designed to affect, such as sales levels, profitability, expenditures and savings in a given area, number of accidents and related liability costs, employee turnover or retention rates, and customer retention rates. If you have trouble finding existing sources of Level 4 evaluation data, then you likely have not defined a true Level 4 result—and you need to aim higher.

The best evidence of training success is a combination of numeric data and the qualitative information that explains it and brings it to life. Design a couple of survey or interview questions to ask training graduates, managers, or peers about the positive outcomes that occurred when training graduates applied what they learned in training to their jobs. Because outcomes, by nature, have multiple contributors, the story behind the success will show the chain connecting the training program to on-the-job performance and to results.

Level 3. The data to evaluate Level 3 behavior sometimes already exist. Look for weekly or monthly reports for information such as individual or departmental sales calls completed, units produced, cost-saving activities performed, orders entered, customer calls conducted, or other performance measures related to the goals of your training program.

You also may want to build tools into the training course that can double as self-monitoring and reporting tools during implementation. For example, if you have taught a multistep process, provide a checklist. During the training course, participants can use the checklist to practice following the steps. After the course, they can use the checklist when they are performing the process on the job. Then, ask them to submit the completed checklist with a few comments about how things went.

If you conduct a post-training survey or interview, draft a question or two to ask training graduates, managers, peers, or customers whether the training graduates are performing critical behaviors on the job. Consider the exact phrasing so it doesn't seem like you are asking anyone to tattle on someone else; you are simply asking about their experience with or observation of an individual. For example, you could ask customers, "Were all of your questions answered during your call with customer service?"

Level 2. Build formative evaluation touchpoints—evaluation that takes place during the program—into your instructional design. Since Level 2 learning data are primarily for the instructor's and training department's benefit, focusing on formative evaluation saves your resources for evaluating what occurs after training, at Levels 3 and 4.

Here are some examples of formative evaluation that you can incorporate into your programs:

  • Pulse check: The facilitator pauses and asks the class a question to check for comprehension or confidence to perform a task on the job.
  • Group discussions, presentations, or final projects: Any demonstration of understanding or skill can be documented as evidence of learning.
  • Tests, quizzes, or games: Formal or informal knowledge and skills tests can be documented if the data are required beyond the instructor confirming understanding.

Level 1. In most cases, a focus on formative evaluation is most practical for Level 1 reaction, unless your stakeholders expressed interest in seeing more than the typical amount of data when you discussed the program goals.

To ensure relevance, build questions into the program at key points, asking participants to discuss how they see the content being used in their work. The instructor also can watch for body language that indicates people are not focused on the class, such as crossed arms, interacting with a cell phone, or looking out the window.

A few questions on a postprogram survey are usually sufficient to gather data about the level of satisfaction with the program. For logistical details, such as facilities and catering, the instructor can gather data with a flipchart at the door or during conversations with participants over lunch and during breaks. The facilitator can report these data in their own evaluation form for the program and save participants' time for commenting on more important things.

Evaluate and react to data as you receive them

Time is of the essence if you want to maximize your training program's impact. Use formative evaluation data to make on-the-spot changes during the program. For example, if participants are not seeing the relevance of what they are learning to their everyday work, have a group discussion and see if you can foster the connection.

Review the post-training evaluation data to see if any feedback indicates possible challenges in on-the-job application. If so, report those issues to managers and stakeholders and ask for their assistance in resolving them.

Think about how soon after the training course participants will have an opportunity to use what they have learned. Set reminders to check in with them or their managers to see how things are going. Distribute fresh copies of job aids and ask training graduates to use them. Collect completed self-report tools and see what they say. Don't wait 30, 60, or 90 days after a training program to see whether training graduates are using what they learned.

Commonly, you will find that implementation on the job is not at the desired level. Share your findings with managers and stakeholders and work together to increase the desirable behaviors. Make sure that others are using the methods and tools you designed to hold people accountable and encourage them to do the right thing.

As preliminary results start to come in, ask everyone to share their small successes in team meetings, in a company newsletter or intranet page, or in whatever formal or informal media are available. Find a few individuals to interview who can tell the story of what they learned, how they used it on the job to perform well, and the subsequent positive outcome. If they can include numbers in their story, that's even better. Find a way to share these stories publicly.

Start small to grow big

If these ideas sound daunting, select the program that is most critical to your organization's or client's success. Try to gain leadership support, such as an executive who sees the value of the program and will act as a champion. Implement these ideas for that one program and treat it as a pilot. Document what works and what doesn't in your organization, and learn from it.

Try different methods and see what works. Leverage technology in both training and evaluation but balance it with the true performance and impact multiplier: the human factor. Spend time talking with and creating trusted working relationships with your program sponsors and line managers. It will show them that you are working toward being a true strategic partner.

Communicate both successes and setbacks as you go. Sometimes admitting challenges and explaining how you overcame them is a more compelling story than simply sharing a success, because it builds trust as well as a team of core believers. As this team of believers grows, you will begin to build an organizational evaluation strategy. Soon, all important initiatives will include ongoing evaluation as part of the plan, and program results will be maximized.

About the Author

Wendy Kirkpatrick is a global driving force behind the use and implementation of the Kirkpatrick Model, leading companies to measurable success through training and evaluation. She is a recipient of the 2013 Emerging Training Leaders Award from Training magazine. Together with Jim Kirkpatrick, she is co-owner of Kirkpatrick Partners.