Saturday, February 6, 2010

Measuring Training Effectiveness: How to Get Started

Why Measure Training Effectiveness?
You may have been asked by your manager to start measuring the effectiveness of the training programs you provide. Training resources may be shrinking as your client managers complain, more often and more loudly, that they have seen no benefit from having their staff away on training. Many training programs today fail to deliver the expected organizational benefits, and having a well-structured measurement system in place can help you determine where the problem lies. On a positive note, being able to demonstrate a real and significant benefit to your organization from the training you provide can help you win more resources from important decision-makers.

Alternatively, you may have decided yourself that you need to go beyond your usual "smile sheets". External pressures may also be leading you to improve your current programs: the business environment is not standing still, and your competitors, technology, legislation and regulations are constantly changing. What was a successful program yesterday may not be a cost-effective program tomorrow. Being able to measure results will help you adapt to such changing circumstances.

Measuring the effectiveness of training programs, however, consumes valuable time and resources – time and resources that are already in short supply. You will need to think carefully about how, and to what extent, you will evaluate the results of training. Donald Kirkpatrick’s four-level evaluation model remains the best-known and most widely used model today. Kirkpatrick developed his model in the late 1950s, and it has since been adapted and modified by a number of writers. The model’s basic structure, however, has stood the test of time well, and I continue to recommend it. That basic structure is shown below.

Level 4 – Results

What organizational benefits resulted from the training?

^

Level 3 – Behavior

To what extent did participants change their behavior back in the workplace as a result of the training?

^

Level 2 – Learning

To what extent did participants improve knowledge and skills and change attitudes as a result of the training?

^

Level 1 – Reaction

How did participants react to the program?

The primary purpose of conducting an evaluation at a particular level is to answer the question posed at that level. Conducting an evaluation at one level is not better or more useful than conducting one at another level – it simply provides different information. The levels are related, though: each level provides a diagnostic checkpoint for problems at the succeeding level. So, if participants did not learn (Level 2), the participant reactions gathered at Level 1 (Reaction) may reveal the barriers to learning. Moving up a level, if participants did not use the skills once back in the workplace (Level 3), perhaps they did not learn the required skills in the first place (Level 2).

In deciding at which levels to pitch your evaluations, you will need to think about an appropriate combination that will suit your organization’s specific needs and available resources. As you go up the levels, generally speaking, the cost and time required for the evaluation rises sharply. So, you will need to choose wisely.

For example, you may decide to conduct Level 1 evaluations for all programs and Level 2 for skill certification programs only. Because of the cost and effort involved, you may leave Level 3 and Level 4 evaluations for programs of high strategic or operational importance, such as project management training.

Above all else, think specifically about why you are performing a particular evaluation - and write it down. This will help you focus on what’s important when resources get constrained or when someone comes up with a "great idea" that will require a lot of work.

Using the Kirkpatrick Model

So, how do you conduct an evaluation? The basic steps are:

1. Design the evaluation.

This first step involves designing survey questionnaires, formulas and spreadsheets for data entry.

2. Collect the data.

Here, you conduct the survey and focus group sessions and collect operational and business performance data.

3. Analyze the data.

Analysis entails converting the raw data into useful information on which you can make evaluative judgments.

4. Report the results.

In this final step, write and distribute the report and debrief client managers and other interested stakeholders.
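To make the analysis step more concrete, here is a minimal sketch of how steps 1 and 3 might be automated for Level 1 and Level 2 data. The rating scale, score ranges, and sample figures are illustrative assumptions, not part of the Kirkpatrick model itself; in practice this work often lives in a spreadsheet rather than code.

```python
# Minimal sketch of analyzing evaluation data (step 3).
# Assumes a 1-5 reaction rating scale and percentage test scores;
# all names and sample data below are hypothetical.

from statistics import mean

def average_reaction(ratings, positive_threshold=4):
    """Summarize Level 1 (Reaction): mean rating and % positive responses."""
    share_positive = sum(1 for r in ratings if r >= positive_threshold) / len(ratings)
    return {"mean_rating": round(mean(ratings), 2),
            "percent_positive": round(share_positive * 100)}

def learning_gain(pre_scores, post_scores):
    """Summarize Level 2 (Learning): average pre/post test improvement."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return {"mean_pre": mean(pre_scores),
            "mean_post": mean(post_scores),
            "mean_gain": mean(gains)}

# Hypothetical data from one workshop
print(average_reaction([5, 4, 4, 3, 5, 4]))
print(learning_gain(pre_scores=[55, 60, 48, 70],
                    post_scores=[78, 82, 66, 88]))
```

Keeping each level's summary in its own small function mirrors the separation between the levels: a report (step 4) can then present reaction and learning results side by side without mixing their data sources.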

In designing your evaluation, you will need to think about your data sources. Where should you get your data? Here are some ideas on appropriate data sources for each level.

Level 1 (Reaction)

• completed participant feedback questionnaire
• informal comments from participants
• focus group sessions with participants

Level 2 (Learning)

• pre- and post-test scores
• on-the-job assessments
• supervisor reports

Level 3 (Behavior)

• completed self-assessment questionnaire
• on-the-job observation
• reports from customers, peers and participant’s manager

Level 4 (Results)

• financial reports
• quality inspections
• interview with sales manager

When considering what sources of data you will use for your evaluation, think about the cost and time involved in collecting the data. Balance this against the accuracy of the source and the accuracy you actually need. Will existing sources suffice or will you need to collect new information?

Once you have completed your evaluation, distribute it to the people who need to read it. In deciding on your distribution list, refer to your previously stated reasons for conducting the evaluation. And of course, if there were lessons learned from the evaluation on how to make your training more effective, act on them!
