Intervention Evaluation
The intervention evaluation takes place once the intervention has been implemented. At this point, it is the job of the evaluation specialist to examine and consider the impact of the intervention on various aspects of the organization. Equally important is the evaluation specialist’s role as a sort of ambassador for the intervention to stakeholders. Stakeholders cannot be expected to intuit the many possible benefits of a successful intervention; therefore, the evaluation specialist can and should use positive information gleaned from the evaluation to inform stakeholders of what has been accomplished by and through the intervention.
The method used to evaluate performance interventions is the Kirkpatrick Evaluation Model, which assesses four levels of performance:
1. Reaction
For a training intervention, a questionnaire is used to gather data used to assess participants’ feelings or attitudes about the quality of the intervention, including clarity of objectives, quality of delivery, value, and results. Participants may reveal barriers to success in the intervention through a self-assessment of their own accomplishment.
Training is the intervention most often assessed for participant reaction; however, this does not mean that good information cannot be gleaned from a reaction assessment on other types of interventions. Reaction assessments can provide information about emerging trends that could impact the intervention, changing requirements that could impact objectives, and emerging problems with implementation efforts. Periodic feedback requests are also a means of gauging and garnering continued commitment to the intervention.
Reaction is measured through a five-step process (Rothwell, 2007):
1. Clarify what participant perceptions would be most useful to know.
2. Prepare a written questionnaire and have it vetted by a sample of participants and stakeholders. This is to gauge how well participants feel the questionnaire “asks the right questions” to get the most useful feedback.
3. Finalize the participant questionnaire and agree with stakeholders on specific groups of participants who should receive it, why and how.
4. Send out the questionnaire periodically, measure results and use them as feedback.
5. Use the results of the questionnaire to demonstrate the value of the intervention to stakeholders.
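As a rough illustration of step 4 (measuring questionnaire results), the tally below averages Likert-style reaction data in Python. The item names and the 1–5 scale are illustrative assumptions, not part of Rothwell’s process:

```python
# Hypothetical reaction-questionnaire tally (Kirkpatrick Level 1).
# Item names and the 1-5 Likert scale are illustrative assumptions.
from statistics import mean

def summarize_reactions(responses):
    """Average each questionnaire item across all participants."""
    items = responses[0].keys()
    return {item: round(mean(r[item] for r in responses), 2) for item in items}

responses = [
    {"clarity_of_objectives": 4, "quality_of_delivery": 5, "perceived_value": 3},
    {"clarity_of_objectives": 5, "quality_of_delivery": 4, "perceived_value": 4},
    {"clarity_of_objectives": 3, "quality_of_delivery": 4, "perceived_value": 4},
]
summary = summarize_reactions(responses)
```

Per-item averages like these are the kind of "results" step 4 asks the evaluator to measure and feed back to stakeholders.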
2. Learning
Learning-level evaluations seek to understand how participant behavior and attitudes were impacted by the intervention. These evaluations can be measured on the individual (learner) level as well as on the organizational (management) level.
In a training evaluation, learning can be most easily measured through the assessment built into the instructional design process. Done correctly, each learning objective consists of a behavior, a condition and a degree. A five-step process is used to evaluate how well each aspect of the learning objective has been met.
1. Refer to the terminal objective, which defines what learners should be doing as a result of the training;
2. Select the appropriate assessment tool (quiz, performance assessment, etc.);
3. Create assessments based on steps 1 and 2;
4. Test the assessment with a pilot group of participants;
5. Monitor quality of assessment results, making changes to the assessment as needed.
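The behavior/condition/degree structure of a learning objective lends itself to a simple pass/fail check. The sketch below is a hypothetical illustration; the example objective and the 80% threshold are assumptions, not drawn from the text:

```python
# Sketch of scoring a learner against a learning objective's "degree"
# criterion. The objective fields mirror the behavior/condition/degree
# structure described above; the example values are assumptions.
from dataclasses import dataclass

@dataclass
class LearningObjective:
    behavior: str    # observable action, e.g. "assemble the unit"
    condition: str   # circumstances, e.g. "given the standard toolkit"
    degree: float    # required proficiency, e.g. 0.80 = 80% correct

def objective_met(obj, correct, total):
    """Compare assessed performance with the objective's required degree."""
    return (correct / total) >= obj.degree

obj = LearningObjective("assemble the unit", "given the standard toolkit", 0.80)
result = objective_met(obj, correct=17, total=20)  # 85% meets the 80% degree
```

Encoding the degree explicitly makes step 5 (monitoring assessment quality) a matter of tracking how many learners meet each objective’s criterion over time.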
The obvious advantage of learning evaluations is that they provide information about learner progress. The shortcoming of this type of evaluation is that it does not ensure behavior and skill transfer to the job, which is the goal of any intervention.
3. Behavior
The behavior-level evaluation looks to see to what extent participant behavior changed as a direct result of the intervention. Because behavioral change is the goal of any performance improvement intervention, it is important to determine the extent of the impact. That is to say, we are looking for the type and degree of behavioral change and the number of people affected.
The first of two approaches commonly used is a questionnaire. Like a 360-degree review, this questionnaire seeks the perceptions of participants, their supervisors, and their subordinates on the effectiveness of the intervention. The second approach, called a structured behavioral observation, is exactly what it sounds like: HPI practitioners visit the place of intervention and observe participants in order to determine the behavioral impact of the intervention. This can be measured in seven steps:
1. Refer to terminal performance objective to determine what behavior should change;
2. Ask stakeholders to identify behaviors that should change;
3. Finalize the list of on-the-job behaviors that should change;
4. Finalize metric methods to be used;
5. Measure behavioral change that stems directly from the performance intervention;
6. Periodically feed back data to stakeholders (participants and leadership);
7. Take corrective action as necessary when behavioral change does not occur as expected. (Rothwell, 2007)
The strength of the behavior-level assessment is that it provides information not only on behavioral changes directly related to the intervention, but also on organizational culture changes that can occur as an indirect result. The performance team must be careful not to let stakeholders believe that interventions alone suffice to produce lasting behavioral change. Stakeholders must understand the importance of cultivating an organizational climate that supports growth and change in the desired direction.
4. Results
The results-level evaluation seeks to answer two key questions: (1) were the desired results obtained, and (2) was the problem solved from a business perspective (i.e., was there a financial gain)?
There are four steps involved in evaluating intervention results:
1. Determine all costs associated with the intervention;
2. Estimate the benefits received from the intervention;
3. Subtract the costs from the benefits;
4. Feed back bottom-line cost-benefit information to stakeholders and participants.
(Rothwell, 2007)
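The four-step results calculation reduces to simple arithmetic. The sketch below illustrates step 3; the cost and benefit figures are purely illustrative assumptions:

```python
# Illustrative cost-benefit calculation for the results level.
# The cost and benefit figures are assumptions, not real data.
def net_benefit(costs, benefits):
    """Step 3: subtract total costs from total estimated benefits."""
    return sum(benefits) - sum(costs)

costs = [12_000, 3_500, 1_500]         # e.g. design, delivery, materials
benefits = [20_000, 6_000]             # e.g. productivity gain, error reduction
result = net_benefit(costs, benefits)  # 26,000 - 17,000 = 9,000
```

The resulting bottom-line figure is what step 4 asks the evaluator to feed back to stakeholders and participants.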
As with the other levels of evaluation, it is worth reiterating the importance of reporting results back to stakeholders and participants. Sharing information, especially when it indicates imminent success, is critical to maintaining commitment. The intervention process can be quite long, and it should not be taken for granted that stakeholders and participants will intuit or understand the value of the process without regular and meaningful feedback.
References
Rossett, A. (1999). First Things Fast: A Handbook for Performance Analysis. San Francisco: Pfeiffer.
Rothwell, W. H. (2007). Human Performance Improvement (2nd ed.). Routledge.