Depending on the goals set by the project designers, the areas of interest, and the specificities of the involved audiences, different types of evaluation can be used. According to Piskurich (2015), four major types of evaluation can be defined based on the areas they are intended to analyze: reaction, learning, behavior (application), and results. At this point, it is important to understand that evaluation is commonly used throughout the instructional design process to ensure the consistency of knowledge delivery. However, the term is more readily associated with the evaluation that follows the process, either immediately after its completion or after a certain period.
The former is expected to illustrate the observable effects of the project and match them with the predicted outcomes. The latter is used to measure the long-term effects and estimate the sustainability of the effect (Rothwell & Kazanas, 2008). It should also be clarified that both the goals of the evaluation and the choice of methods depend on its type and level.
Reaction
The first level of evaluation corresponds to the reaction of the participants to the project. At this level, the evaluator wants to know what the participants think of the intervention. This can be done for a variety of reasons, but the most popular ones revolve around the possibility of improvement (Reigeluth, 2013). For instance, the goal of the project’s educator may be to identify the weaknesses and strengths of the intervention before it can be administered in a different setting. Alternatively, the results of the evaluation can later be used for developing unrelated projects that incorporate some of the same tools.
On other occasions, an unrelated party (e.g., the employer) wants to find out whether the audience accepts the selected educator. Finally, in some organizations, the perception of the trainees must be used to report the outcomes (Brown & Green, 2015). The most common approaches to measuring reactions are surveys, questionnaires, and interviews administered after the project’s completion. According to Piskurich (2015), the data collection instruments need to be specific and focused enough to exclude ambiguity and eliminate redundant information that would dilute the clarity of results.
Depending on the intentions of the researcher, the level of detail can be adjusted by introducing multiple-choice questions, open-ended questions, and adding an opportunity to provide details, both in written form and orally (during the interviews). Finally, it should be acknowledged that this type of assessment lends itself well to technology-based delivery, with online surveys being among its most widely recognized implementations. Such a format offers significant time and resource savings and can be considered more convenient in some settings.
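As an illustration of how such reaction data might be summarized, consider the following minimal Python sketch, which averages per-question ratings across participants. The question labels and the 1-5 rating scale are illustrative assumptions, not a prescribed instrument from any of the cited sources.

```python
# A minimal sketch of summarizing level 1 (reaction) survey data.
# The question labels and the 1-5 Likert scale are illustrative
# assumptions, not a prescribed instrument.
from statistics import mean

# Each dict maps a survey question to one participant's rating.
responses = [
    {"content_relevance": 4, "pace": 3, "facilitator_clarity": 5},
    {"content_relevance": 5, "pace": 4, "facilitator_clarity": 4},
    {"content_relevance": 3, "pace": 2, "facilitator_clarity": 4},
]

# Average each item across participants to surface weak spots
# (e.g., a low mean for "pace" flags an area to adjust).
for question in responses[0]:
    ratings = [r[question] for r in responses]
    print(f"{question}: mean={mean(ratings):.2f}, n={len(ratings)}")
```

Even a simple tally of this kind makes the strengths and weaknesses of an intervention visible at a glance, which is the primary purpose of level 1 evaluation.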
Learning
The second level of evaluation is strongly associated with academic activities, although it is equally applicable to a wide variety of settings. In the simplest terms, this level evaluates how well the delivered information was received. More specifically, the goal of such evaluation is to understand which aspects of the intervention were received well and which failed to meet the planned degree of success.
Another opportunity offered by the learning type of evaluation is to obtain insights into the individual performance of learners. When processed, this information can illustrate the gaps in the intervention and help to predict further discrepancies in performance (Carr-Chellman, 2015). The data can also be aggregated to produce an overall picture of how the information was received. The approaches to measuring learning success vary depending on the specific goals of the researcher. They may include questionnaires, exercises, and tasks that test the ability of the learners to apply the received knowledge to a setting different from that used during the project.
However, regardless of the chosen method, the evaluation tools must comply with certain criteria. First, they must demonstrate a clear connection to the area of interest of the project. Second, they must be properly formulated to exclude misinterpretation by the participants (Brown & Green, 2015). Finally, the complexity of the tasks used in the testing procedure must adequately represent the goals set by the facilitators of the intervention.
Otherwise, there is a possibility that the collected data, while consistent across the group, does not represent the expected outcome correctly. Finally, it is important to understand that the tests must reconstruct the real-life situations where the knowledge is to be applied. To ensure this, the questions are usually reviewed to check how closely they mirror the situations used throughout the project (and, therefore, whether mere repetition would yield a positive result without the need to apply critical thinking). Such a review is an essential part of the process and is to be done before administering the test to participants (Piskurich, 2015).
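As a sketch of how level 2 results can be aggregated to expose the content gaps discussed above, the following Python fragment scores a short test against an answer key and reports per-topic success rates. The topic tags, item identifiers, and answers are illustrative assumptions.

```python
# A minimal sketch of aggregating level 2 (learning) test results
# by topic. The topics, items, and answers are illustrative.
from collections import defaultdict

# answer_key maps item id -> (topic, correct answer).
answer_key = {
    "q1": ("safety", "b"), "q2": ("safety", "d"),
    "q3": ("procedure", "a"), "q4": ("procedure", "c"),
}

# One dict of item -> answer per learner.
submissions = [
    {"q1": "b", "q2": "d", "q3": "a", "q4": "b"},
    {"q1": "b", "q2": "a", "q3": "a", "q4": "c"},
    {"q1": "c", "q2": "d", "q3": "b", "q4": "b"},
]

correct = defaultdict(int)
attempts = defaultdict(int)
for sub in submissions:
    for item, (topic, key) in answer_key.items():
        attempts[topic] += 1
        correct[topic] += (sub.get(item) == key)

# A low per-topic rate points to a gap in how that content
# was delivered, rather than to individual learners alone.
for topic in attempts:
    print(f"{topic}: {correct[topic] / attempts[topic]:.0%} correct")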
Application
The third level of evaluation is aimed at determining whether the learning resulted in the desired effect. In other words, the evaluator's goal would be to find out whether the results of the intervention can be traced in the real-life setting and whether these effects lead to the planned improvement (Piskurich, 2015). To some extent, this definition suggests the existence of similarities with the second-level evaluation.
The important difference is that while learning evaluation simulates real scenarios in an attempt to predict the likely results, the application evaluation is based on the information obtained from the actual setting after the completion of the project. This type is much more difficult to perform for several reasons. First, to conclusively tie the changes in performance to the intervention in question, it is necessary to collect data both before and after the intervention and to perform a relatively complex analysis (Reigeluth, 2013).
In such a situation, certain important factors may remain overlooked, or their significance may be misjudged. Besides, such an approach requires more time and resources, which undermines its popularity (Piskurich, 2015). The data collection is usually performed through the same means as the reaction evaluation but can be enhanced by observations. The data is then processed and correlated with the changes in the outcomes to understand whether the intervention produced the desired improvement.
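A minimal sketch of the before-and-after comparison described here is given below. It assumes paired measurements of an illustrative metric for the same participants and applies a paired t-test (one of several possible analyses, and one that requires the scipy library); as noted above, statistical significance alone does not rule out confounding factors.

```python
# A minimal sketch of a level 3 (application) comparison: paired
# before/after performance measurements for the same participants.
# The metric (tasks completed per day) and the data are illustrative.
from scipy import stats

before = [12, 15, 11, 14, 13, 10, 16, 12]
after = [14, 17, 12, 15, 16, 13, 18, 14]

# A paired t-test asks whether the mean change is distinguishable
# from zero; it does not by itself attribute the change to the
# intervention, which is why level 3 designs also lean on
# observation and contextual information.
result = stats.ttest_rel(after, before)
mean_change = sum(a - b for a, b in zip(after, before)) / len(before)
print(f"mean change: {mean_change:+.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```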
Results
The fourth and final level measures the impact the changes in learners' behavior have on organizational performance. This type usually aims at a specific performance indicator within the organization, such as customer satisfaction, regulatory compliance, or profit per employee (Piskurich, 2015). By this point, the instructional design process is expected to have established a definitive connection between the intervention and the chosen performance indicator. Thus, the evaluation can be performed using the standard organizational tools generally used in the segment. It should be noted that level 4 evaluation shares some characteristics with level 3 and is thus equally challenging and time-consuming.
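For illustration only, the following sketch computes the change in hypothetical organizational indicators around an intervention; the indicators and figures are invented for the example. As the paragraph above notes, such movement is meaningful only if the design process established a link between the intervention and the indicator.

```python
# A minimal sketch of a level 4 (results) check: the change in
# organizational indicators around the intervention. The indicator
# names and figures are illustrative assumptions.
kpi_before = {"customer_satisfaction": 3.8, "profit_per_employee": 42_000}
kpi_after = {"customer_satisfaction": 4.2, "profit_per_employee": 45_500}

for name, before in kpi_before.items():
    after = kpi_after[name]
    change = (after - before) / before
    # Movement in an indicator supports the intervention's impact
    # only where the design phase tied that indicator to it.
    print(f"{name}: {before} -> {after} ({change:+.1%})")
```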
Conclusion
Despite the seeming complexity of the evaluation process, once it is properly incorporated into an instructional design process, it offers numerous advantages, such as consistent monitoring, the possibility of timely adjustment, and an opportunity to predict future performance. Thus, it should be considered a crucial element of instructional design projects.
References
Brown, A. H., & Green, T. D. (2015). The essentials of instructional design: Connecting fundamental principles with process and practice (3rd ed.). New York, NY: Routledge.
Carr-Chellman, A. A. (2015). Instructional design for teachers: Improving classroom practice (2nd ed.). New York, NY: Routledge.
Piskurich, G. M. (2015). Rapid instructional design: Learning ID fast and right (3rd ed.). San Francisco, CA: John Wiley & Sons.
Reigeluth, C. M. (Ed.). (2013). Instructional-design theories and models: A new paradigm of instructional theory. Mahwah, NJ: Routledge.
Rothwell, W. J., & Kazanas, H. C. (2008). Mastering the instructional design process: A systematic approach (4th ed.). San Francisco, CA: John Wiley & Sons.