Training Evaluation: New Techniques and Strategies


Abstract

The ineffectiveness of goal-based and systematized evaluation methods has led to the development of new training evaluation methods. This paper reviews some of the current approaches used in training evaluation.

It then presents two training evaluation methods developed in Europe and notes the features that make them superior. Additionally, it discusses the incorporation of technology into training evaluation. Finally, it provides a brief conclusion of the main points.

Introduction

Training evaluation is a fundamental component of the instructional design models that many organizations use. Evaluation instruments and methods help organizations determine the usefulness of instructional interventions.

However, despite the significance of instructional training programs, there is evidence that many training plans are inconsistent and ineffective. Possible reasons include insufficient time and budget allocations, inadequate expertise, and poor training schemes and tools.

Additionally, the complex nature of evaluation methods could be a cause of the inefficiency and inconsistency of the training evaluations that companies perform.

Training evaluations involve numerous, complex factors. These factors arise from the continuous and dynamic interaction of organizational characteristics, training goals, training situations, evaluation technologies and trainees (Blanchard & Thacker, 2010).

Evaluation serves several purposes at different levels, including assessing workers’ performance, reviewing training materials and measuring return on investment. Diverse departments and people within a business must cooperate to accomplish these functions.

Hence, this paper discusses new strategies and techniques of training evaluation that can result in successful training programs. Additionally, it examines the use of technology in training evaluations.

This paper also reviews literature on the training evaluation methods that many organizations have used previously. It aims to enhance the value of the training evaluation strategies that organizations use.

Review of the Literature

Many approaches used in training evaluation are based on systematic approaches to training design. Instructional System Development (ISD) typifies the commonly used methods of evaluation. ISD emerged in the United States after the Second World War. In the ISD model, training evaluation represents the last stage of a systematic approach to training.

Additionally, training evaluation is performed either to improve the intervention methods or to verify the effectiveness of training programs. Hence, training evaluations are categorized as formative or summative, depending on the purpose of the evaluation.

There are six broad approaches to training evaluation, namely goal-based, reactive, open, systems, expert review and quasi-legal evaluation.

Systematized and goal-based methods are the most widely used training evaluation methods in organizations. These two perspectives have also influenced the various frameworks developed for the assessment of training plans. Kirkpatrick developed the most influential goal-based training evaluation framework (Eseryel, 2002).

Notably, Kirkpatrick’s work resulted in the development of other evaluation models. The framework that Kirkpatrick developed is founded on four simple questions, which translate into four levels of training evaluation: reaction, learning, behavior and results.

Additionally, the systems approach has produced influential models (Kirkpatrick & Kirkpatrick, 2010). These include the Context, Input, Process and Product (CIPP) model; the Input, Process, Output and Outcome (IPO) model; and the Training Validation System (TVS) approach.

Goal-based models, like Kirkpatrick’s, can help organizations and practitioners think about the reasons for training evaluations, which range from technical and managerial to political. Nevertheless, these models offer little practical guidance for carrying out the assessment itself.

They do not present the steps required to realize the purposes of training evaluation. Additionally, they do not show how the results of an evaluation can be used to enhance the effectiveness of training programs.

Many organizations that use these models face numerous challenges in selecting and implementing evaluation methods (Eseryel, 2002). A common difficulty is determining whether the appropriate method is qualitative or quantitative.

Many trainers prefer these methods because of their simplicity. However, they do not usually consider the available resources and needs, the applicability of the model to the organization, or the usefulness of the results the model produces.

In contrast, systematized models are helpful for examining integrated conditions and contexts. However, they lack adequate granularity: systematized training evaluation methods do not show how they relate to the training designs in use.

Moreover, these models do not comprehensively describe the instruments used or the steps involved in training assessment. Finally, systematized evaluation strategies do not reflect the collaborative nature of training evaluation.

They do not specify the diverse tasks and roles that different people perform throughout the assessment process. Hence, new training evaluation strategies are required.

The comparison below summarizes the goal-based and systematized training evaluation methods in common use.

Kirkpatrick (1959)
Reaction: collection of information about workers’ reactions after training is completed.
Learning: examination of whether the training objectives have been achieved.
Behavior: examination of improvements in workers’ job performance after training.
Results: examination of the benefits and costs of a training program to the organization, including assessment of costs, productivity and quality of work.

CIPP Strategy (1987)
Context: collection of information to determine the needs that training must address and the objectives of the training program.
Input: determination of training strategies that can assist in achieving the desired results.
Process: assessment of the ways in which the training program is implemented.
Product: collection of information on the results the training program has achieved after it ends.

IPO Strategy (1990)
Input: assessment of system performance indicators such as trainee qualifications, availability of resources and the relevance of training.
Process: examination of the planning, aims, delivery and development of the training program.
Output: collection of the information that results from the adopted training program.
Outcomes: assessment of the longer-term results of the training program, focusing on performance, profitability, competitiveness and so on.

TVS Strategy (1994)
Situation: collection of data before training begins to determine the organization’s present level of performance and to establish the desired level.
Intervention: identification of the reasons for the gap between pre-training and desired performance, and assessment of whether training helps to close the gap.
Impact: evaluation of pre- and post-training information.
Value: measurement of post-training changes in quality, services, productivity, sales and workers’ performance in monetary terms.
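
Despite their different labels, the four frameworks move through comparable stages, and the correspondence can be captured in a simple data structure. The following minimal Python sketch is purely illustrative: the model names and stage labels are taken from the comparison above, while the layout and the compare_stage helper are assumptions.

```python
# Illustrative only: the model names and stage labels are taken from the
# comparison above; the data layout and helper function are assumptions.
EVALUATION_MODELS = {
    "Kirkpatrick (1959)": ["Reaction", "Learning", "Behavior", "Results"],
    "CIPP (1987)": ["Context", "Input", "Process", "Product"],
    "IPO (1990)": ["Input", "Process", "Output", "Outcomes"],
    "TVS (1994)": ["Situation", "Intervention", "Impact", "Value"],
}

def compare_stage(stage_index: int) -> dict:
    """Return the corresponding stage of every model (0 = first stage)."""
    return {model: stages[stage_index] for model, stages in EVALUATION_MODELS.items()}

# The opening stage of each framework: Reaction, Context, Input, Situation.
print(compare_stage(0))
```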

Implications of the Literature

The importance of training evaluation becomes more evident when the amounts of money that organizations spend on training and development are examined. American organizations spend up to $100 billion on training and development yearly. However, less than 10% of this investment results in performance improvement.

This points to the inconsistency and inefficiency of the training evaluation methods that companies use. According to the American Society for Training and Development (ASTD), many businesses only test workers’ reactions to training programs (Eseryel, 2002).

Many organizations evaluate training programs only at levels one and two. In its study, ASTD found that 93% of the organizations surveyed evaluated training at level one, while 52% evaluated training programs at level two (Eseryel, 2002).

Fewer organizations evaluated training programs at level three (31%) and level four (28%). Based on the study, it is clear that the evaluation strategies many organizations use involve only superficial analysis.

Evaluation processes used in Europe are similar to those used in America. The European Commission conducted surveys that explored the evaluation methods applied in Europe. The first survey project was Promoting Added Value through Evaluation (PAVE).

The PAVE project studied small, medium and large organizations that had shown a commitment to training and evaluation, and it developed PAVE as a new evaluation strategy. The results showed that formative and summative evaluation strategies are not broadly used (Eseryel, 2002).

However, according to the PAVE survey, evaluations of the context and immediate needs for training are broadly used (Eseryel, 2002). Additionally, managers typically perform training evaluation responsibilities. Surveys and informal feedback mechanisms are the most widely used evaluation methods.

Many organizations assess the impact of training on workers’ performance but do not emphasize its impact on the organization. Additionally, training evaluation strategies tend to serve operational rather than strategic purposes.

Information obtained through the evaluation of training programs is used to provide feedback to workers rather than to revise training programs. Organizations’ sizes also influence evaluation processes; in many organizations, managers are responsible for all evaluation activities.

The second survey project was Advanced Design Approaches for Personalized Training-Interactive Tools (ADAPTIT). ADAPTIT provides design techniques and tools that guide trainers, and it is based on standardization principles and cognitive science.

Hence, it is superior to Kirkpatrick’s model and the systematized models because it provides the tools, designs and required standards. The study also found that, in practice, organizations make no distinction between summative and formative evaluation (Eseryel, 2002).

Additionally, the study found that previous evaluation techniques focused on workers’ performance and that evaluation results were not used to revise training techniques. Hence, ADAPTIT incorporates evaluation software to enhance training evaluation methods.

The use of technology to automate training evaluation processes is also a new strategy and technique. Training evaluation should have a persistent and substantial effect on the design of training plans, on workers and on resources.

Hence, training designers, training managers and HR managers must be involved in training programs as evaluators. External evaluators normally enhance the validity of evaluation results, but budget and time constraints limit their practicability.

Hence, organizations have developed new internal, cost-effective and convenient training evaluation methods (Eseryel, 2002). However, internal evaluation faces the challenge of inadequate evaluation expertise.

Moreover, internal evaluators may be biased. Hence, organizations can use automated evaluation systems to evaluate training or instructional programs.

Evaluation systems can enhance productivity, standardize evaluation processes and eliminate bias. Furthermore, they can improve the effectiveness of the training programs an organization uses.

Automated training evaluation systems perform two essential activities: they automate the preparation process through expert assistance, and they automate data compilation.

In the preparation stage, a predefined model enables the evaluator to plan a suitable evaluation. The system asks for key information essential to the evaluation process and then offers recommendations on possible evaluation strategies.

The key information an evaluation system may require includes the goals of the evaluation, the evaluation type, the size of the evaluated group, the level of evaluation and the instructional objectives (Eseryel, 2002).
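
To make the preparation step concrete, the sketch below shows how such a system might map these key inputs to a suggested evaluation design. This is a minimal, hypothetical rule-based example: the inputs echo those listed above, but the field names, rules and outputs are assumptions, not a description of any actual evaluation system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the preparation stage of an automated training
# evaluation system: the key inputs named above (goals, evaluation type,
# group size, level) are mapped to a suggested evaluation design.
# The field names, rules and thresholds below are illustrative assumptions.

@dataclass
class EvaluationRequest:
    goal: str              # e.g. "revise_program" or "judge_effectiveness"
    evaluation_type: str   # "formative" or "summative"
    group_size: int        # number of trainees to be evaluated
    level: int             # 1=reaction, 2=learning, 3=behavior, 4=results

def recommend_design(req: EvaluationRequest) -> dict:
    """Suggest data-collection and analysis methods from the key inputs."""
    # Large groups favor standardized, quantitative instruments.
    if req.group_size > 50:
        collection, analysis = "survey", "quantitative (descriptive statistics)"
    else:
        collection, analysis = "interviews and observation", "qualitative (thematic coding)"
    # Levels 3 and 4 need pre- and post-training comparison data.
    design = "pre/post comparison" if req.level >= 3 else "post-training only"
    # Formative results go back to designers; summative results go to management.
    report = ("revision memo to designers" if req.evaluation_type == "formative"
              else "summary report to management")
    return {"design": design, "data_collection": collection,
            "analysis": analysis, "report_format": report}

print(recommend_design(EvaluationRequest("revise_program", "formative", 120, 3)))
```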

Automated training evaluation systems can then provide guidance on appropriate evaluation designs, data collection methods, report formats, data analysis techniques and distribution strategies. The instructions that these systems provide are broad, flexible guidelines rather than rigid prescriptions.

They leave room for human judgment, which is essential to the success of automated training evaluation systems. Evaluation systems can also automate data collection to improve their efficiency and functionality.

The data compilation functions of such systems can include rapid scoring of evaluation responses, automated interpretation of reports and the provision of prompt advice.

These capabilities enable an organization to identify the strengths and weaknesses of the training program it uses (Eseryel, 2002). Hence, technology assists in improving current evaluation processes.
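
The data-compilation side can be sketched in a similarly hedged way. The example below aggregates trainee ratings by evaluation level and flags apparent strengths and weaknesses; the threshold, field names and sample data are invented for illustration.

```python
from statistics import mean

# Hypothetical data-compilation step: average trainee ratings per evaluation
# level and flag apparent strengths and weaknesses of a training program.
# The threshold, field names and sample data are invented for illustration.

def summarize(responses: dict[str, list[float]], threshold: float = 3.5) -> dict:
    """Average 1-5 ratings per level; levels below the threshold are weaknesses."""
    summary = {}
    for level, scores in responses.items():
        avg = mean(scores)
        summary[level] = {"mean": round(avg, 2),
                          "flag": "strength" if avg >= threshold else "weakness"}
    return summary

sample = {
    "reaction": [4.5, 4.0, 4.2],   # post-course satisfaction ratings
    "learning": [3.1, 2.8, 3.4],   # quiz results rescaled to a 1-5 range
    "behavior": [3.9, 4.1, 3.6],   # supervisor ratings three months later
}
print(summarize(sample))
# A low "learning" mean is the kind of signal that would prompt the system
# to recommend revising the training materials, as described above.
```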

Automated training assessment and data collection systems provide integrated solutions for organizations and training practitioners. They also provide advice on how organizations can revise their training programs and materials.

The recommendations that evaluation systems make on modifying training plans are based on the assessment responses obtained. Hence, evaluation information, workers’ performance and revision requirements can all be tied to the training program an organization has developed.

ADAPTIT is an example of an evaluation strategy that involves the use of technology in training evaluation and provides an integrated training evaluation solution to organizations.

Conclusions

The evaluation of training programs has to meet certain goals and requirements. These goals relate directly to improving an organization’s overall performance and workers’ productivity. Hence, the evaluation of training programs assesses whether these goals have been met (Blanchard & Thacker, 2010).

Practitioners and organizations have used various evaluation strategies, notably goal-based and systematized strategies. However, these strategies are not particularly effective: they do not provide the designs and tools required for comprehensive evaluation of training programs.

Therefore, new strategies have been developed. The aim of this paper was to discuss new strategies and methods of training evaluation that can result in successful training programs. It has presented and discussed PAVE and ADAPTIT as new training evaluation strategies.

Additionally, it discussed the use of automated training evaluation systems as a new technique. Compared to previous methods such as goal-based and systematized evaluation, these new strategies offer organizations and practitioners integrated training evaluation solutions.

Additionally, the use of automated systems in training evaluation improves the efficiency and effectiveness of training programs and removes the possibility of bias in evaluation processes. Consequently, the development of these new methods has enhanced the training evaluations that organizations and practitioners perform.

References

Blanchard, N., & Thacker, W. (2010). Effective Training: Systems, Strategies, and Practices. Upper Saddle River, New Jersey: Prentice Hall.

Eseryel, D. (2002). Approaches to Evaluation of Training: Theory & Practice. Web.

Kirkpatrick, L., & Kirkpatrick, D. (2010). Evaluating training programs: The four levels. Sydney, Australia. Web.
