Accident Prevention Models Case Study

Heinrich’s Domino Model

The model originated in the work of H. W. Heinrich, who first presented it in his text Industrial Accident Prevention. Heinrich believed that the main cause of accidents was people and that accident control was a managerial problem (Hollnagel & Goteman, 2004). The domino model consists of five phases or steps:

  1. Lack of control. A managerial issue: the failure to control a situation and its impact on various factors.
  2. Basic cause. Human, environmental, and job-related factors identified as causes of an accident (Heinrich, Petersen, Roos, & Hazlett, 1980).
  3. Immediate cause. Conditions regarded as symptoms of the basic causes.
  4. Incident. Contact with the dangerous factor and its immediate result.
  5. Injury (personal) or damage (property). Both physical harm and material damage are included.
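The strictly linear logic of the five steps above, and Heinrich's central intervention (remove one domino and the cascade stops), can be illustrated with a minimal Python sketch. The stage names follow the list above; the function and its behavior are an illustration of the metaphor, not part of Heinrich's own notation.

```python
# The five dominoes, in the order given in the text above.
DOMINOES = [
    "lack of control",
    "basic cause",
    "immediate cause",
    "incident",
    "injury or damage",
]

def chain_outcome(removed=None):
    """Return the stages that actually occur.

    The chain is strictly linear: each domino knocks over the next
    unless an intervention removes one, which halts the cascade.
    """
    reached = []
    for stage in DOMINOES:
        if stage == removed:
            break  # removing a domino stops the propagation
        reached.append(stage)
    return reached

# Unchecked, the cascade runs all the way to injury or damage:
assert chain_outcome()[-1] == "injury or damage"
# Addressing the immediate cause prevents both the incident and the injury:
assert "incident" not in chain_outcome(removed="immediate cause")
```

The sketch also makes the model's main weakness visible: a single linear list cannot represent the complex, interacting factors discussed below.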

The model's advantage is that it was revolutionary for its time, and subsequent models often used it as a basis (Seo, 2005). It is easy to understand and apply, and it can help managers address the causes of accidents directly. Its disadvantages are its oversimplicity, its narrow focus on human error, and its assumption of linear causality: accidents often cannot be explained by a single sequence of events, because more complex, interrelated factors contribute to them (Hamid, Majid, & Singh, 2008). Furthermore, the original model developed in the 1930s emphasized social environment and ancestry as causes of accidents, which shifts the blame onto the worker's background (Qureshi, Ashraf, & Amer, 2007).

Swiss Cheese Model

The Swiss Cheese Model was developed by Reason (2008), who demonstrated how the alignment of failures across layers of defense leads to accidents. The model describes two types of errors: active failures and latent conditions. Active failures usually have immediate effects, while the effects of latent conditions are triggered by other factors, normally later (Reason, 2004; Reason, 2008). Defenses and safeguards are supposed to contain the effects of latent conditions, but if they fail, active failures trigger accidents for which latent conditions have laid the foundation. The model is usually illustrated as slices of cheese in which each hole represents a failure in a domain; when the holes line up, an accident occurs (see Figure 1):

Figure 1. Swiss Cheese Model (Perneger, 2005).
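The "holes lining up" metaphor in Figure 1 can be sketched in a few lines of Python. The layer names and hole positions below are illustrative choices of mine, not Reason's; the point is only that an accident trajectory must find an open hole in every defensive layer at the same position.

```python
def trajectory_penetrates(layers, position):
    """True if `position` is an open hole in every layer (i.e., an accident)."""
    return all(position in holes for holes in layers)

# Each layer (cheese slice) is the set of positions where its defense has failed.
layers = [
    {2, 5, 7},   # organizational decisions (latent conditions)
    {1, 5},      # supervision
    {5, 9},      # frontline unsafe acts (active failures)
]

assert trajectory_penetrates(layers, 5)      # holes align: an accident occurs
assert not trajectory_penetrates(layers, 2)  # blocked by an intact later layer
```

Note that most holes (failures) in the example cause no accident at all, which mirrors the model's emphasis on latent conditions lying dormant until other failures line up with them.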

The model's advantage is that it recognizes the influence of multiple factors (personal, organizational, and situational) on an accident. Strategic decisions, corporate culture, and workplace issues (insufficient staffing, production pressures, etc.) affect the probability of an accident (Jeffs, Berta, Lingard, & Baker, 2012). Its disadvantage is the assumption that latent conditions play a more important role than active failures, when, in reality, active failures are the triggers and can be much more dangerous (Jeffs et al., 2012). Use of the model can also shift the investigator's attention to one or several actors (e.g., the worker or the manager) rather than to the complete picture, and its causal structure encourages the assumption that accidents arise from a linear chain of factors.

Systems-Theoretic Accident Model and Processes

The Systems-Theoretic Accident Model and Processes (STAMP) is grounded in systems theory, according to which the socio-technical structure of a system depends on the levels of control exercised by its operators (human and automated). Accidents can be caused by "component failures, dysfunctional interactions among components, and unhandled environmental disturbances at a lower level" (Leveson, 2004a, p. 25). Three basic components of STAMP, "constraints, hierarchical levels of control, and process models," help create a "classification of control flaws" related to accidents (Leveson, 2004a, p. 26). The focus of STAMP is not on events or accidents but on constraints and their role in safety management (Kazaras, Kirytopoulos, & Rentizelas, 2012; Stukus, 2017). According to Leveson (2004a), accidents are caused by inadequate enforcement of constraints at each level of the system. STAMP therefore asks which constraints were violated and why the system was unable to enforce them (Ouyang, Hong, Yu, & Fei, 2010; Salmon, Cornelissen, & Trotter, 2012). Both social and organizational factors are taken into consideration, as well as software failures and human errors that can be influenced by context, process models, goals, etc. (Leveson, 2004a).
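STAMP's hierarchical control structure lends itself to a simple data-structure sketch: each level imposes safety constraints on the level below, and an analysis walks the hierarchy collecting constraints whose enforcement has failed. The level names and constraints below are hypothetical examples of mine (loosely anticipating the case study), not Leveson's notation.

```python
from dataclasses import dataclass

@dataclass
class ControlLevel:
    name: str
    constraints: dict[str, bool]  # constraint -> currently enforced?
    subordinate: "ControlLevel | None" = None

    def unenforced(self):
        """Walk the hierarchy top-down, collecting violated constraints."""
        flaws = [(self.name, c) for c, ok in self.constraints.items() if not ok]
        if self.subordinate:
            flaws += self.subordinate.unenforced()
        return flaws

# A two-level structure: management controls the operating level.
structure = ControlLevel(
    "management",
    {"supervise maintenance": False},
    ControlLevel("operations", {"wear PPE": False, "report hazards": True}),
)

assert structure.unenforced() == [
    ("management", "supervise maintenance"),
    ("operations", "wear PPE"),
]
```

The sketch captures the key shift STAMP makes: the analyst's question is not "which event came first?" but "which constraint, at which level, went unenforced?"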

The model's advantage is its attention to several variables and the interrelations between them (emergent properties, hierarchy, communication and control, and process models). It emphasizes that high reliability is not always predictive of safety, that reliable software can still be unsafe, that operator error can be produced by the operator's environment, and that systems tend to migrate toward higher risk, although this risk can be mitigated by appropriate system design (Laracy & Leveson, 2007; Leveson, 2004a). Underwood and Waterson (2012) point out that the model's disadvantages are the effort required to apply STAMP and the fact that often only experienced users can apply it correctly to an organizational environment. Furthermore, managers who want to use the model need to be aware of all the specifics of the system levels at their organizations to conduct a correct analysis (Qureshi, 2007).

Analysis of the Royal Australian Air Force Deseal/Reseal Incident

This section aims to demonstrate how constraint failures at different levels of control resulted in the described accident. The STAMP model will be used to address safety issues at three levels: maintenance, management, and company. The societal level will also be discussed.

First, it is important to understand how the design of the system resulted in the non-enforcement of constraints in the given case study. At the employees' (operating) level, several constraints were not enforced: the protective suits given to employees were semi-permeable to the chemicals they used, and employees preferred not to wear the suits when it was hot, i.e., the constraint "wear protective suits" failed at the lower level. Employees were also unable to wear the suits because the ventilation system worked erratically. Medics and doctors treated maintenance workers with medication only, even when the latter sought medical assistance after passing out. Thus, hazards remained unidentified, and there were no appropriate control actions for hazard identification (Leveson, 2004a).

Socio-technical levels of control also failed. The workers' injuries stemmed from flaws in the design of the control structure (poor-quality PPE, lack of proper medical assistance, lack of a suitable working environment). Poor-quality PPE, inadequate medical assistance, and a problematic environment can be regarded as technical-level constraint failures, while the workers' decision to ignore the PPE constraint is a social level of control that failed because the context (the hot environment) influenced their decision not to wear protective equipment.

Furthermore, Leveson (2004a) points out that the asynchronous evolution of parts of the safety-control structure can result in its degradation: when one part of the system changes but others do not, the effects of this change on other levels can be handled inadequately or missed altogether (Wong, 2004). In the case study, changes at other levels, such as the managerial and organizational ones, resulted in a lack of suitable constraints and process models at the maintenance level. Maintenance workers had no specific process model they could follow (e.g., put on PPE – confirm the air conditioning is working – complete the assigned work – report issues with the air conditioning). Hopkins (2005) states that none of the health problems or problems with PPE were reported to senior officers through incident reporting systems, which prevented managers from investigating the problem. However, issues at the maintenance level were consequences, not causes, of the overall safety system failure. It will be demonstrated below that the non-enforced constraints at this level were caused by failed constraint control at other levels.

The priority of operations over logistics and the production pressures were products of the organizational culture. The management level was therefore focused on meeting production imperatives, which left it unable to address equipment problems that could undermine the Air Force's goals. Management also had few resources for effective oversight because of cost-cutting and the engineers' limited experience. Thus, the lack of both a process model (a set of actions that inexperienced managers could follow to execute effective management) and clearly defined constraints resulted in overly broad responsibility at the managerial level, leaving no room for managing maintenance employees and their health issues. Because of the "can-do" attitude, under which the maintenance level disregarded safety procedures, and because of cost-cutting, there were not enough supervisors at the managerial level to enforce the constraints related to protective equipment.

Another problem related to the hierarchical levels of control was executive supervisors' inability to engage in micro-management. Managers could not discuss issues directly with maintenance workers because of a common managerial strategy under which only failures that lower levels cannot handle are reported to executives. The resulting lack of any action on constraint failures at the maintenance level stemmed from the absence of a process model for reporting issues and from managers' assumption that any crucial cases would be reported to them, so the lower levels were ignored.

The prioritization of platforms over people also skewed the hierarchical levels of control: no specific effort was made either to address hazards at lower levels or to ensure that they did not affect the maintenance level. Instead, process models for handling hazardous materials were designed with the platforms (rather than people) in mind, and managers placed the emphasis on PPE usage alone.

The last problem was the specific managerial power structure at the RAAF. The military command system prevented the development of a constraint related to employee empowerment and of a corresponding process model that workers could use to draw engineers' or managers' attention to the hazardous environment. As no such protocol was developed, and as employee resistance was effectively prohibited by the military command system, the hierarchy of power and control at the RAAF left employees unable to protect themselves from such situations.

The major cause of the failed constraints at lower levels was an organizational culture that prioritized air safety over ground safety. No ground safety agency existed at the RAAF because air safety was the priority. The design of the maintenance processes was developed by a specialist who considered engineering problems, whereas maintenance hazards were ignored. Thus, flaws in the design process produced inconsistencies in the process model: employees were expected to use whatever protective equipment they were given, but the effectiveness of that equipment was never assessed (Hopkins, 2005). The inadequate feedback from maintenance workers described earlier also helped sustain this culture, in which little attention was paid to ground safety because no fundamental process for hazard recognition had been developed.

Another missing control action was a proper reporting system for maintenance workers: because of flaws in communication with management and the prioritization of platforms and air safety over people, no reporting culture existed. Leveson (2004b) labels such insufficient enforcement of constraints "inadequate coordination among controllers and decision-makers" (p. 70). The company was able to create and support a culture of reporting in areas related to air safety but failed to extend this culture to the ground level.

Specific problems were also related to public concerns that shaped the organizational culture. Air safety was prioritized because of society's concern over the safety of aircraft and the far greater media coverage of accidents involving military jets than of ground work. As accidents and incidents on the ground were rarely covered by the media, the company likewise had no incentive to prioritize the safety of maintenance workers over that of flying officers. Thus, control actions for identified ground hazards remained inappropriate or ineffective, where they existed at all.

Comparison to Hopkins’ Analysis

The STAMP model produced the same findings as Hopkins' analysis but placed greater emphasis on how the hierarchical levels of control were the actual cause of the incident at the lower level. Hopkins' analysis highlights issues at various levels, most of them identified at the organizational level, whereas the STAMP model demonstrates how issues at higher levels (process design, societal specifics, and the organizational culture influenced by both) lead to inadequate process models and control actions at subsequent levels, eventually producing hazardous behavior and the corresponding accident(s) (Hopkins, 2005). The STAMP model also demonstrates that both social (the can-do attitude, the prioritization of platforms over people) and technical (ineffective equipment, production pressures) aspects of the system made the accident possible (Leveson, Daouk, Dulac, & Marais, 2003).

The lack of adequate coordination and feedback is also recognized by STAMP: coordination between maintenance, management, and higher-order executives was impossible because of the belief that micro-management was not crucial in this case. The military command system likewise interfered with appropriate feedback between the various parts and levels of the system, leaving employees unable to report identified hazards (Hopkins, 2005). The missing reporting culture is identified by STAMP as an insufficient process model that maintenance workers could not use to report identified hazards.

References

Hamid, A. R. A., Majid, M. Z. A., & Singh, B. (2008). Causes of accidents at construction sites. Malaysian Journal of Civil Engineering, 20(2), 242-259.

Heinrich, H. W., Petersen, D. C., Roos, N. R., & Hazlett, S. (1980). Industrial accident prevention: A safety management approach. New York, NY: McGraw-Hill Companies.

Hollnagel, E., & Goteman, O. (2004). The functional resonance accident model. In Proceedings of cognitive system engineering in a process plant (pp. 155-161). Linköping, Sweden: University of Linköping.

Hopkins, A. (2005). Safety, culture, and risk: The organizational causes of disasters. Sydney, Australia: CCH Australia.

Jeffs, L., Berta, W., Lingard, L., & Baker, G. R. (2012). Learning from near misses: From quick fixes to closing off the Swiss-cheese holes. BMJ Qual Saf, 21(4), 287-294.

Kazaras, K., Kirytopoulos, K., & Rentizelas, A. (2012). Introducing the STAMP method in road tunnel safety assessment. Safety Science, 50(9), 1806-1817.

Laracy, J. R., & Leveson, N. G. (2007). Apply STAMP to critical infrastructure protection. In 2007 IEEE conference on technologies for homeland security (pp. 215-220). New York, NY: IEEE.

Leveson, N. (2004a). A new accident model for engineering safer systems. Safety Science, 42(4), 237-270.

Leveson, N. G. (2004b). A systems-theoretic approach to safety in software-intensive systems. IEEE Transactions on Dependable and Secure Computing, 1(1), 66-86.

Leveson, N. G., Daouk, M., Dulac, N., & Marais, K. (2003). Applying STAMP in accident analysis. IRIA, 2, 177-198.

Ouyang, M., Hong, L., Yu, M. H., & Fei, Q. (2010). STAMP-based analysis on the railway accident and accident spreading: Taking the China–Jiaoji railway accident for example. Safety Science, 48(5), 544-555.

Perneger, T. V. (2005). The Swiss cheese model of safety incidents: Are there holes in the metaphor? BMC Health Services Research, 5(1), 71-78.

Qureshi, Z. H. (2007). A review of accident modeling approaches for complex socio-technical systems. In Proceedings of the twelfth Australian workshop on safety-critical systems and software and safety-related programmable systems (pp. 47-59). Canberra, Australia: Australian Computer Society, Inc.

Qureshi, Z. H., Ashraf, M. A., & Amer, Y. (2007). Modeling industrial safety: A sociotechnical systems perspective. In Industrial engineering and engineering management, 2007 IEEE international conference on (pp. 1883-1887). New York, NY: IEEE.

Reason, J. (2004). Beyond the organizational accident: The need for “error wisdom” on the frontline. BMJ Quality & Safety, 13(2), 28-33.

Reason, J. (2008). The human contribution: Unsafe acts, accidents and heroic recoveries. Boca Raton, FL: CRC Press.

Salmon, P. M., Cornelissen, M., & Trotter, M. J. (2012). Systems-based accident analysis methods: A comparison of Accimap, HFACS, and STAMP. Safety Science, 50(4), 1158-1170.

Seo, D. C. (2005). An explicative model of unsafe work behavior. Safety Science, 43(3), 187-211.

Stukus, P. D. (2017). Web.

Underwood, P., & Waterson, P. (2012). A critical review of the STAMP, FRAM and Accimap systemic accident analysis models. In N. Stanton (Ed.), Advances in human aspects of road and rail transportation (pp. 385-394). Boca Raton, FL: CRC Press.

Wong, B. (2004). Web.
