The Aircraft Accidents and Incidents Research Paper


Introduction

This assignment consists of short essays addressing questions on aircraft accident and incident reporting. The essays explore several aircraft incidents and accidents whose causes ranged from mechanical failures to organizational and systemic failures. They aim to examine how an inquiry should be conducted to resolve issues of a similar nature successfully.

The Space Shuttle Columbia Report

The Role of the ‘NASA Culture’ in the Accident

Accidents in the transport industry are attributed to different factors such as human error, manufacturing defects, weather conditions, and poor design, among other issues. However, human factors have long been identified as one of the leading causes of incidents and accidents in the transportation industry. According to Burban (2016), human factors have become increasingly crucial in air accident investigation and safety improvement. This was evident in the Space Shuttle Columbia accident, where the Columbia Accident Investigation Board (CAIB) report attributed the organizational causes of the accident to NASA’s culture (Boin and Fishbacher-Smith, 2011, p. 79). Organizational culture is defined as the fundamental values, beliefs, norms, and practices associated with a specific organization.

According to the Columbia Accident Investigation Board’s report, an organization’s culture is characterized by how its employees conduct their work (Juraku, 2017). That culture is powerful: when new employees join the institution, they are trained into the culture of the place, as is the case at NASA. The NASA culture is blamed for the Columbia accident because the agency’s personnel failed to escalate concerns about the shuttle’s structural integrity and act on them. NASA’s organizational culture was characterized by a reluctance to double-check and ensure the shuttle’s safety because of the immense success the organization had achieved (Levy, Pliskin and Ravid, 2010). In the sixteen days between lift-off and re-entry, NASA employees failed to act on the damage concerns. The CAIB report captures one employee believing that the shuttle was in excellent shape because the system had identified no significant debris damage or problems.

NASA’s culture relied heavily on past successes, and this bred complacency in the organization. A broad range of organizational practices and cultural habits ended up being harmful to the Columbia Space Shuttle’s safety. The staff at NASA failed to conduct sufficient tests to understand why systems were not performing in accordance with the specifications. The organizational culture created barriers within NASA that hindered effective communication of critical safety information and stifled differences in professional opinion, a problem compounded by the lack of integrated management across the program’s elements. The evolution of an informal chain of command in NASA encouraged staff to make decisions that were not in line with the organization’s rules. It created blind spots that management could have avoided by being more alert, considering the highly technical and dangerous nature of shuttle missions. Doing so would have led to a different outcome and avoided the accident.

Available Information That Could Have Prevented the Accident

In a memo to file cited in the Columbia Accident Investigation Board’s report, one shuttle engineer indicated displeasure with some of the key management decisions. He blamed management for failing to accede to the engineers’ request for additional imaging of the orbiter, whether from aircraft or from any other available outside source. He argued that this refusal impaired the engineering team’s ability to give high-confidence answers on the Columbia Space Shuttle’s safety condition (McDonald, 2013). The memo is evidence of the unhealthy, overconfident culture that past successes had created at NASA. The data was indicative of a breakdown in channels of communication or, at the very least, a reluctance to accept information contrary to held presumptions. This meant that concerns raised by key members of the NASA team, such as the engineers, were shelved instead of becoming the basis for inquiry and corrective action. The information was not acted upon as such an important mission required. It also pointed to a complacent unwillingness to challenge the prevailing narratives about the mission’s technical status.

Given previous experience, it was convenient for management to downplay the importance of additional data that could have changed the course of the Columbia mission. Management considered such a move unnecessary and a potential source of schedule delays. The burden of proof seemed to have been reversed, with the engineers required to prove that the shuttle was unsafe. This is reminiscent of the Challenger accident seventeen years earlier, when key managers were skeptical of the engineers’ safety-relevant concerns. In Columbia’s case, those concerns included the need for images of the orbiter’s left wing to analyze possible damage, yet management failed to listen to the engineers’ requests to check the wing’s status.

The NASA culture and its decision-making process were also responsible for the Space Shuttle Challenger disaster. The organization violated its own safety rules: NASA managers had known since 1977 that the design of the solid rocket boosters (SRBs) by contractor Morton Thiokol had potentially catastrophic flaws in the O-rings. The managers were aware of these flaws, yet they failed to address the problem in the nine years leading up to the Challenger launch. They also disregarded the engineers’ warnings about the danger the launch posed because of the low temperatures that morning. Management likewise failed to adequately inform its superiors of the technical concerns, evidence of the breakdown in communication that had become part of the NASA culture. The culture at NASA was to keep quiet about flaws and ensure that a space shuttle launch was never postponed because of seemingly minor issues. The same culture affected the Columbia Space Shuttle program.

British Airways Flight 268

Crew’s Decisions from the Time of the Fire to the Eventual Landing

After one of the engines of British Airways Flight 268 caught fire just after takeoff, the pilot decided to stay on course, saying he would fly as far as possible and divert only if necessary. The flight controllers at the Los Angeles airport expected the pilot to turn back and land so the plane could be checked. Nevertheless, after taking advice from the British Airways operations base, the crew continued the journey to London. The aircraft had 369 people on board and was running on three engines after losing one. The crew’s decision to fly on after losing an engine saved the airline the compensation of close to one thousand euros that is usually paid to passengers whenever there is a delay of more than five hours.

Whether the F.A.A. Should Have Charged the Captain

At the heart of this inquiry is whether, in this case, responsibility for aircraft safety and incident response was placed with the proper authority. This, in turn, is a question of who gets to regulate the safety standards within their domestic airspace (Petrescu, 2020). In the case of British Airways Flight 268, the U.K.’s Air Accidents Investigation Branch held in its report that British Airways pilots were required to follow the United Kingdom’s aviation rules alone, a position the Federal Aviation Administration disagreed with.

The problem lies in the second respect, regulation. In this case, U.S. law required the pilot and the co-pilot to land the aircraft. On the other hand, there is a procedure in the flight operations handbook that allows the pilot to continue to his or her destination even with an engine out. This conflict is one that can only be resolved by international agreement on what should happen in such situations (Li et al., 2018). It must be noted that this was the approach taken here: British Airways settled the matter out of court on the condition that it would not dispute U.S. engine-out procedures in future aviation incidents. In light of the foregoing, the pilot cannot be blamed for following the procedures of the country that bind him in his service.

ValuJet 592

How Rapid Growth Affects Aircraft Safety

ValuJet was started in 1993 as a low-budget airline that offered its clientele no frills; three years later, it had grown to serve passengers in cities across the United States. The airline used old aircraft designed for short flights in order to keep providing its low-cost service. This raised questions as to whether it cut costs to an extreme to maximize profit while risking its passengers’ safety.

The Federal Aviation Administration overlooked some procedures that were compulsory for every other airline because it still considered ValuJet a startup. The company took advantage of this and cut costs wherever it possibly could in order to maintain its low fares and run a more profitable business (Duval, Robinson and Pearce, 2000). This contributed greatly to its fast and steady rise: the company started with only two aircraft and grew to serve approximately thirty-one destinations in the Midwest and the southeast of the United States.

By the time the airline was celebrating its second anniversary, the company had announced a billion-dollar order for fifty planes, with an option to buy fifty more. ValuJet seemed destined to succeed in the aviation industry until one of its aircraft crashed into the Florida Everglades, taking the lives of all one hundred and ten people aboard. The crash raised widespread concern about whether ValuJet was a safe airline at all and whether the F.A.A. could still be relied upon to create a safe, proper operating environment for air travel. The crash also caused the purchase order that had been placed to be sidelined.

Once a business becomes profitable and successful, the next step is to grow. Rapid growth, however, comes with potential dangers and liabilities that may destabilize a company, causing it to collapse as quickly as it started, or even cause physical, mental, or financial harm to those associated with it. Rapid growth often means a company is profit-oriented and willing to do almost anything to keep making that profit, including acquiring cheap labor and materials to keep growing. In this case, the owners of ValuJet decided to use old planes meant for short distances instead of investing in brand-new aircraft for their airline.

One of its aircraft crashed, killing the one hundred and ten people on board and causing physical and emotional pain to those associated with the airline. After the crash, the airline announced it was slowing down its growth, saying that it needed to improve its maintenance and flight operations programs. The announcement came after the F.A.A. stated that ValuJet Airlines must gain its approval before further expanding its route structure or fleet.

Response to Invitation to Move on to Other Matters and Not Pursue Growth

It is important to map the reasons for rapid growth in airlines, or in any other business for that matter, whether or not they have an extended history in the industry. As evidenced by ValuJet’s safety record, the growth the airline experienced resulted from compromising safety standards while maximizing profit. ValuJet’s inability to properly oversee the safety standards employed by its maintenance contractor SabreTech was evidence of just how deep the compromise ran in the company. Further, there is a general coziness between the F.A.A., as the airlines’ regulator, and the airlines themselves regarding cost: a cost-inducing regulation that statistically saved very few lives would not be pushed for or required by the F.A.A. (Lawrenson and Braithwaite, 2018). The ValuJet incident outlines why it is important to constantly ensure that airlines are not out to profit at the expense of their customers’ safety.

Alaska Airlines 261

Analysis of CVR Data and the Crew’s Actions

On January 31st, 2000, Alaska Airlines Flight 261 crashed into the Pacific Ocean, killing everyone on board. The aircraft was operating in visual meteorological conditions at the time. From the CVR data and other evidence, the National Transportation Safety Board (NTSB) concluded that the probable cause of the fatal accident was a loss of pitch control due to the in-flight failure of the horizontal stabilizer trim system’s jackscrew assembly. The failure was caused by excessive wear resulting from inadequate lubrication of the jackscrew assembly. The CVR data indicated that before the accident, the flight crew contacted maintenance personnel and Alaska Airlines dispatch in Seattle and informed them about the jammed stabilizer and a possible emergency landing at Los Angeles International Airport in California.

The CVR information revealed that the plane went into a steep nosedive, and the crew did everything in their power to regain control of the falling aircraft, which they temporarily did (Cookson, 2019). The plane stabilized and slowed because of the crew’s efforts; it was then instructed to descend and prepare for landing at Los Angeles International Airport. However, as the crew configured the aircraft for landing, the jackscrew assembly failed completely and the aircraft pitched into a second dive. Despite the crew’s efforts to save the plane from crashing, it hit the Pacific Ocean off the California coast.

Analysis of the F.A.A.’s Oversight of Alaska Airlines

Two years before Alaska Airlines Flight 261 crashed into the Pacific Ocean due to a failed jackscrew, the Federal Aviation Administration had been confronted with many indications that the airline was heading into serious trouble. The F.A.A.’s inaction invited scrutiny from the NTSB. The warning signs included key maintenance and safety positions that remained vacant for long periods and a thirteen-month criminal investigation into claims of falsified maintenance records.

The Federal Aviation Administration was said to have been considering a joint review with Alaska Airlines. This would have let the airline participate in inspecting its own aircraft, which meant allowing it to avoid civil penalties if any maintenance violations were found (Larsen, 2020). The administration’s inspectors had not had enough training to evaluate or document inspection results in a way that allowed trend analysis or targeted inspections. The F.A.A. had allowed Alaska Airlines to extend the period between jackscrew lubrications more than fourfold without supporting documentation, meaning that the lubricant was exhausted long before the next servicing, which in turn wore down the threads on the acme nut and caused its failure, as the rough sketch below illustrates. Flight 261’s jackscrew mechanism was later recovered, although it had been damaged by the surrounding wreckage. The Federal Aviation Administration subsequently ordered inspections of over a thousand aircraft and required some airlines to replace the jackscrews on their aircraft, but by then it was too late.
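To see why stretching the lubrication interval matters so much, consider the following minimal back-of-the-envelope sketch. The wear rates, grease life, interval lengths, and wear limit used here are assumed figures for illustration only; they are not values from the NTSB report, and the hypothetical function wear_per_interval is not part of any real maintenance tool.

```python
# Illustrative only: all figures below are assumptions, not NTSB data.
# Idea: acme-nut thread wear accumulates much faster once the grease is
# exhausted, so stretching the lubrication interval disproportionately
# increases the wear accumulated between servicings.

LUBRICATED_WEAR_RATE = 0.001   # assumed wear (inches per 100 flight hours) while grease is effective
DRY_WEAR_RATE = 0.010          # assumed wear (inches per 100 flight hours) after grease is exhausted
GREASE_LIFE_HOURS = 700        # assumed flight hours of effective lubrication after each servicing
WEAR_LIMIT = 0.040             # assumed allowable thread wear before the acme nut must be replaced

def wear_per_interval(interval_hours: float) -> float:
    """Thread wear accumulated between two lubrications of the jackscrew."""
    lubricated = min(interval_hours, GREASE_LIFE_HOURS)
    dry = max(interval_hours - GREASE_LIFE_HOURS, 0.0)
    return (lubricated * LUBRICATED_WEAR_RATE + dry * DRY_WEAR_RATE) / 100.0

# Compare a short interval with one stretched roughly fourfold.
for interval in (650, 2600):
    wear = wear_per_interval(interval)
    print(f"{interval:>5} h between lubrications -> {wear:.4f} in of wear per interval "
          f"({WEAR_LIMIT / wear:.1f} intervals to reach the assumed limit)")
```

Under these assumed numbers, the stretched interval accumulates roughly thirty times the wear of the shorter one per servicing cycle, which is the qualitative effect the investigation identified: once the grease is gone, every additional flight hour erodes the acme nut threads far more quickly.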

Recommendations for The F.A.A.

It is recommended that the Federal Aviation Administration fully implement the Air Transportation Oversight System (ATOS). The system is designed to use the data provided to identify emerging trends and pinpoint problems that are likely to cause accidents or incidents. ATOS will help Federal Aviation Administration safety inspectors focus on the causes of possible problems and their possible solutions. More importantly, it will also save time, since the F.A.A. does not have enough safety inspectors to conduct a physical inspection of every aircraft. A simple sketch of this kind of data-driven trend flagging follows.
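The following is a minimal sketch of how inspection findings might be aggregated and flagged for rising trends so that inspectors can target their attention. The record format, field names, carriers, and threshold are invented for illustration; ATOS itself is a far more elaborate system and is not described by this code.

```python
# Illustrative sketch of data-driven trend flagging; the record format,
# field names, and threshold are invented and do not describe ATOS itself.
from collections import defaultdict
from statistics import mean

# Each record: (airline, subsystem, quarter, findings per 100 inspections)
findings = [
    ("CarrierA", "flight controls", "2023Q1", 1.0),
    ("CarrierA", "flight controls", "2023Q2", 1.4),
    ("CarrierA", "flight controls", "2023Q3", 2.9),
    ("CarrierA", "cabin safety",    "2023Q1", 0.8),
    ("CarrierA", "cabin safety",    "2023Q2", 0.7),
    ("CarrierA", "cabin safety",    "2023Q3", 0.9),
]

def flag_rising_trends(records, factor=1.5):
    """Flag (airline, subsystem) pairs whose latest finding rate exceeds
    their historical average by the given factor."""
    series = defaultdict(list)
    for airline, subsystem, quarter, rate in sorted(records, key=lambda r: r[2]):
        series[(airline, subsystem)].append(rate)
    flagged = []
    for key, rates in series.items():
        if len(rates) >= 2 and rates[-1] > factor * mean(rates[:-1]):
            flagged.append((key, rates[-1]))
    return flagged

for (airline, subsystem), latest in flag_rising_trends(findings):
    print(f"Target inspections: {airline} / {subsystem} (latest rate {latest})")
```

The point of such a sketch is only that routinely collected inspection data, once aggregated, lets a small inspector workforce direct physical inspections at the areas showing the worst trends rather than spreading effort evenly.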

The F.A.A. also has to correct the persistent problems in its oversight process. These issues revolve around data collection, inspector training, the use of safety data, and follow-up on previously identified safety problems (Grizzle, Warren and Seiden, 2016). The administration must be vigilant and ensure that all identified issues are corrected in a timely manner so that a disaster is averted before it happens. These recommendations will help ensure that another tragedy such as that of Alaska Airlines Flight 261 never happens again.

Japan Airlines Boeing 787-8 JA829J

The Role of the Manufacturing Process for the 787

The B787 program was not designed to manage unexpected supplier challenges within what was, at the time, a new manufacturing environment. In addition, Boeing did not intercede in the process early enough to assist its suppliers. During the B787 development stage, the supply chain went through a learning curve as the company’s suppliers worked out how to operate the interfaces in their new processes (Baker et al., 2014). Late engineering changes during the construction of the B787 greatly affected the suppliers’ ability to deliver the components Boeing requested for production (Herkert, Borenstein and Miller, 2020). Without continued support from Boeing, suppliers had problems fulfilling their schedule commitments while attempting to incorporate the late engineering changes (Pandian et al., 2020). Miscommunication and misunderstanding in the initial building phase led to incomplete airplane components being shipped to Boeing for final assembly.

Recommendations to the F.A.A.

The Federal Aviation Administration has a long way to go in tightening its grip on flight inspection after witnessing one of the deadliest plane crashes in recent history two years earlier, when the 737 MAX crashed (Nunn, 2020). Following the report, investigations were conducted to pinpoint the cause of the fatal crash. Systemic problems were uncovered in designs that the F.A.A. had nonetheless approved. The F.A.A. needs a more comprehensive approach to certification and to its safety oversight of manufacturers. It should concentrate on funding more technical staff and ensure that those staff focus on the bigger risks posed by the system.

Conclusion

The sum of the discussion above is that aircraft incident and accident investigation should be sufficiently comprehensive. There is a need to consider organizational factors, whether in isolation or as workplace culture, as was the case in the NASA Space Shuttle and ValuJet accidents. Workplace culture, especially in managerial decision-making, is an important factor in ensuring safety in the aviation industry. Furthermore, mechanical failures such as those of the Japan Airlines Boeing 787 batteries need to be factored in. Alongside these, it is important to consider how the application of the laws of a country other than the one in which the airline is domiciled, as was the case with the British Airways flight, can also become a point of contention in international airline incidents. All of this highlights the need for constant vigilance in every aspect of airline safety to make air travel as safe as possible.

References

Baker, D.D. et al. (2014) Boeing 787-8 design, certification, and manufacturing systems review. Web.

Boin, A. and Fishbacher-Smith, D. (2011) ‘The importance of failure theories in assessing crisis management: the Columbia space shuttle disaster revisited’, Policy and Society, 30(2), pp. 77-87. Web.

Burban, C. (2016) Human factors in air accident investigation: a training needs analysis. PhD thesis. Cranfield University.

Cookson, S. (2019) ‘Overwritten or unrecorded: a study of accidents & incidents in which CVR data were not available’, in Stanton, N. (eds.) International conference on applied human factors and ergonomics. Cham: Springer, pp. 702-714. Web.

Duval, C., Robinson, R. and Pearce II, J.A. (2000) Journal of the International Academy for Case Studies, 6(2), pp. 84-100. Web.

Grizzle, D., Warren, M. and Seiden, S. (2016) ‘F.A.A.’s move to performance-based oversight: developments, challenges, and shifting legal landscapes’, Air & Space Law, 29, p. 1.

Herkert, J., Borenstein, J. and Miller, K. (2020) ‘The Boeing 737 MAX: lessons for engineering ethics’, Science and Engineering Ethics, 26(6), pp. 2957-2974. Web.

Juraku, K. (2017) ‘Why is it so difficult to learn from accidents?’ In Ahn J., Guarnieri F., Furuta K. (eds) Resilience: a new paradigm of nuclear safety. Cham: Springer, pp. 157-168. Web.

Larsen, A.A. (2020) Alaska part 135 operations: the need for additional regulatory oversight and continuous aircraft tracking. Web.

Lawrenson, A.J. and Braithwaite, G.R. (2018) ‘Regulation or criminalization: what determines legal standards of safety culture in commercial aviation?’ Safety Science, 102, pp. 251-262. Web.

Levy, M., Pliskin, N. and Ravid, G. (2010) ‘Studying decision processes via a knowledge management lens: the Columbia space shuttle case’, Decision Support Systems, 48(4), pp. 559-567. Web.

Li, Y. et al. (2018) ‘The influence of self-efficacy on human error in airline pilots: the mediating effect of work engagement and the moderating effect of flight experience’, Current Psychology, 40, pp. 1-12. Web.

McDonald, A. (2013) ‘Lessons learned but forgotten from the Space Shuttle Challenger accident’, in Space 2004 Conference and Exhibit, p. 5830. Web.

Nunn, D.H. (2020) ‘Grounded: how the 737 MAX crashes highlight issues with F.A.A. delegation and a potential remedy in the Federal Tort Claims Act’, Journal of Air Law and Commerce, 85(4), p. 703.

Pandian, G. et al. (2020) ‘Data-driven reliability analysis of Boeing 787 Dreamliner’, Chinese Journal of Aeronautics, 33(7), pp. 1969-1979. Web.

Petrescu, R.V.V. (2020) ‘British airways is ordering up to 42 Boeing 777-9s Aeronaves to modernize the U.K. flag carriers long-haul fleet’, Journal of Aircraft and Spacecraft Technology, 4(1), pp. 1-20. Web.

Wallace, R.J. et al. (2018) ‘Evaluating methods of F.A.A. regulatory compliance for educational use of unmanned aircraft systems (U.A.S.)’, The Collegiate Aviation Review International, 35(1), pp. 25-51. Web.
