Tesla Model S Accident: Risk Management Analysis Essay

Introduction

Technological innovations offer many opportunities for businesses to improve their products and services. However, they also entail various technology risks that must be managed to ensure the safety of users and to prevent reputational or financial damage to the company. Consequently, studying cases where new technology risks were not managed adequately helps to identify practice gaps and develop strategies that could prevent future failures. The present paper focuses on Tesla’s autopilot technology, which has been blamed for lethal traffic accidents. As the analysis carried out in this paper will show, the company failed to manage the technology risk and should improve its practices in the future to avoid similar incidents.

Background

Tesla is an electric car manufacturer famous around the world for its designs and innovations. The company was founded in 2003 by Martin Eberhard and Marc Tarpenning, both engineers (Reed, 2020). At the time, electric cars were still new to the market; however, General Motors had obtained positive results during its market experiment with electric vehicles, which inspired the two engineers to start their company (Reed, 2020). In 2004, Elon Musk invested $30 million in Tesla, which made him the chair of the Board of Directors (Reed, 2020). Tesla was innovative in many ways, and this helped the company achieve popularity both in the United States and abroad. Firstly, it was the first company to produce a fully electric car that could meet consumers’ needs. As explained by Reed (2020), “previous experiments in this field had failed because, among other issues, companies struggled to produce a battery powerful enough to keep cars on the road and a cost-effective motor that could fit inside a consumer vehicle and accelerate it to highway speed” (para. 8). Secondly, Tesla was able to improve the battery of its cars over time, leading to reduced charging times (Reed, 2020).

Another critical innovation implemented by Tesla was autopilot. The technology is described as “an advanced driver assistance system that enhances safety and convenience behind the wheel” (Tesla, 2020, para. 1). The company warns that the features currently included in its cars do not make them fully autonomous, and thus drivers should still keep their hands on the wheel and be ready to take over at any moment (Tesla, 2020). Nevertheless, this innovation is significant because it brought the automobile industry a step closer to fully autonomous vehicles.

Autonomous Cars Risk

Cars with autopilot features exemplify a technology that offers many benefits while posing potential risks to users’ health and safety. On the one hand, driverless cars are believed to improve road safety because they could reduce the risk of traffic accidents. According to McDonald (2013), many car accidents occur due to driver error, whether distracted driving or driving under the influence. Therefore, autonomous cars are expected to help prevent accidents stemming from human error. For drivers, autonomous vehicles are highly beneficial because they enable people to travel longer distances without fatigue or to use their daily commute for other tasks. For car manufacturers, driverless cars represent a crucial market opportunity because they can be sold at higher prices, thus generating more revenue.

Nevertheless, there are also various risks associated with autonomous vehicles. For example, Demmel et al. (2019) state that, like many other technologies, automated driving features can malfunction, resulting in sensor, hardware, or actuator failures. Because drivers rely on autopilot to work perfectly, they do not anticipate these situations and engage in non-driving tasks such as reading, playing games on their smartphones, or even sleeping (Demmel et al., 2019). Consequently, in the case of a malfunction or a sudden change in the traffic environment, drivers may not react quickly enough to prevent a collision (Demmel et al., 2019). This means that autonomous vehicles require rigorous risk management to protect the safety of drivers and passengers while also shielding manufacturers against liability.
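
To make the time pressure concrete, the sketch below compares a hypothetical time-to-collision with a distracted driver’s takeover latency; all figures are illustrative assumptions rather than measured Tesla data.

```python
# A rough time-budget sketch: time-to-collision vs. driver takeover latency.
# All figures are illustrative assumptions, not measured Tesla data.

def time_to_collision(distance_m: float, speed_ms: float) -> float:
    """Seconds until impact if the vehicle holds its current speed."""
    return distance_m / speed_ms

# Hypothetical scenario: an undetected obstacle 60 m ahead at highway speed (30 m/s).
ttc = time_to_collision(distance_m=60.0, speed_ms=30.0)  # 2.0 s

# Takeover latencies of several seconds are commonly reported for drivers
# engaged in non-driving tasks (assumed value for illustration).
distracted_takeover_s = 4.0

if distracted_takeover_s > ttc:
    print(f"No time to intervene: needs {distracted_takeover_s:.1f} s, has {ttc:.1f} s")
```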

When it comes to autonomous vehicles, risk management is complicated by the increased number of actors and their responsibilities. For instance, as explained by Thom (2015), the mechanism by which autonomous driving is carried out “involves governmental authorities supplying a constant stream of data about roadway and traffic conditions and facilitating the broadband radio communications that inform individual vehicles of their traveling neighbors’ behaviors on the roadway” (para. 11). The signals are then interpreted by onboard technology in automated decision-making. Due to the large number of interfaces and instruments involved in driving an autonomous car, risk management consists of evaluating and controlling a wide array of risks, from network malfunctions and system glitches to sudden changes in road conditions.
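
A minimal sketch of that decision loop is given below; the data structures and thresholds are hypothetical illustrations of how external feeds and onboard readings might be fused, not Tesla’s or any authority’s actual interfaces.

```python
# Hypothetical sketch of the fused decision loop described above: roadside
# broadcasts and onboard sensing feed one automated decision. None of these
# names or thresholds come from Tesla or any real traffic authority.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadsideFeed:
    """Data stream supplied by governmental/traffic authorities."""
    speed_limit_ms: float
    hazard_ahead: bool

@dataclass
class SensorReading:
    """Readings from the vehicle's own sensors."""
    obstacle_distance_m: Optional[float]  # None if nothing was detected
    speed_ms: float

def decide(feed: RoadsideFeed, sensors: SensorReading) -> str:
    """Combine both input streams; a failure on any interface (network,
    sensor, actuator) corrupts the decision that comes out of this step."""
    if feed.hazard_ahead:
        return "brake"
    if sensors.obstacle_distance_m is not None and sensors.obstacle_distance_m < 50.0:
        return "brake"
    if sensors.speed_ms > feed.speed_limit_ms:
        return "slow_down"
    return "maintain"

# A missed detection (obstacle_distance_m=None) silently yields "maintain".
print(decide(RoadsideFeed(35.0, False), SensorReading(None, 30.0)))  # maintain
```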

Tesla Model S Crash

Based on the information above, it is far from surprising that Tesla’s vehicles featuring autopilot technology have been involved in several accidents over the years. The first fatal crash involving a Tesla Model S in autopilot mode occurred in Florida in May 2016 and killed the driver, Joshua Brown. According to McFarland (2016), “as a tractor-trailer turned in front of Brown, his Tesla plowed ahead — its roof striking the underside of the trailer and passing underneath it [and] continued to drive, striking two separate fences and a utility pole before finally stopping” (para. 1). The technology not only failed to recognize another vehicle on the road; it was also unable to stop the car even after the collision had occurred. The story resonated throughout the United States, making people wary of using the feature.

The evaluation of the incident showed that Tesla’s technologies were indeed at fault. It is reported that Tesla’s autopilot system is prone to such crashes due to its technical shortcomings (McFarland, 2016). Indeed, subsequent cases involving Tesla cars on autopilot followed similar scenarios, in which the cars failed to recognize a significant obstacle on the road (BBC News, 2018; Krisher, 2020). Based on the similarities between crashes involving different models and the content of the discussions surrounding the 2016 crash, it is evident that the company was aware of the gap in its system but failed to address it sufficiently. Furthermore, the company did not respond to the recommendations issued by the National Transportation Safety Board in 2017 following the Model S crash, which could have helped it improve the safety of drivers and passengers using autopilot (Chokshi, 2020). At the moment, the pressure on Tesla to improve its technologies is high, but it is unclear whether the company will be able to do so in the foreseeable future (Chokshi, 2020; Heilweil, 2020). The following sections analyze the 2016 crash and study stakeholders’ perspectives to identify how they could have contributed to or prevented the accident.

Case Analysis

To understand the case in depth, it is essential to identify why Tesla’s risk management efforts failed to prevent the accident. Following the traditional five-step model of risk management, which includes risk identification, analysis, evaluation, treatment, and monitoring, helps to pinpoint the stages at which the company’s actions contributed to the risk of the accident. From the information on the accident presented above, it appears that the steps that contributed most to the risk were risk analysis, evaluation, and treatment.
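
The sketch below encodes the five-step model and the reading of the case argued in the following paragraphs; the step labels follow the traditional model, while the audit outcome reflects this paper’s assessment rather than Tesla’s actual risk register.

```python
# The five-step risk management model applied to this case. The audit outcome
# encodes this paper's reading of the evidence, not Tesla's internal records.

RISK_MANAGEMENT_STEPS = [
    "identification",  # name the risk (e.g., system fails to detect a crossing vehicle)
    "analysis",        # trace causes and contributing behaviour (e.g., driver distraction)
    "evaluation",      # rate likelihood and severity against acceptance criteria
    "treatment",       # avoid, mitigate, transfer, or knowingly accept the risk
    "monitoring",      # track incidents and feed findings back into earlier steps
]

def failed_steps(performed_adequately):
    """Return the steps at which the process broke down."""
    return [s for s in RISK_MANAGEMENT_STEPS if not performed_adequately.get(s, False)]

# Assessment argued in this paper: identification and monitoring took place,
# but analysis, evaluation, and treatment fell short.
print(failed_steps({"identification": True, "monitoring": True}))
# ['analysis', 'evaluation', 'treatment']
```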

Firstly, it appears that the company’s risk analysis failed to account for drivers’ behavior while operating a vehicle in autopilot mode. According to research on such accidents, a significant share of them occurs through a combination of system malfunction and the driver’s inability to react quickly and take control of the situation (Banks, Stanton and Plant, 2019; Casner, Hutchins and Norman, 2016; Demmel et al., 2019). While the company accepted the residual risk of a technology malfunction, it failed to account for drivers’ responses to such malfunctions, which would have been essential in preventing the accident. Focusing more on drivers’ perceptions of risk during the analysis would have allowed the company to understand that, despite warnings and claims that the autopilot feature still requires drivers’ attention on the road, excitement about the technology reduces drivers’ risk perceptions (Glendon and Clarke, 2016). Consequently, people are more likely to engage in non-driving activities and become distracted, and the steps the company took to warn them were inadequate.
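
A short worked example illustrates why ignoring driver behavior distorts the analysis: if harm requires both a malfunction and a driver who fails to intervene, the second probability matters as much as the first. The figures below are purely illustrative assumptions.

```python
# Illustrative arithmetic only; none of these probabilities are measured data.
# Harm requires both a malfunction AND a driver who fails to intervene.

p_malfunction = 1e-4                  # assumed chance of an autopilot fault per trip
p_missed_takeover_attentive = 0.05    # attentive driver fails to take over
p_missed_takeover_distracted = 0.80   # distracted driver fails to take over

p_harm_assumed = p_malfunction * p_missed_takeover_attentive     # naive analysis
p_harm_realistic = p_malfunction * p_missed_takeover_distracted  # lowered risk perception

print(f"assumed {p_harm_assumed:.1e}, realistic {p_harm_realistic:.1e}, "
      f"underestimated {p_harm_realistic / p_harm_assumed:.0f}x")
# assumed 5.0e-06, realistic 8.0e-05, underestimated 16x
```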

Secondly, the company’s evaluation of the risk it did identify, system malfunction, was also insufficient. On the one hand, the company did not fully explore the factors that could shape the outcomes of technology malfunctions, including system recovery times, driver reactions, traffic operations, and more (Demmel et al., 2019; Dixit, Chand and Nair, 2016; Jeong, Oh and Lee, 2017). On the other hand, the company also underestimated the importance of this risk, which is likely why it ignored the residual risk and the subsequent recommendations of the authorities on improving the system, allowing similar crashes to continue.
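
The sketch below shows how a conventional likelihood-by-severity evaluation shifts once the outcome factors above are considered; the scores are hypothetical and serve only to illustrate the underestimation.

```python
# A hypothetical likelihood x severity evaluation on a standard 1-5 matrix;
# the scores only illustrate how omitting outcome factors deflates a rating.

def risk_score(likelihood, severity):
    """Classic risk-matrix score: likelihood (1-5) times severity (1-5)."""
    return likelihood * severity

# Evaluation that ignores recovery times, driver reactions, and traffic operations:
naive = risk_score(likelihood=2, severity=3)     # 6: looks tolerable, so accept

# Evaluation once those outcome factors are considered:
informed = risk_score(likelihood=3, severity=5)  # 15: intolerable, must be treated

print(f"naive={naive}, informed={informed}")
```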

Finally, the company’s control measures were not appropriate for the level of risk involved in the case. According to reports, the company was aware of the blind spots in the system but opted not to address them or notify drivers of the potential danger (McFarland, 2016). In other words, the company treated the risk as residual but took limited steps to control it in real-life circumstances. This approach proved to be wrong because the technologies used by competitors did not show the same performance issues, and thus Tesla had the technical opportunity to improve its system (Chokshi, 2020; Heilweil, 2020; McFarland, 2016). Furthermore, even if the company opted to accept the risk, more information about it should have been disclosed to drivers, which would have prevented them from trusting the technology entirely. Combined with the other gaps in risk management, this shortcoming affected the outcomes of the accident.

Contribution of Stakeholders

The key stakeholders involved in the situation were Tesla’s management, its engineers, and the driver. Each of these stakeholders contributed to the 2016 accident in different ways. The role of Tesla’s management is perhaps the most important, as it positioned the technology in a way that suggested it was error-proof and fully automated despite understanding its gaps. In contrast, other manufacturers that found similar gaps in their systems chose not to release them or branded them as semi-autonomous driving support systems (Dave, 2016). Additionally, the engineers could have contributed to the accident, since they conducted the tests and ruled that the system was ready for public use. Finally, the driver was also involved because he overestimated the capabilities of Tesla’s technology, which led him to lose caution while driving and to miss the truck when it turned in front of his car.

Hence, each stakeholder could have acted differently, which would have helped to prevent the accident. Tesla’s management, for example, could have postponed the release of the feature until the risk of malfunction was minimal or used its marketing to make clear that the car was not fully autonomous and that drivers should never take their eyes off the road, even with the feature enabled. Similarly, the engineers working on the feature could have tightened the values required to pass testing to minimize potential risks or warned management about possible malfunctions. The driver, too, could have contributed to prevention by staying focused on the road and keeping his hands on the wheel to allow for a quick response.

Conclusion

Overall, the 2016 accident involving a Tesla Model S highlights how inadequate management of technology risk can endanger users’ health and safety. Based on the analysis of the case, Tesla failed to account for drivers’ risk perceptions while operating an automated vehicle, which led to the accident. The various stakeholders involved in the case could have prevented the accident, but their actions instead contributed to it. Enhancing its risk management practices will help Tesla avoid similar issues in the future.

Reference List

Banks, V. A., Stanton, N. A. and Plant, K. L. (2019) ‘Who is responsible for automated driving? A macro-level insight into automated driving in the United Kingdom using the Risk Management Framework and Social Network Analysis,’ Applied Ergonomics, 81, p. 102904.

BBC News (2018) BBC News US & Canada. Web.

Casner, S. M., Hutchins, E. L. and Norman, D. (2016) ‘The challenges of partially automated driving,’ Communications of the ACM, 59(5), pp. 70-77.

Chokshi, N. (2020) The New York Times, Web.

Dave, P. (2016) Los Angeles Times, Web.

Demmel, S. et al. (2019) ‘Global risk assessment in an autonomous driving context: impact on both the car and the driver,’ IFAC PapersOnLine, 51(34), pp. 390-395.

Dixit, V. V., Chand, S. and Nair, D. J. (2016) ‘Autonomous vehicles: disengagements, accidents and reaction times,’ PLoS one, 11(12), e0168054.

Glendon, A. I. and Clarke, S. G. (2016) Human safety and risk management: a psychological perspective. 3rd edn. Boca Raton, FL: CRC Press.

Heilweil, R. (2020) Vox, Web.

Jeong, E., Oh, C. and Lee, S. (2017) ‘Is vehicle automation enough to prevent crashes? Role of traffic operations in automated driving environments for traffic safety,’ Accident Analysis & Prevention, 104, pp. 115-124.

Krisher, T. (2020) AP News, Web.

McDonald, C. (2013) ‘Driverless cars: a risky opportunity?’ Risk Management Magazine. Web.

McFarland, M. (2016) CNN Business, Web.

Reed, E. (2020) Web.

Tesla (2020) Web.

Thom, P. R. (2015) Risk Management Magazine. Web.
