Ethical Decision Making and Information Technology Case Study

Abstract

Recent advances in information technology (IT) have been accompanied by a rise in unethical behavior from individuals who can easily access and misuse these technologies. Concerns about ethical decision-making in the face of conflicts and dilemmas, particularly in the IT industry, have grown yet appear to be overlooked. This paper explores the ethical decision-making performed by a self-driving car during an unavoidable crash. The study discusses how consequentialism and Kantianism can be used to make ethical decisions when a self-driving car crashes. Security concerns regarding hacker threats are also covered. Recommendations for effective ethical decision-making in driverless vehicles include expressing the rules that determine the vehicles' actions before and during crashes in natural language, and implementing rational ethics.

Keywords: Ethics, ethical decision making, Kantianism, consequentialism, self-driving car, autonomous vehicle.

Introduction

The rapid development of information technology has brought remarkable benefits to people and businesses across the world. However, with easy access to technology and the widespread availability of the internet, there has been a surge in unethical behavior related to their use, which has prevented individuals from experiencing the full benefits. Examples of such unethical behavior include hacking and the use of Trojans to compromise computer systems. The self-driving car is one technological development that has raised many concerns regarding its use, ethics, and decision-making during accidents. By definition, a self-driving car operates without human guidance; it is controlled and navigated by computer software across a range of driving situations and conditions.

Examples of such vehicles have been developed by Google's Waymo, Tesla, General Motors, and Honda. Cars of this nature are also known as 'driverless,' 'robotic,' or 'autonomous' cars (Fleetwood, 2017). According to an article by Fleetwood (2017), “nothing portends a more significant reduction in morbidity and mortality rates from motor vehicle accidents than autonomous vehicles” (p. 534). Although these automobiles have recognized benefits, there are growing concerns about the ethical component of their design. This paper seeks to establish how self-driving vehicles can make ethical decisions during a crash based on existing moral codes of conduct.

Case Study Presentation: Self-driving Car Crash

Background

What makes individuals prefer autonomous cars to human-driven ones is the assumption that the former are safer: proponents believe that a manually driven vehicle is more likely to cause an accident than a driverless one. A report by Marshall and Davies (2018) supports this claim, noting that since Google began its self-driving program in 2009 (the project that became Waymo), its cars had, as of 2016, been involved in only around 30 minor accidents. Usually, the autonomous vehicle (AV) carries a test driver who can intervene to avoid risky situations. The AV is reported to be able to travel up to 50,000 miles without human assistance.

Hacking Threats in Self-Driving Cars

Self-driving cars can be hacked like any other computer system. Studies such as Zmud and Sener (2017) note that hacking is becoming a major concern for AVs. Hackers can gain unauthorized access to the AV's computer system for malicious purposes. They can infect the computer with malware such as Trojans and viruses, which may corrupt the car's database. Once the database's security has been compromised, the AV may malfunction in the way it responds to commands. In a conference paper, Szikora and Madarász (2017) reported instances in which terrorists hacked AVs and directed them into crowds, resulting in fatalities and injuries. According to Kohl et al. (2018), “these hacking attacks could cause financial and physical harm and even death to car passengers and other road users, which is certainly more severe than having a personal computer hacked” (p. 635). These cyber-attacks have made people fear for their privacy and security. Designers of AVs should be wary of hackers and adopt measures that could help avert such attacks.
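As one illustration of such a measure (a hypothetical sketch, not a description of any manufacturer's actual security design), the short Python example below shows how a vehicle's control unit might authenticate incoming commands with an HMAC so that injected or tampered messages are rejected. The key name and commands are invented for the example.

```python
import hmac
import hashlib

# Hypothetical shared secret, assumed to be provisioned securely to the vehicle.
SECRET_KEY = b"example-secret-key"

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Compute an HMAC-SHA256 tag for a control command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Accept a command only if its tag matches; reject tampered or injected ones."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Example: a genuine command passes verification, a forged one does not.
cmd = b"SET_SPEED 45"
tag = sign_command(cmd)
assert verify_command(cmd, tag)                   # genuine command accepted
assert not verify_command(b"SET_SPEED 90", tag)   # forged command rejected
```

Message authentication of this kind does not prevent every attack, but it raises the cost of remotely injecting commands into the vehicle's control system.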

Government Intervention

The government has a responsibility to provide for the safety of its citizens, especially given how AVs can pose a security threat to pedestrians and passengers. According to an article by Duncan (2020), the United States federal government has let private companies design and test AVs with little interference, a move that has been heavily criticized. The same source quoted the Transportation Secretary as saying, “The federal government is all in for safer, better and more inclusive transportation, aided by automated driving systems” (para. 2). The Secretary stated that the government had established goals to enhance safety, security, and quality of life with respect to the use of AVs. Research by Roth (2019) shows that the US government has been enacting regulations to guide the self-driving industry. The source underscores the government's responsibility to assure citizens of the viability and safety of these automobiles. Different states have laws that allow or prohibit the testing and use of AVs.

Potential Causes of Crash

Empirical evidence does not show AVs to be outstandingly safer than human drivers. A crash may result from hardware or software failures, since the AV is controlled by computer software. In a report by Harris (2017), 2,578 failures (hardware and software) were recorded across nine companies, with Google's Waymo registering 124 disengagements across 60 cars. Although engineering systems can fail, it is worth recognizing that avoiding every collision may be elusive even for a perfectly working system. An AV offered for sale therefore needs multiple redundancies, rigorous testing, and regular maintenance to reduce failures that would otherwise cause accidents.

There are scenarios in which crashes are inevitable, even for an AV with minimal reaction time and perfect awareness of the surrounding traffic. Imagine a scenario in which an AV has been stopped by a traffic signal and is surrounded by other cars on all sides. If a stray vehicle approaches it from behind, it will likely be involved in at least a minor collision even if it attempts to maneuver away.

Decision Making During a Crash

Present-day drivers in hazardous situations must make decisions to prevent collisions and, if a crash is inevitable, to collide as safely as possible. Usually, the decision to avoid a collision is made quickly and under extreme anxiety, with little time to think or plan an action. An AV relies on a human driver to take control in the event of a system failure or when the car encounters a scenario it cannot comprehend, for example, road construction. However, expecting a human driver to always be alert enough to prevent crashes is unrealistic, since drivers tend to be preoccupied with other activities such as reading or sleeping, even though the vehicles have mechanisms for counteracting distraction.

Ethics of Inevitable Crashing

Due to intense pressure and anxiety, human drivers are often predisposed to make wrong decisions before and during collisions. They have to overcome time constraints, inadequate experience in handling the AV, and limited vision. Given contemporary AVs' sensing and processing abilities, it is pertinent to consider how they should make ethical decisions in the event of an accident. An advanced AV can make pre-event decisions in a crash using specialized software and sensor devices that can precisely detect nearby vehicles' paths and perform quick evasive maneuvers, thereby overcoming human drivers' shortcomings. If a collision is inescapable, a computer program can determine the optimal way to crash, weighing safety, the probability of each outcome, and the certainty of its calculations, more quickly and precisely than a human driver. For instance, on a highway the software may direct the car to engage the brakes and swerve in a controlled way.
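To make this concrete, the following minimal Python sketch shows how such a program might combine estimated severity, outcome probability, and estimate certainty into a single risk score and pick the least risky maneuver. The maneuver names, numbers, and the scoring formula are purely hypothetical illustrations, not an actual AV planner.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_severity: float    # 0 (no harm) .. 1 (fatal outcome), estimated by the planner
    outcome_probability: float  # probability that the predicted outcome actually occurs
    estimate_certainty: float   # confidence in the underlying sensor data (0..1)

def crash_risk(m: Maneuver) -> float:
    """Combine severity, probability, and certainty into one risk score.
    Low-certainty estimates are treated pessimistically by inflating the score."""
    return m.expected_severity * m.outcome_probability / max(m.estimate_certainty, 0.1)

def choose_maneuver(options: list) -> Maneuver:
    """Select the candidate action with the lowest combined crash risk."""
    return min(options, key=crash_risk)

options = [
    Maneuver("brake hard in lane", 0.4, 0.9, 0.95),
    Maneuver("swerve onto shoulder", 0.2, 0.6, 0.70),
    Maneuver("accelerate through gap", 0.6, 0.5, 0.50),
]
print(choose_maneuver(options).name)  # prints "swerve onto shoulder"
```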

One disadvantage of AVs during crashes is that they make decisions based on logic programmed into the software in advance, whereas human drivers make decisions in real time during accidents. If the collision is avoidable, this drawback is not a big problem. However, if a crash and injuries are inevitable, the vehicle must choose the safest way to crash. At this point, the car is expected to make a decision with a moral dimension.

Findings

Consequentialism Approach

Consequentialism is viewed as a more rational decision-making method than alternatives such as Asimov's laws. In this approach, an AV chooses the crash path that minimizes overall harm or damage. One typical example of this theory is utilitarianism, which advocates maximizing happiness for the greatest number when deciding on actions. From this standpoint, any collision causes harm, so the morally required choice is the one that does the least overall damage.
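A consequentialist selection rule can be sketched in a few lines of Python. The harm weights, probabilities, and crash paths below are invented for illustration only; the point is that the "least overall harm" criterion reduces to minimizing an expected-harm sum over everyone affected.

```python
# Hypothetical harm weights: property damage < injury < fatality.
HARM_WEIGHTS = {"property_damage": 1, "injury": 10, "fatality": 100}

def expected_total_harm(path: dict) -> float:
    """Utilitarian aggregate: harm weight x probability, summed over all affected parties."""
    return sum(HARM_WEIGHTS[outcome] * prob for outcome, prob in path["outcomes"])

def least_harmful_path(paths: list) -> dict:
    """Consequentialist choice: the crash path with the smallest expected total harm."""
    return min(paths, key=expected_total_harm)

crash_paths = [
    {"name": "hit parked car",     "outcomes": [("property_damage", 0.9)]},
    {"name": "hit guardrail",      "outcomes": [("property_damage", 1.0), ("injury", 0.2)]},
    {"name": "swerve toward curb", "outcomes": [("injury", 0.5)]},
]
print(least_harmful_path(crash_paths)["name"])  # prints "hit parked car"
```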

Assuming a crash's possible outcomes are property damage or human injury, this theory dictates that the vehicle spare people and instead damage property. However, decision-making using this approach has shortcomings. Since it is a matter of weighing the value of one action against another, instances of unfairness and discrimination can arise. For example, if a car decides to collide with the less expensive of two vehicles, owners of cheaper automobiles are treated unfairly and may feel targeted. Attributing these choices to the software's logic does not excuse them, since humans developed the program.

Another disadvantage of consequentialism is deciding which factors to include in the calculation and which to leave out. For instance, vehicle-to-vehicle compatibility compares the damage expected from colliding with different vehicles (Beggin, 2021). In an unavoidable crash, an AV may therefore opt to collide with the more crash-compatible vehicle. Although this minimizes harm, it is not clearly morally acceptable, since the safer vehicle is singled out precisely because it is safer. Furthermore, the occupants of the colliding cars can be affected in different ways depending on their demographics.

Kantianism Theory

Kantianism is a deontological theory applicable to ethical decision-making. It serves to overcome the limitations of consequentialism, such as discrimination and unfairness in decision-making. According to Nyholm (2018), “Kantian ethics is about adopting a set of basic principles (“maxims”) fit to serve as universal laws, in accordance with which all are treated as ends‐in‐themselves…” (p. 5). When faced with ethical dilemmas, the theory demands that people adopt rules that they would be willing to apply as universal laws binding on everyone (Nyholm, 2018). In other words, the approach promotes fairness and prevents anyone from benefiting unfairly during a car crash.

Assuming that a collision's possible outcomes are human injury and property damage, this theory requires that the moral agent adopt the universal rule that property damage is preferable to injury. Similarly, when a crash is unavoidable, the computer should not target vulnerable road users, since this would amount to unfairness. However, decisions made using this approach tend to be rigid and can affect a large population, since the rules apply universally.
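By contrast with the consequentialist sketch above, a Kantian-style procedure treats such maxims as inviolable constraints rather than quantities to trade off. The Python sketch below, using hypothetical maxims and options, filters out any maneuver that could not be willed as a universal rule and only then chooses among what remains.

```python
def never_targets_vulnerable_users(option: dict) -> bool:
    """Maxim 1: a maneuver that deliberately targets a vulnerable road user is impermissible."""
    return not option.get("targets_vulnerable_user", False)

def property_damage_before_injury(option: dict, alternatives: list) -> bool:
    """Maxim 2: if some alternative risks only property damage, an injury-causing
    maneuver is impermissible."""
    property_only_exists = any(a["worst_outcome"] == "property_damage" for a in alternatives)
    return not (property_only_exists and option["worst_outcome"] in ("injury", "fatality"))

def permissible_options(options: list) -> list:
    """Keep only the maneuvers that satisfy every maxim; harm totals are never compared."""
    return [o for o in options
            if never_targets_vulnerable_users(o)
            and property_damage_before_injury(o, options)]

options = [
    {"name": "swerve into cycle lane", "worst_outcome": "injury", "targets_vulnerable_user": True},
    {"name": "brake and hit barrier",  "worst_outcome": "property_damage"},
    {"name": "swerve into oncoming car", "worst_outcome": "injury"},
]
print([o["name"] for o in permissible_options(options)])  # prints ['brake and hit barrier']
```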

Implications for the Future

Future research should consider supplementing rational approaches to ethical decision-making with technology-based techniques such as artificial intelligence (AI) and machine learning. Although engineers who design AVs prefer rational methods such as consequentialism because they are rule-based, these methods have serious limitations, as this paper has established. With AI, the AV software could learn ethics by observing human actions and/or by being rewarded for morally acceptable behavior.

An AV's decisions before and during an unavoidable crash are determined by the high-level logic coded into the software that runs it. In some cases, the vehicle can respond contrary to expectations. It is essential to formulate ways of understanding the rule sets maintained by these programs as one way of troubleshooting the cause of unusual behavior. The rules should be expressed in formats that humans can interpret. This will promote an in-depth comprehension of the ethical dimensions of the AV's actions.

Recommendations

Given the shortcomings of rational approaches such as Asimov's laws and consequentialism, the following approaches are proposed to make ethical decision-making during AV crashes more effective:

Implementation of Rational Ethics

With existing self-driving technology, this approach would reward actions that minimize overall harm. The software development stakeholders, such as system developers, attorneys, and transport engineers, should agree on the reward program's standards. These stakeholders must be allowed to offer insights into the cars' development, and manufacturers should not be rigid in accepting their input. Some of the rules that could be formulated include the following: injuries are preferable to fatalities, vulnerable road users receive priority protection, and property damage is preferable to injury or death.

Furthermore, a safety algorithm should be developed for scenarios in which the rules do not stipulate how an AV should act. Such an algorithm can rely on statistical data, as is done in healthcare, so that an intricate moral decision has a numerical foundation. An algorithm developed by humans to control the AV cannot anticipate every likely event. This approach suggests that whenever a scenario is not covered by the rules, one rule contradicts another, or the ethical decision is unknown, the AV should engage the brakes and maneuver.
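A minimal sketch of how such a rule hierarchy and fallback might fit together is shown below. The rules, their priorities, and the action descriptions are hypothetical stand-ins for whatever standards the stakeholders would actually agree on.

```python
# Hypothetical priority-ordered rules; each is a predicate an acceptable action must satisfy.
RULES = [
    ("injuries are better than fatalities",    lambda a: a["worst_outcome"] != "fatality"),
    ("protect vulnerable road users",          lambda a: not a.get("endangers_vulnerable_user", False)),
    ("property damage is better than injury",  lambda a: a["worst_outcome"] == "property_damage"),
]

# Default behaviour when the rules give no usable answer.
DEFAULT_ACTION = {"name": "engage brakes and steer toward clear space"}

def decide(actions: list) -> dict:
    """Narrow the candidate actions rule by rule, in priority order.
    A rule that would eliminate every remaining candidate is skipped, which is how
    conflicts between rules are resolved here; if no candidate is left at all,
    the default brake-and-maneuver behaviour applies."""
    candidates = list(actions)
    for _, predicate in RULES:
        narrowed = [a for a in candidates if predicate(a)]
        if narrowed:
            candidates = narrowed
    return candidates[0] if candidates else DEFAULT_ACTION

actions = [
    {"name": "swerve left",   "worst_outcome": "injury"},
    {"name": "brake in lane", "worst_outcome": "property_damage"},
]
print(decide(actions)["name"])  # prints "brake in lane"
print(decide([])["name"])       # no candidate available: prints the default action
```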

Interpretation of Neural Networks using Natural Language

It is extremely difficult to interpret the output of neural networks, since they reach decisions through many intertwined steps. It is not easy to reverse-engineer an output to determine how the network arrived at it; one of their disadvantages, therefore, is that they cannot explain their decisions. Yet it is essential to comprehend the logic behind a car's behavior in a crash, especially if the action was unexpected. One way humans can understand the rules embedded in neural networks is by expressing the network's knowledge in natural language. The method does not map every decision precisely onto a rule, and some of the rules guiding the decisions are very intricate. Nevertheless, such interpretation is an essential step toward understanding how neural networks work and how AVs make decisions, especially during unavoidable crashes.
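The sketch below does not interpret a real neural network; it assumes a rule-based surrogate of the kind discussed in the previous recommendation and shows the sort of plain-English decision trace such an interpretation could produce for later auditing. All rule names and actions are hypothetical.

```python
def decide_with_trace(actions: list, rules: list):
    """Apply priority-ordered rules and record, in plain English, how each rule
    narrowed the options, so the final choice can be audited afterwards."""
    candidates = list(actions)
    trace = []
    for name, predicate in rules:
        kept = [a for a in candidates if predicate(a)]
        removed = [a for a in candidates if not predicate(a)]
        if kept and removed:
            trace.append(f"Rule '{name}' ruled out: " + ", ".join(a["name"] for a in removed) + ".")
            candidates = kept
        elif not kept:
            trace.append(f"Rule '{name}' could not be satisfied and was skipped.")
        else:
            trace.append(f"Rule '{name}' did not exclude any remaining option.")
    choice = candidates[0] if candidates else {"name": "brake and maneuver"}
    trace.append(f"Selected action: {choice['name']}.")
    return choice, trace

RULES = [
    ("avoid fatalities",       lambda a: a["worst_outcome"] != "fatality"),
    ("prefer property damage", lambda a: a["worst_outcome"] == "property_damage"),
]
actions = [
    {"name": "swerve into barrier", "worst_outcome": "property_damage"},
    {"name": "continue straight",   "worst_outcome": "injury"},
]
choice, trace = decide_with_trace(actions, RULES)
print("\n".join(trace))
```

Running the example prints a short narrative of which rule excluded which option and what was finally selected, which is the kind of human-readable account this recommendation calls for.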

Conclusion

Ethical decision-making can be used by individuals and corporations to take action during crises, conflicts, or dilemmas, for example in IT-related cases. This study examined the case of a self-driving car, such as Google's Waymo, and how it can make ethical decisions in a collision, whether avoidable or unavoidable. The paper discussed how ethical decision-making is done in AVs and explained how consequentialism and Kantianism, as rational approaches, can be applied in taking morally acceptable actions. Kantianism was deliberately used to overcome the shortcomings of the consequentialist methodology. Although the study found that consequentialism has some drawbacks, it emphasized that the method is better than other rational approaches, including Asimov's laws. Although an autonomous vehicle cannot assure its occupants' complete safety, this study is convinced that future investments in the technology will incorporate ethical rule sets that minimize overall harm in the event of a crash as much as possible.

References

Beggin, R. (2021). Self-driving vehicles allowed to skip some crash safety rules. Government Technology. Web.

Duncan (2020). New federal self-driving car policy talks up government’s safety role but leaves industry in charge of technology. The Washington Post. Web.

Fleetwood, J. (2017). American Journal of Public Health, 107(4), 532-537. Web.

Harris, M. (2017). The 2,578 problems with self-driving cars. IEEE Spectrum. Web.

Kohl, C., Knigge, M., Baader, G., Böhm, M., & Krcmar, H. (2018). Journal of Business Economics, 88(5), 617-642. Web.

Marshall, A., & Davies, A. (2018). Waymo’s self-driving car crash in Arizona revives tough questions. Wired. Web.

Nyholm, S. (2018). The ethics of crashes with self‐driving cars: A roadmap, I. Philosophy Compass, 13(7), 1-10. Web.

Roth, M. L. (2019). Regulating the future: Autonomous vehicles and the role of government. Iowa Law Review, 105(3), 1411-1446. Web.

Rudd, E. M., Rozsa, A., Günther, M., & Boult, T. E. (2016). IEEE Communications Surveys & Tutorials, 19(2), 1145-1172. Web.

Szikora, P., & Madarász, N. (Eds.). (2017). Proceedings of 14th international scientific conference on informatics. IEEE. Web.

Zmud, J. P., & Sener, I. N. (2017). Towards an understanding of the travel behavior impact of autonomous vehicles. Transportation Research Procedia, 25, 2500-2519. Web.
