Countries are embracing technology, which is changing the operations of various organizations, particularly in dynamic and competitive environments. Through innovation, companies are able to increase their productivity, allowing them to sell high-quality products at lower prices. The purchasing power of customers has improved tremendously, leading to economic growth in various regions. In the recent past, the idea of self-driving cars was welcomed with much excitement, mainly because such an innovation promised to enhance safety by eliminating human error, the main cause of road accidents (Sam et al., 2016). However, during testing of Tesla's driverless technology, a driver lost his life when the car failed to avoid a crossing truck, and in the Uber crash a prototype vehicle struck and killed a woman crossing the road. Video footage showed that the safety driver in the car was shocked and could do nothing to save her (Lin, 2015). Based on these two cases and many others, this innovation now faces numerous dilemmas, forcing its proponents to pause and think of possible solutions to the underlying issues.
Empathetic drivers would instinctively swerve to avoid hitting a pedestrian crossing the road. In other scenarios, a motorist might choose to hit a cyclist wearing a helmet rather than one without, because less damage would result. For driverless cars, however, the question of who should take responsibility for accidents is still debated: it is not yet clear whether to blame the programmers, the automakers, or the policymakers. Once autonomous cars are configured with coded programs, they make choices based on those instructions regardless of the circumstances. These vehicles rely on lidar and on sensors such as cameras and radar to detect hazards on the road (Yun et al., 2016). Currently, automakers use systems that are not standardized, as different choices are made depending on an organization's core values and mission. The main concern is whether a self-driving car can make choices as well as, or better than, a human can.
The Massachusetts Institute of Technology (MIT) has developed the Moral Machine, a platform intended to explore the possible choices an autonomous car could make on the road. Moral decisions are subject to bias, since people in different societies hold different views (Awad, 2017). What a scientist prefers may appear immoral to a religious leader, and vice versa. Such differences are among the dilemmas and drawbacks challenging this innovation. For instance, when I played the Moral Machine game, the result suggested that a driverless car should hit pedestrians crossing the road unlawfully and save the passengers, rather than plunge into an oncoming truck. This outcome is clearly against my preferred ethical lens, the relationship lens. Drivers must adhere to ethical principles to ensure safety and good relationships; with an autonomous car, however, this is a challenge, since robotic technology has neither feelings nor attitudes.
Under the relationship lens, individuals use their reasoning skills, or rationality, to make decisions that result in fairness, justice, and equality in the community. When faced with a controversial situation, critical thinking is applied to seek the truth and to ensure that the common good, rather than autonomy, is achieved. People with this focus value their relationships with others in the community and strive for fairness and justice (Graham, 2018). The powerless and the vulnerable are equal to everyone else, whatever the circumstances. Thus, when faced with the choice between hitting pedestrians who are crossing unlawfully, saving the passengers in the car, or striving to save all stakeholders, the relationship lens would emphasize the last option.
Every life matters, which means that both groups of stakeholders, the passengers and the pedestrians, rank equally. The results lens, by contrast, focuses on outcomes and encourages individuals to be mindful of their self-interest, or autonomy (West et al., 2016). Instinct or sensibility serves only to determine what is good for the individual; the common good is achieved by following one's intuition and putting personal interests first. As such, this ethical lens ranks passengers above pedestrians, since the driver or autonomous car should prioritize saving those in the car. Pedestrians are liable for breaking the law and should therefore bear the consequences of their wrong decision.
The relationship lens gives strong preference to rationality and equality in achieving justice and fairness in a community when a dispute arises. The powerless and vulnerable are treated the same as everyone else, and ideal actions and behaviors are grounded in truth and impartiality. Policymakers establish unbiased systems for resolving disputes, and care is accorded to everybody regardless of their status in society. Those found guilty of harming others are held to account, and rational reasoning is taken as the best perspective, as opposed to partiality (Roorda & Gullickson, 2019). Autonomous cars should therefore be programmed to apply rationality and equality so as to ensure the safety of all people. With full implementation of this lens, empathy, wisdom, and justice would be emphasized, enabling fair judgments that avoid adverse effects on either pedestrians or passengers. Safety was the basis for building automated cars in the first place; a greater focus on the relationship lens will help improve this innovation by maximizing safety and reducing accidents and deaths.
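As a purely illustrative sketch, the equal-weight programming that the relationship lens implies could take the form of a harm-minimization rule that counts passengers and pedestrians identically. Every maneuver name, probability, and head count below is a hypothetical assumption for the sake of the example, not an actual vehicle control system:

```python
# Hypothetical sketch: pick the maneuver with the lowest total expected harm,
# weighting every person equally, as the relationship lens implies.
# All maneuvers, probabilities, and counts are illustrative assumptions.

def expected_harm(maneuver):
    """Total expected casualties, counting passengers and pedestrians equally."""
    return (maneuver["p_passenger_harm"] * maneuver["passengers"]
            + maneuver["p_pedestrian_harm"] * maneuver["pedestrians"])

def choose_maneuver(maneuvers):
    """Pick the option with the lowest expected harm; no group is privileged."""
    return min(maneuvers, key=expected_harm)

options = [
    {"name": "brake hard",        "passengers": 2, "p_passenger_harm": 0.1,
     "pedestrians": 3, "p_pedestrian_harm": 0.2},
    {"name": "swerve into truck", "passengers": 2, "p_passenger_harm": 0.9,
     "pedestrians": 3, "p_pedestrian_harm": 0.0},
    {"name": "continue straight", "passengers": 2, "p_passenger_harm": 0.0,
     "pedestrians": 3, "p_pedestrian_harm": 0.9},
]

print(choose_maneuver(options)["name"])  # prints "brake hard" (harm 0.8)
```

The design point is simply that no term in the harm function privileges one group over another, which contrasts with a results-lens rule that would weight the passengers more heavily.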
Conversely, applying the results lens in this scenario will ultimately lead to an unethical decision, because instincts and emotions influence behaviors and actions. To uphold self-interest, an autonomous car would hit the pedestrians to save the lives of its passengers. Since everyone has the free will to choose how to behave and act in pursuit of personal goals and the greater good, those in the car would seek their own safety before the pedestrians' (Lin, 2015). The general belief is that people will make ethical decisions and take responsibility for their own actions; consequently, since the pedestrians are in the wrong, the decision to hit them would be justifiable in the courts, and the autonomous car would not be responsible for damages or injuries incurred. This lens thus produces an unethical decision and could lead to many accidents in the future. Humans have feelings and should use them to make fair and just decisions; killing pedestrians simply because they are in the wrong is immoral. Therefore, this lens should not be applied when establishing rules and regulations for automated cars.
In conclusion, to resolve the ethical dilemma created by the introduction of self-driving cars, regulations should be formulated to uphold the values of justice and equality. Every life matters, and no life counts for less than another. As such, those involved, including policymakers, automakers, programmers, and philosophers, need to research how to create automated vehicles that offer safer rides than human drivers can. Accidents occur in a matter of seconds, which makes it hard for drivers to make suitable decisions in time. Since driverless cars can detect hazards on the road earlier, the decisions their programs take should protect everybody using the road. The technology currently uses lidar and other sensors to detect danger; the system should be enhanced so that it can notice oncoming vehicles far enough ahead for emergency braking to be applied. The cars should also be programmed to ensure safety by notifying drivers to respond immediately in an emergency, especially in high-traffic areas. This will ensure that the common good, as emphasized by the relationship lens, is optimized.
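The claim that sensors must see far enough ahead for emergency braking can be made concrete with basic kinematics: total stopping distance is reaction distance plus braking distance, d = v·t_r + v²/(2a). The reaction time and deceleration values below are illustrative assumptions, not measurements from any real vehicle:

```python
def stopping_distance(speed_kmh, reaction_s=1.0, decel_ms2=7.0):
    """Total stopping distance in meters: reaction distance + braking distance.
    Uses d = v*t_r + v^2 / (2a); the 1 s reaction time and 7 m/s^2
    deceleration are assumed values for illustration only."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

# At 100 km/h (~27.8 m/s) the car needs roughly 83 m to come to a halt,
# so hazards must be detected at least that far ahead.
print(round(stopping_distance(100), 1))  # prints 82.9
```

Under these assumptions, a sensor suite that only resolves hazards at 50 m would leave a highway-speed vehicle no safe braking option, which is the essence of the argument for extending detection range.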
Since ethical conflicts are best resolved by engaging the opposing parties, using the ethical lens inventory posed some challenges, as reaching fair and legitimate decisions proved difficult. I struggled to analyze the dilemma because, in doing so, I had to choose one right course while negating another; every choice felt right and wrong at the same time. The desire to reach a decision sometimes pushed me to overlook the facts, values, and opinions of other parties, making the whole process extremely difficult. However, choosing the most important values was easy, since no answer was right or wrong. Once the inventory is completed, the evaluation tool automatically generates a printout describing the respondent's preferred lens.
Awad, E. (2017). Moral machines: Perception of moral judgment made by machines (Doctoral dissertation, Massachusetts Institute of Technology). Web.
Graham, P. (2018). Ethics in critical discourse analysis. Critical Discourse Studies, 15(2), 186−203. Web.
Lin, P. (2015). The ethical dilemma of self-driving cars. TED. Web.
Roorda, M., & Gullickson, A. M. (2019). Developing evaluation criteria using an ethical lens. Evaluation Journal of Australasia, 19(4), 179−194. Web.
Sam, D., Velanganni, C., & Evangelin, T. E. (2016). A vehicle control system using a time synchronized Hybrid VANET to reduce road accidents caused by human error. Vehicular Communications, 6, 17−28. Web.
West, D., Huijser, H., & Heath, D. (2016). Putting an ethical lens on learning analytics. Educational Technology Research and Development, 64(5), 903−922. Web.
Yun, J. J., Won, D., Jeong, E., Park, K., Yang, J., & Park, J. (2016). The relationship between technology, business model, and market in autonomous car and intelligent robot industries. Technological Forecasting and Social Change, 103, 142−155. Web.