Autonomous Vehicles: The Ethical Operational Parameters Research Paper


Introduction

Most cars today are at levels 1 and 2 of autonomy: control is mostly exercised by the driver, while features such as cruise control and automatic parking assist. Large companies (Google, Toyota, Tesla, Volvo) are striving for complete automation of driving. The technology shows high development prospects and continues to advance, and remarkable progress has been made in the autonomous vehicle industry over the past few years. In 2014, the professional association SAE International proposed a classification of unmanned vehicles according to their degree of automation. The classification includes six levels ‑ from zero (no automation) to five (full automation).

General Statistics on Autonomous Vehicles

In mid-November 2019, Gartner calculated how many autonomous vehicles appeared in 2018 and 2019. In 2018 the total number of new unmanned vehicles was 137,129 units, and in 2019 it more than doubled, reaching 332,932 units (Gartner, 2019). According to analysts, by 2023 the number of autonomous cars will reach 745,705 units. The increase will be observed mainly in North America, Western Europe, and China, since the countries of these regions will be the first to introduce unmanned driving rules (Dhawan, 2019, p. 15). Although the forecast promises a rapid increase in the number of autonomous vehicles, as of November 2019 no country in the world had adopted regulations permitting the legal operation of autonomous vehicles. Manufacturers are not ready to invest in the development of models that cannot enter the market in the foreseeable future.
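The growth implied by the Gartner figures above can be checked with simple arithmetic. The following sketch uses only the numbers cited in this section; the derived percentages are illustrative back-of-the-envelope calculations, not Gartner's own analysis.

```python
# Figures cited above (new autonomous vehicles per year, per Gartner 2019).
av_2018 = 137_129
av_2019 = 332_932
av_2023_forecast = 745_705

# Year-over-year growth, 2018 -> 2019: the count more than doubled.
yoy_growth = av_2019 / av_2018 - 1

# Compound annual growth rate implied by the 2023 forecast (4 years out).
cagr_forecast = (av_2023_forecast / av_2019) ** (1 / 4) - 1

print(f"2018 -> 2019 growth: {yoy_growth:.0%}")        # about +143%
print(f"implied 2019 -> 2023 CAGR: {cagr_forecast:.0%}")  # about +22% per year
```

The forecast thus assumes growth slowing sharply from the 2018-2019 pace, consistent with the regulatory uncertainty the section describes.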

The Responsibility of Manufacturers and Drivers. Research Thesis

The lack of regulations is associated, in particular, with the difficulty of determining the legal liability of manufacturers and drivers of autonomous vehicles. In the ethical aspect, the responsibility of manufacturers of such cars is clearly no less than that of pharmacological companies, since in both cases it is a matter of ensuring people's safety. Testing of autonomous vehicles should therefore be no less rigorous than clinical trials. For example, despite numerous tests, the sensors and cameras of unmanned vehicles are not yet able to register every detail of the driving environment.

Legal responsibility cannot be considered outside the ethical parameters of the operation of unmanned vehicles. Moreover, the development of the concept of legal responsibility should be based on ethical assessments, at least from the standpoint of utilitarian ethics. For autonomous vehicles to become a universal good, engineers will have to teach the system how to make difficult decisions (Li, Zhang, Wang, Li, & Liao, 2018, pp. 2-3). What should artificial machine intelligence do when faced with a choice: knock down a pedestrian, or crash into a lamppost and endanger the life of the driver? Who should be given priority in this dilemma? In such situations a person reacts spontaneously, whereas a car's choice is programmed in advance. Thus, legal liability should be based precisely on ethical parameters. Given the long process of building a body of precedent, the ethical principles of operating autonomous vehicles will help legislators develop appropriate regulations on the division of responsibility between the manufacturer and the driver for different levels of vehicle autonomy. This paper therefore examines the extent of the ethical operational parameters of autonomous vehicles and their impact on development and legal liability.

Operation of Autonomous Vehicles

Different types of automation. As noted above, the autonomy of unmanned vehicles, i.e., their level of independence from the driver, is rated on a scale from 0 to 5. The levels are set by SAE International, a professional association of automotive engineers. SAE standards have been adopted for use by government regulators, engineers, car manufacturers, and investors (Ryan, 2019, p. 6). They describe six levels of automation, from its complete absence to a fully automated control system ‑ one that behaves like a qualified driver in any situation. One can speak of autonomous driving from the second level onward, at which the autopilot system resembles that used in passenger aircraft. This means the autopilot can be used for most of the journey, but the driver must take control when the system cannot cope on its own ‑ for example, in an emergency road situation. The autopilot can be turned on or off at any time at the driver's will and controls steering, vehicle speed, and braking. At level 5, a person is required to do nothing except start the autopilot and set a destination. In other words, a Level 5 system must travel to any place a qualified driver can drive, under any conditions a qualified driver can handle, completely independently. For now, however, cars at the fifth level of automation exist only in the plans of many large companies. Unusual situations often arise on the roads, and the logical question is whether artificial intelligence can successfully cope in all such cases.
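The level descriptions above can be sketched as a small enumeration. The short comments are paraphrases of the SAE J3016 levels as described in this section, not official SAE wording, and the helper function is an illustrative assumption about where supervision duties lie.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative encoding of the SAE automation levels discussed above."""
    NO_AUTOMATION = 0          # driver performs all driving tasks
    DRIVER_ASSISTANCE = 1      # e.g. cruise control or lane keeping, one at a time
    PARTIAL_AUTOMATION = 2     # system steers, accelerates, and brakes,
                               # but the driver must supervise at all times
    CONDITIONAL_AUTOMATION = 3 # system drives; driver must take over on request
    HIGH_AUTOMATION = 4        # no driver involvement within defined areas
    FULL_AUTOMATION = 5        # no driver needed anywhere a human could drive

def driver_must_supervise(level: SAELevel) -> bool:
    """At levels 0-2, the human driver remains responsible for monitoring."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```

The boundary between levels 2 and 3 is exactly where the liability questions discussed later in this paper become acute: it marks the handoff of the monitoring duty from human to machine.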

Transition to autonomous vehicles and challenges on the road. Jean-Francois Bonnefon and his colleagues found that people generally support the idea that, in a critical situation, the car should crash into a wall or otherwise sacrifice the driver to save a larger number of pedestrians. At the same time, the same people want to ride in cars that protect the driver at all costs, even if this results in the death of pedestrians (Bonnefon, Shariff, & Rahwan, 2016). This conflict puts manufacturers of computerized cars in a difficult position: between a car programmed for the benefit of the majority and one programmed to protect the passenger, the vast majority of buyers will choose the second.

Comparison with the human decision process in driving situations. The authors of the study on the social dilemma of autonomous cars believe there are other difficult moral issues in this area. Autonomous vehicles will have to make decisions in emergency situations whose consequences cannot be predicted in advance (Merat & Jamson, 2017, pp. 514-515). Is it permissible, for example, to program a car to avoid a collision with a motorcyclist by crashing into a wall? After all, the passenger of the car is in that case more likely to survive than the motorcyclist who collides with it. A high level of automation without complete autonomy can create a false sense of security for the driver, which means it will be difficult for drivers to quickly take matters into their own hands if the automatic control system ceases to cope. That is why proponents of full automation propose moving as soon as possible to what SAE's classification calls the fourth level ‑ autonomy without driver involvement in certain areas. For example, the car that Ford plans to put into production in 2021 is to be fully automated ‑ it will have neither a steering wheel nor a brake pedal. This means the driver will never have to take over control of the vehicle ‑ it is expected to know what to do in any situation (Bagloee, Tavana, Asadi, & Oliver, 2016, p. 288). In this regard, a logical question arises about the very need for full automation of cars.

The first and main reason is the desire to reduce mortality in accidents caused by driver error. The human factor outweighs all technical problems combined (failed brakes, a stuck gas pedal, a navigator directing the car into an abyss, and so on). However, it should be remembered that artificial intelligence is not without flaws either. Let us draw an analogy with aviation, where the primary and most important problem in creating autopilots is likewise maintaining flight safety. In the event of a malfunction or breakdown of the autopilot, it must be possible to turn the system off in the usual way or mechanically, and options for disabling it without harm to the flight are carefully thought out during development. To increase safety, control automation operates in multi-channel mode: up to four piloting systems with the same parameters and capabilities can work in parallel. The system also carries out continuous analysis and monitoring of incoming information signals. The flight is conducted on the basis of a quorum method, which consists of making decisions according to the data of the majority of systems (Cusick, Cortes, & Rodrigues, 2017, p. 26). In the event of a breakdown, the autopilot is able to independently select a further control mode ‑ switching to another control channel or transferring control to the pilot. To check the operation of the systems, a so-called preflight run is carried out: a step-by-step program that emulates flight signals. Whether such a run is performed before an unmanned vehicle starts moving is a rhetorical question. Moreover, whether an automatic transfer of control to the occupant of an autonomous vehicle is provided for in the event of an emergency or breakdown is a matter of critical importance.
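The quorum method described above can be sketched in a few lines: redundant channels compute the same command, and the value reported by a strict majority is accepted. The channel outputs and the fallback policy below are illustrative assumptions, not taken from any real avionics or automotive system.

```python
from collections import Counter

def quorum_vote(channel_outputs):
    """Return the command agreed on by a strict majority of redundant
    channels, or None if no majority exists ‑ the degraded case in which
    a real system would switch channels or hand control to the pilot."""
    if not channel_outputs:
        return None
    value, count = Counter(channel_outputs).most_common(1)[0]
    return value if count > len(channel_outputs) / 2 else None

# Three of four channels agree, so their command wins despite one fault.
print(quorum_vote(["brake", "brake", "accelerate", "brake"]))  # brake
# A two-way split yields no quorum: fall back to human control.
print(quorum_vote(["brake", "accelerate"]))  # None
```

The sketch makes the section's point concrete: majority voting masks a single faulty channel, but the system still needs a defined fallback ‑ precisely the mechanism whose presence in unmanned cars the paper questions.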
Also, unlike, for example, an airplane engine failure, a tire that bursts at full speed leaves no time lag for “reflection” and countermeasures ‑ if the car is moving at high speed, it is almost impossible to avoid an instant catastrophe when one of the wheels is destroyed.

Ethical parameters of autonomous vehicles

Safety of Self-Driving Cars

When unmanned vehicles were introduced, real problems began to appear that analysts had previously only predicted. In February 2016, a car with an autonomous control system developed by Google was involved in a minor accident: a Lexus RX450h, maneuvering around an obstacle on the road, grazed a bus moving nearby, having incorrectly calculated the likely actions of its driver and decided that he would give way. Google admitted partial fault for the incident and said that it had modified the software of its cars in response. In May of the same year, a fatal accident occurred involving a Tesla moving with the autopilot function engaged (Martinez-Dias & Soriguera, 2018, p. 177). Such cases only strengthen the discussion and development of legislative provisions and regulations, as well as ethical concepts for the operation of autonomous vehicles.

Advancement of Artificial Intelligence and Ethical Concerns

Autopilot faults in airplanes have caused many crashes and accidents with human casualties. Unfortunately, the history of air crashes caused by automatic control systems is rich in evidence of the unreliability of such systems ‑ and this despite the very strict certification principles and state regulation in that industry (Cusick, Cortes, & Rodrigues, 2017, p. 64). This casts doubt on the reliability of autopilots in cars as well, since here high competition, pushing automakers to launch new models on the market as soon as possible, leads to reduced time and budgets for R&D, including autopilot development. The pursuit of greater profit and a larger market segment may push safety concerns into the background.

Need for Human Input in Emergency Situations

When programming unmanned cars, scientists face a dilemma: whether to program a car to save drivers or pedestrians. Autonomous cars can revolutionize the transportation industry, but they pose a social and moral dilemma that can slow the spread of this technology (Bagloee et al., 2016, p. 286). The goal of the creators of unmanned vehicles ‑ to make car traffic safer and more efficient ‑ is noble but naive. Hundreds of nuances of road traffic in a particular country, and even a specific city, play a role that cannot be reduced to the task of increasing the efficiency of movement. Researchers have gathered opinions on what moral principles should guide self-driving cars in emergencies when human casualties cannot be avoided and the ‘lesser of evils’ must be chosen. Respondents from different countries disagreed on many points, such as whether to care primarily about saving children, women, or those who follow the rules of the road. The revealed differences correlate with well-known economic and cultural characteristics of countries and regions (Holstein, 2018). The study showed that developing a universal “moral law” for self-driving machines will not be easy. It is unclear how self-driving cars should behave in emergency situations in which the distribution of risk between people depends on a decision made by artificial intelligence. Autonomous vehicles are expected to find themselves in such situations less often than cars driven by humans. Still, this will sometimes inevitably happen, which means that artificial intelligence must be ready to solve moral dilemmas in the spirit of the famous “Trolley Problem.”

The “Trolley Problem”

In a number of other studies of the “trolley problem” as applied to unmanned vehicles, it turned out that the majority of respondents generally supported the so-called utilitarian approach ‑ approving, in any situation, the solution that leads to the minimum number of victims. At the same time, the adoption of such a decision as law was approved by fewer participants, especially when it concerned their own car rather than the situation “in general” (Anderson et al., 2016, p. 8). A seemingly simple solution has been proposed: a car, even in an emergency, must strictly adhere to the rules of the road. However, again drawing an analogy with aviation, we recall that airline guidelines contain a clause allowing the pilot flying the aircraft to deviate from any instruction if he acts on experience and knowledge and in order to complete the flight safely. It is precisely the human factor ‑ or rather, human intelligence and its creativity ‑ that can save passengers' lives in ways an onboard computer cannot, whether in an airplane or in a car.
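The utilitarian rule the surveys describe reduces, in its crudest form, to picking the maneuver with the fewest expected casualties. The sketch below is deliberately simplified: the maneuver names and casualty counts are invented for illustration, real systems would face uncertain probabilities rather than known outcomes, and the point of this section is precisely that such a rule is ethically and legally contested.

```python
def utilitarian_choice(options):
    """Pick the maneuver minimizing expected casualties.
    options: dict mapping maneuver name -> expected number of casualties."""
    return min(options, key=options.get)

# Hypothetical emergency: staying on course endangers three pedestrians,
# swerving into a wall endangers the single occupant.
scenario = {
    "stay_on_course": 3,
    "swerve_to_wall": 1,
}
print(utilitarian_choice(scenario))  # swerve_to_wall
```

Note what the function cannot express: whose casualties count, how to weigh occupant versus bystander, or the respondents' reluctance to ride in a car programmed this way ‑ the very disagreements the studies document.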

The human factor affects the navigation of unmanned vehicles much more than it seems, including in a negative sense. Engineers have taught cars to obey traffic rules, but the cars do not know how to respond properly to violations by other drivers (Cunningham & Regan, 2015). If someone speeds or ‘cuts off’ the car, this can lead to a navigation failure due to the lack of appropriate algorithms. In the future, this problem may be solved using V2V (vehicle-to-vehicle) technology. A similar service helps planes avoid collisions: they exchange information about their position, speed, and direction. For now, however, an automotive analog of the service is only in development. Moreover, the technology will have a real impact only if the vast majority of cars on the road are equipped with it.

Driver Behavior and Liability. Manufacturer Liability.

In addition to the technological challenges, the transition to mass use of autonomous cars requires resolving many issues at the level of legislative regulation. Normative documents are needed that define the basic technological and legal concepts in this area, regulate the permissible uses of such technologies in general, and establish responsibility in the event of incidents with unmanned vehicles.

The problem of legal regulation of autonomous vehicles can negate all their advantages. It is necessary to determine the elements of administrative and criminal offenses associated with the use of unmanned vehicles (Ryan, 2019, p. 18). It is also necessary to identify the subjects of civil liability for property damage caused through the fault of unmanned vehicles. All these problems remain unresolved and require the development of a new legislative framework to regulate this area.

Current Existing Laws

In one form or another, regulatory documents in this area have already been submitted or are being developed in some countries. The United States has advanced especially far here: in 2011, Nevada became the first state in the country to regulate the use of autonomous vehicles on the roads and the related issues of insurance, safety, and testing (Smith, 2014, p. 415). Arizona lawmakers tried to pass legislation regulating the use of unmanned vehicles on state roads. However, they could not solve the core problem: who should be responsible for an accident involving an autonomous car ‑ the owner of the car, the company that developed the technology, or the automaker who made and sold the car? In California, in December 2015, the authorities announced preliminary rules for unmanned vehicles; fixing these rules in law is required before any such vehicle can be sold to consumers. The rules require drivers to be ready, if necessary, to take complete control of their car. The regulations proposed by the California Department of Motor Vehicles state that drivers remain responsible for complying with traffic regulations, whether they are driving or not.

Legislators also propose placing part of the responsibility on the autonomous car's operator (Ryan, 2019, p. 19). The operator is the passenger, but the right to complete passivity will not be granted to him any time soon. If the operator sees a dangerous situation and illogical actions by the autopilot, his duty is to intervene and try to avert the danger. Here lies a contradiction: the technology invites one to relax, while the law keeps one in suspense. Questions regarding the responsibilities of people with disabilities also remain open.

Prosecution when an accident occurs involving a semi-autonomous car does not differ from the situation when the accident involves a car under the driver's control. The presence of an autopilot can affect the question of accountability only if the manufacturer claims full autonomy of the car, with no need for human supervision for safe movement. In that case, an accident caused by a defect in the autopilot may be grounds for placing civil liability ‑ compensation for harm ‑ on manufacturers, as well as for bringing to justice the individuals guilty of the defect. Here we are talking about shifting responsibility from the owner of the car to its manufacturer and the responsible individuals. Some authors have proposed that manufacturers be “strictly liable” for personal injuries caused by driverless automobiles (Hubbard, 2015, p. 1867). Never before have manufacturers faced the need to produce cars protected against the very fact of an accident or against the mistakes of other drivers. In the case of autonomous cars, questions arise about the correctness of the “decisions” the car makes on the basis of the algorithms embedded in it. As long as cars require the presence of a driver, improper operation of the autopilot should not be considered a defect leading to an accident, and the presence of an autopilot should not affect the rules of accountability applicable to accidents involving a conventional car. An exception may be the situation in which the owner of a car with an autopilot proves that he did not know about the need to monitor it constantly.

Discussion on Future Legal Frameworks

Unmanned vehicles open up a new legal world. In the case of ordinary cars, responsibility for an incident is likely to rest with the person sitting inside the car. But an unmanned vehicle is a hardware-software complex influenced by many parties, experts explain: the manufacturer of the car itself or of special equipment, the developer of its artificial intelligence system, the service company, and the owner of the fleet to which it belongs. It will not be easy to find the culprit among them.

Risk insurance products for drivers of unmanned cars already exist. The passenger (“driver”) insures his liability in case of an incident, as well as his life and health. Such insurance has two functions: it allows the occupant (especially with the earlier versions) to relax along the way, and it protects manufacturers of unmanned vehicles from claims ‑ in this case, insurance companies assume responsibility. The European Parliament proposes considering a special legal status of “electronic personality” for complex robots that make independent decisions, so that they can be held liable for compensation of the damage they cause. One proposal of the European deputies is to create a special fund, by analogy with car insurance, from which damage would be compensated when no usual insurance coverage applies. The European Parliament has called on insurance companies to develop new products appropriate to the development of robotics (Ryan, 2019). Currently, auto insurance covers only human actions and mistakes, while an insurance system for robots should take into account all possible liability.

It should be noted that the introduction of innovative technologies in any area of our lives entails changes, including in the legal field. Accordingly, legal regulation of this sphere should keep ahead of actual technological developments so that, with reasonable regulation and protection of the interests of the various parties to these legal relations, it does not deter but rather contributes to the development of advanced technologies in the interests, and under the control, of the human mind.

Conclusion

Summarizing Main Points

Research in the field of the ethical and legislative implications of the spread and development of autonomous vehicles allows the conclusion that today this is one of the most uncertain and challenging areas for both ethics and law. Both the development of a universal “moral law” for self-driving machines that clearly defines the algorithm of action in a critical situation and the development of the necessary legislation are very difficult, even somewhat deadlocked, tasks. However, the comparison with the aviation industry suggests the advisability of requiring the driver to monitor the situation while the car is moving and to be able, if necessary, to turn off the autopilot and take control.

Potential Recommendations

Driver comfort should not be placed above public safety. Thus, the production of cars without a steering wheel and pedals (corresponding to the fifth level of autonomy, which all well-known automakers strive for) seems impractical, at least in the near future. Retaining driver involvement will also simplify the problem of legislative regulation of unmanned vehicles by clearly distributing responsibility between the driver and the automaker. Black-box data will help determine the cause of each accident, and this approach can eliminate the ethical dilemmas of AI decision-making described above. The experience of aviation can become a solid basis for considering the ethical and legal implications of autonomous vehicles.

Final Statement

As autonomous vehicle technology expands and becomes more popular, manufacturers and lawmakers must collaborate to define ethical boundaries and capabilities in line with the legal liabilities of the applicable jurisdictions, ensuring safe and efficient implementation.

References

  1. Anderson, J. M. et al. (2016). Autonomous vehicle technology: A guide for policymakers. RAND Corporation.
  2. Bagloee, S., Tavana, M., Asadi, M., & Oliver, T. (2016). Autonomous vehicles: Challenges, opportunities and future implications for transportation policies. Journal of Modern Transportation, 24(4), 284-303.
  3. Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
  4. Cunningham, M., & Regan, M. A. (2015). Autonomous vehicles: Human factors issues and future research. Proceedings of the 2015 Australasian Road Safety Conference, 14-16 October, Gold Coast, Australia.
  5. Cusick, S., Cortes, A., & Rodrigues, C. (2017). Commercial aviation safety. New York, NY: McGraw-Hill Education.
  6. Dhawan, C. (2019). Autonomous vehicles plus: A critical analysis of challenges delaying AV nirvana. Victoria, Canada: FriesenPress.
  7. Gartner. (2019). Web.
  8. Holstein, T. (2018). Ethical and social aspects of self-driving cars. ArXiv.
  9. Hubbard, F. P. (2015). “Sophisticated robots”: Balancing liability, regulation, and innovation. Florida Law Review, 66(5), 1803-1872.
  10. Li, S., Zhang, J., Wang, S., Li, P., & Liao, Y. (2018). Ethical and legal dilemma of autonomous vehicles: Study on driving decision-making model under the emergency situations of red-light running behaviors. Electronics, 7, 1-18.
  11. Martinez-Dias, M., & Soriguera, F. (2018). Autonomous vehicles: Theoretical and practical challenges. Transportation Research Procedia, 33, 275-282.
  12. McBride, N. (2015). The ethics of driverless cars. SIGCAS Computers & Society, 45(3), 179-184.
  13. Merat, N., & Jamson, A. H. (2017). How do drivers behave in a highly automated car? Proceedings of the Fifth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, 514-521.
  14. Ryan, M. (2019). The future of transportation: Ethical, legal, social and economic impacts of self-driving vehicles in the year 2025. Science and Engineering Ethics, 25(109), 1-24.
  15. Smith, B. W. (2014). Automated vehicles are probably legal in the United States. Texas A&M Law Review, 1(3), 412-521.