The scenarios presented in The Matrix or The Terminator no longer seem so unrealistic: robots are already surgeons, scientists, builders, toys, and even people’s friends. However, will they remain friends? That is the question. Many have expressed concerns about the future partnership between people and machines that may one day surpass them. Greenwald (2017) notes that such emerging technologies are still new and that people do not yet fully understand the possible outcomes of AI development (par. 3). Some challenges, however, have already become apparent. Schmitt et al. (2011) state that AI is still unable to produce the “creative” solutions essential in areas such as disaster management, health care, and science (p. 169); at the same time, AI systems can pose considerable threats to people’s future because of their ubiquity in the modern world. Several strategies to address these challenges exist; however, keeping key areas strictly under human control is the only valid way to ensure people’s safety. This control can be implemented either through mechanical tools (a simple shut-off switch) or through sophisticated software viruses.
One strategy put forward to address machines’ empowerment, and often regarded as an effective solution, is requiring machines to explain their choices. At present, a machine’s algorithms (its ‘reasoning’) are protected by law as the private property of software developers. For Cave (2016), this is unacceptable, as it could be only a matter of time before machines decide to destroy humanity (par. 4). Indeed, while some believe that many potential problems can be avoided by understanding the way machines think (Basulto, 2015, par. 6), this solution does not stand up to scrutiny. Whether because of a lack of initial understanding or because a more intelligent AI chooses to be deceptive, if people cannot understand machines’ decisions, how can they understand machines’ explanations of those decisions? Finally, any realization that machines were aiming to destroy humanity could come too late, leaving people unable to do anything.
Another proposed solution is the alignment of machines’ needs with those of people. People fear that once AI systems become autonomous, they could regard humans as resources, or even as a threat, and in either case humanity could be destroyed (Basulto, 2015, par. 7). Researchers therefore assume that people will be safest if machines’ needs are aligned with at least their basic needs. Some claim that this alignment is possible if people “program in to each robot all of moral philosophy” (Cave, 2016, par. 6). This strategy is difficult to implement, however, as no absolute or complete moral philosophy exists so far. People are also unlikely to be able to make machines feel genuine empathy or take every human need into account.
Fortunately, there is an answer: people can secure complete control over the most important areas (for instance, weapons control). Indeed, Cave (2016) argues that this issue of control is central to the debate concerning AI (par. 2). Some commentators are pessimistic, claiming that autonomous systems will be able to surpass any limits and “bonds” created by humans (Basulto, 2015, par. 4). However, people can create viruses that enable them to shut such systems down. They can also override AI systems with simple tools such as shut-off switches, since these systems still need some kind of power to function, and people can control this power supply in case of emergency.
In conclusion, although various predictions about people’s co-existence with machines raise valid safety concerns, the most obvious and viable option for countering these issues is to ensure complete human control over key areas. Otherwise, people are potentially in great danger, as machines created to help them could easily use their capabilities to harm or destroy. It is also vital to continue developing different strategies and to seek further ways of safeguarding this control in order to ensure people’s safety and the future of humanity.
Works Cited
Basulto, Dominic. “The Very Best Ideas for Preventing Artificial Intelligence from Wrecking the Planet.” The Washington Post, 2015. Web.
Cave, Stephen. “Artificial Intelligence: A Five-Point Plan to Stop the Terminators Taking Over.” The Telegraph, 2016. Web.
Greenwald, Ted. “How AI Is Transforming the Workplace.” The Wall Street Journal, 2017. Web.
Schmitt, Diane, et al. Focus on Vocabulary. Pearson Education, 2011.