Do you think that Musk, Hawking, Wozniak, and Gates are right to worry about this type of innovation, or is it just an interesting subject for Hollywood movies? What should be done, if anything, to address this issue?
The question raised by Musk, Hawking, Wozniak, and Gates is highly controversial, and it cannot be answered unambiguously. At its current stage of development, artificial intelligence is focused on performing specific tasks; it lacks contextual awareness and the capacity for flexible learning. Because it merely imitates human activity, it does not yet pose a threat to humanity. In the future, however, it could become smarter than the people who interact with it (Jones 2015). As artificial intelligence gains new capabilities, the interaction between machines and people deepens, and with it grows the risk that the immoral aspects of society will be transferred to artificial intelligence (Jones 2015). In this regard, the experts' argument that artificial intelligence will become capable of improving and reproducing itself is compelling. To prevent possible negative consequences, it is necessary to take a deliberate, creative approach to upgrading machine intelligence and to engage specialists in collective oversight of its development.
You can see from just the brief list above – which provides a few examples of innovations in business models, human health, food production, and artificial intelligence – that there are some very divisive issues related to some of them. What, if any, limitations should be placed on innovations (this could include technological, process, business model, or other types of innovations)? In other words, just because something can be done, should it be done? Where would you set the limit? Make sure to provide some examples of innovations that you think may not be positive developments.
It can be argued that certain limitations should be applied to technological innovations. For instance, the use of robots has become standard practice: they are employed in agriculture, manufacturing, and other industries, while artificial intelligence is applied in medicine and military affairs. In both of these fields, semi-autonomous machines have become common (Kaplan 2016). Because such machines can carry out their tasks without human involvement, they pose a particular threat to the safety of people and the environment. Limitations on innovation should therefore target the degree to which robots and machine intelligence are permitted to act autonomously, and appropriate regulatory requirements should be imposed. Although artificial intelligence might eventually become independent, innovations that grant it greater autonomy should be regulated by the government (Kaplan 2016). Thus, autonomous robots and artificial intelligence deployed in military, medical, or other settings are examples of developments that may have negative implications.
Reference List
Jones, MT 2015, Artificial intelligence: a systems approach, Jones & Bartlett Learning, Burlington.
Kaplan, J 2016, Artificial intelligence: what everyone needs to know, Oxford University Press, Oxford.