Introduction
The phenomenon of artificial intelligence, or AI, is not only deployed at scale across many fields of human activity but also firmly embedded in our culture. Since its emergence and introduction into various spheres of human activity, it has attracted the attention of scholars and the general public alike, owing to its almost mythological aura and its capabilities, which are strikingly similar to those of living beings.
Naturally, this similarity and the unparalleled novelty of the concept have served as a basis for much speculation, including the purported threat AI poses to humanity. The suggested origins and probabilities of this threat vary widely, ranging from purely fictional scenarios of a “rise of the machines” to serious works attempting to assess risks in specific fields, with the former being both the least realistic and, counterintuitively, the most popular and firmly established in the public consciousness.
Although most claims about the threat artificial intelligence supposedly poses to mankind are fictional and speculative, which lowers the credibility ascribed to all such suggestions, at least some of the concerns voiced about possible adverse effects on human society deserve in-depth inquiry rather than dismissal out of hand.
The relevance of the topic
First, it is important to understand why the topic is relevant. The primary reason is the ubiquity of the phenomenon: machines capable of processing data are present in almost every aspect of everyday life and in nearly any activity that involves data processing at all. This leads to the assumption that the level of control granted to artificial intelligence is so overwhelming that humanity has all but placed itself in the hands of another entity.
Moreover, this entity demonstrates traits previously thought to be exclusive to intelligent beings, and it exhibits some of them far more effectively than we do. Simply put, the narrative holds that AI is so deeply in control of our daily lives that once it “decides” to take matters into its own hands, little can be done to oppose it. At this point, the narrative usually invokes the incredible pace at which events are developing and AI capabilities are growing, and concludes that once artificial intelligence becomes capable of making its own decisions (an outcome presented as inevitable and fast approaching), it could, and probably will, seize power, or at least disrupt the current state of affairs.
It thus becomes evident why this debate rarely reaches the depth and scope it demands: such assumptions are rarely scientific or even remotely backed up by evidence. The matter is further complicated by the fact that most specialists in artificial intelligence tend to view the question as an extension of a popular myth and dismiss it as unsupported by hard evidence or simply not worth discussing.
The common rebuttal usually points to the uneven distribution of the capabilities AI possesses. While the calculation speed of a modern computer exceeds that of a human by many orders of magnitude, this advantage applies only to the most basic mathematical operations. As the complexity of operations increases, computers gradually lose the comparison, to the point where the most basic acts of information interpretation routinely performed by an average person become impossibly complex for even the most up-to-date machine. The primary reason is the complexity of the information itself and the role of factors such as context in interpreting it.
The most common conclusion is that “AI is still not imaginative or smart enough to pose a serious threat.” However, while the notion of an actively malevolent AI can be safely dismissed on these grounds, the same cannot be said of other complications that still arise from the ubiquity of AI but remain unaddressed, as scholars rarely go beyond the argument above.
Artificial superintelligence: beyond rhetoric
Despite the vagueness characteristic of the current debate, some important questions can be singled out as predominant. Karamjit Gill, in his article “Artificial superintelligence: beyond rhetoric” (2016), outlines several major issues that may legitimately be called threats posed by AI. First, he points to progress in automation technologies, which requires that at least a partial degree of freedom be granted to artificial intelligence for it to become more effective.
While the technology already allows rather complex problems to be solved, the question remains as to what ethics the AI should apply in the process. A good example supplied by Gill is the forthcoming Google self-driving car, which will be capable of automatic guidance and, as a result, will inevitably face controversial situations, such as the need to weigh the life of its passenger against the life of the driver of an oncoming car that threatens a collision (2016, p. 137).
This is a well-known problem, partly because humanity itself has not yet been able to give a definitive answer to it. Moreover, the question reaches far beyond the domain of automated transportation. Gill also mentions automated weapon systems as a more plausible threat (2016, p. 138) – one arising not from deliberate action but from the miscalculations that often occur when AI deals with multi-layered, context-sensitive data.
Finally, the paper highlights the vulnerabilities that emerge in the economy as a result of introducing AI into its processes. In particular, Gill points to the complications that inevitably arise as the economy becomes increasingly digital and thus less transparent and less amenable to human control (2016, p. 137). He develops the idea further by introducing the concept of artificial general intelligence (AGI), a self-learning, self-regulating system that is not easily monitored (2016, p. 139).
This opacity introduces unpredictable outcomes and additional risks that are still difficult to estimate, owing both to the novelty of the concept and to the rapid pace of its development. Gill concludes the paper with the notion that “the debate on artificial super-intelligence highlights the need for an ongoing conversation between technology and society” (2016, p. 143).
Conclusion
The paper by Gill does not give definitive answers to the questions it raises. Instead, it suggests general directions for further inquiry, such as creating the instruments needed to develop solutions for managing current risks. The paper is especially valuable, however, because it identifies the feasible, actual risks connected to AI and separates them from those that are generally dismissed as unsubstantiated and that hamper progress on real threats. In other words, the paper helps us see the threat AI currently and potentially poses to the economy and to other branches of human activity.
References
Gill, K. (2016). Artificial super intelligence: Beyond rhetoric. AI & Society, 31(2), 137–143.