Ethical Issues in the Artificial Intelligence Field

Introduction

When the term Artificial Intelligence (AI) is mentioned, many people think first of its negative and morally troubling aspects, and most AI debates revolve around the morally problematic issues and outcomes that must be resolved. However, it is essential to note that AI also has various positive features, including the economic benefits of increased efficiency and productivity. These benefits translate into greater wealth and wellbeing, which enable people to lead better lives. AI also saves humans from tedious, repetitive tasks through its ability to analyze vast quantities of data. Most of the negative ethical issues surrounding AI arise from a policy perspective. This study analyzes the bias and accountability issues arising from freedom of expression, copyright, and the right to privacy, and uses the ethical frameworks of utilitarianism and deontology to propose policies for addressing them.

Freedom of Expression

AI has faced distinct challenges in the online media environment, where the application of automation has negatively affected freedom of expression. AI is an essential part of the information processing technologies used by social media platforms, search engines, and other internet services (Llansó et al., 2020). However, several issues affecting its accountability have been raised, such as false positives, in which legitimate content is classified as objectionable, and false negatives, in which the AI misses content that should be flagged as offensive. When the system produces a false positive, it can flag or remove content that is not objectionable, burdening users and restricting their freedom of expression (Llansó et al., 2020). Conversely, when it produces a false negative, it may fail to address harassment or hate speech, which can discourage individuals from participating. This indicates that automation raises bias and accountability issues because it can undermine freedom of speech through either type of error.
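To illustrate how both error types can arise from the same automated filter, the following is a minimal, hypothetical sketch of a keyword-based moderation rule; the blocklist, function name, and example posts are invented for illustration only and do not represent any real platform's system.

```python
# Minimal sketch of a keyword-based moderation filter (hypothetical example;
# the blocklist and posts below are invented for illustration only).

BLOCKED_TERMS = {"attack", "kill"}  # assumed toy blocklist, not a real policy


def is_flagged(post: str) -> bool:
    """Return True if the post contains any blocked term."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return bool(words & BLOCKED_TERMS)


# False positive: benign speech is flagged, burdening freedom of expression.
print(is_flagged("Our team will attack this problem from a new angle."))  # True

# False negative: harassment phrased without blocked terms goes unflagged.
print(is_flagged("Nobody wants you here. Just disappear."))               # False
```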

Automated systems have also demonstrated bias against underrepresented groups, leading to the suppression of their freedom of expression. Algorithms perform poorly on marginalized groups defined by ethnicity, political leaning, or non-dominant language, largely because of biased or insufficient training datasets (Llansó et al., 2020). Because data reflects real-world inequalities and biases, the automated systems trained on that data may amplify them. This can lead to significant repression of freedom of expression for marginalized individuals and communities.

Copyright

Most AI systems are designed to work by viewing, reading, and listening to works created by humans, which raises the issue of copyright. AI systems must use books, films, photographs, recordings, articles, and videos that are protected by copyright. Under copyright law, the owners of these works have the exclusive right to reproduce them in copies, and anyone who violates that right is an infringer (Levendowski, 2018). Innovative technologies such as reverse-engineering software have long attracted skepticism on these grounds. However, there is still no clear rule on whether the materials used to train AI systems constitute copies under the Copyright Act of 1976 (Levendowski, 2018). Whether materials assembled for AI training purposes can be regarded as “copies” and thus be subjected to copyright claims therefore remains a much-debated question.

The other issue is the use of copyrighted works in training AI systems. Judge Pierre Leval once asked whether taking an interesting cartoon from The New Yorker magazine and photocopying it to stick on a fridge would be considered infringement (Levendowski, 2018). Although the cartoon is copyrighted material, the de minimis doctrine would not treat that act as infringement. This suggests that courts can expect to handle many cases of this kind in the future. Copyright law thus raises three challenges: access, accountability, and competition. Concerning access, copyright law may favor certain works over others, encouraging AI trainers to rely on legally low-risk and easily available training data (Levendowski, 2018). From the competition perspective, the law can be used to restrain the implementation of bias mitigation strategies in existing AI systems and thereby limit competition. This shows that copyright law can shape both access and competition in AI development.

Right to Privacy

Another common ethical issue regarding AI concerns privacy and data protection. The two terms are not synonymous in the AI context: privacy refers to concealment, while data protection is mainly about shielding informational privacy. Because AI uses large datasets for training, there is a risk that unauthorized parties may access that data. Additionally, an accountability problem arises from the ability of AI to detect patterns even without direct access to personal data (Gerke et al., 2020). For instance, one study showed that AI could identify the sexual orientation of individuals on Facebook, which poses clear privacy concerns (Gerke et al., 2020). This shows that AI can exploit data in ways that were not previously foreseen, making accountability difficult.

Additionally, AI has been linked to data security problems, particularly cybersecurity. This has been a significant challenge in the digitalization of data, subjecting ICT departments to new security threats such as model poisoning attacks and other forms of detection and exploitation (Gerke et al., 2020). The reliability of AI systems has also been a concern: there are persistent worries about the opacity of machine learning systems and the unpredictability of their outcomes. The quality of machine learning output largely depends on the quality of the training data, which is difficult to measure, making accountability hard to determine. In addition, the integrity of the data is at stake and vulnerable to exploitation by other organizations and technical users, which implies that privacy remains a major ethical issue concerning AI.

Deontological AI Ethics

Freedom of Expression

The first issue, freedom of expression, is most effectively addressed through the ethical framework of deontology. This perspective is based on rules rather than outcomes, and it supports equality by treating all humans as equal; thus, everyone in society has the right to be heard (Guinebert, 2020). Deontology is well suited to forming a policy on this issue because it protects minority groups. Automated moderation can otherwise be used to marginalize the expression of minority groups by targeting the words they commonly use. For example, in the US the word “Nigga” is considered offensive yet is widely used by Black Americans, so an AI trained simply to flag the word may hinder African Americans’ freedom of speech. The best solution is therefore to regulate platforms through transparency, accountability, and non-discrimination requirements.

Utilitarianism: Copyright Claims

Utilitarianism should be used to address the ethical issue of copyright claims. It is essential that every person’s work is appreciated and that creators benefit from their creativity. In developing AI systems, training data has been treated differently by different companies: some allow sharing, while others assert copyright claims. It is vital to note, however, that the main goal of AI is to make human lives easier and more flourishing (Guinebert, 2020). Utilitarianism judges an action as good when it maximizes people’s happiness. From this perspective, there is no need for copyright claims over AI training data, which is intended to improve the wellbeing of humanity; moreover, most training data is derived from existing materials, so copyrighting it would not be ethical. Therefore, for the good of the people, the most effective policy would be that no company should be allowed to claim copyright over AI training data.

Data Privacy

Finally, the remaining ethical issue concerns data privacy. This has been a major problem because of cyber-attacks and other threats to data. To ensure that data is well protected, the most effective framework to apply is deontology (Guinebert, 2020). This theory supports data privacy by requiring that all rules regarding the handling of personal data are followed. Thus, the most effective policy would be for every company that holds digital information or applies AI to adopt a data protection policy and install the necessary software to guard its data.

Conclusion

AI remains controversial, with some emphasizing its benefits and others its harms. However, technological advancement is inevitable, and the world should be prepared for widespread AI by 2050. It is therefore urgent to address the ethical issues concerning AI so that these systems become more predictable and effective in the future. When well utilized, AI will play a great role in relieving humans of repetitive and tedious jobs.

References

Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare, 295–336.

Guinebert, S. (2020). Zeitschrift für Ethik und Moralphilosophie, 3(2), 279–299.

Levendowski, A. (2018). How copyright law can fix artificial intelligence’s implicit bias problem. Georgetown Law Faculty Publications and Other Works, 93(579).

Llansó, E., Van Hoboken, J., Leerssen, P., & Harambam, J. (2020). Artificial intelligence, content moderation, and freedom of expression.
