Introduction
AI is increasingly used for security purposes, raising multiple concerns. Its main users are law enforcement agencies and businesses, which deploy facial recognition and related technologies. It is most widely used in China and the US, with the former heightening concerns about AI as a threat to privacy, society, and security. Among these concerns are discrimination, lack of transparency, privacy violations, and data breaches.
Main body
Currently, law enforcement agencies and various companies are actively adopting AI. It is implemented in surveillance, facial recognition, crowd monitoring, and analytics that flag areas with a high likelihood of crime. Law enforcement also submits legal requests for user data to companies such as Apple, Microsoft, and Facebook. AI-driven policing is especially common in China and the US, with the US Department of Homeland Security advocating for the global sharing of biometric information (McGowran, 2022). The US program aims to create a means of sharing biographic and biometric information to ensure border security and the vetting of information.
While some jurisdictions view the use of AI as beneficial, I do not agree. Primarily, it poses a high risk to privacy, society, and security. Regarding accuracy, facial recognition technologies have drawn severe criticism: their performance depends on multiple factors, such as lighting and camera position, as well as a person's physical features, and they produce false negatives. Misidentification rates are especially high for African and East Asian individuals, whereas Europeans are usually the least affected (Chin & Lee, 2022).
The danger to privacy stems from the fact that some AI technologies, such as facial recognition technology (FRT), are described as silent and passive: they require neither consent nor individuals' awareness of their use. This becomes especially concerning as FRT is increasingly used to identify suspects in public. Finally, given the limited transparency, the large amount of personal information being stored raises concerns about how the police will use AI data. There is always a risk of data being stolen by malicious actors, such as cyber attackers, for harmful purposes. No system is safe from security breaches, making everyone especially vulnerable (Gülen, 2023; Tuck, 2022). Thus, AI represents a danger to stakeholders due to these multiple risks.
The adoption of AI by companies and law enforcement has become a deeply concerning issue. It is most prominent in China and the United States. China, in particular, illustrates what may happen when AI is used to violate privacy and human rights (Gravett, 2022). Through the Sharp Eyes program, artificial intelligence can identify everyone who appears on camera. It combines online purchases, travel records, and social media activity with a police cloud to control those perceived as delinquent.
I perceive this primarily as a legal issue, since the regulation of AI use by police and companies has not been fully refined. It can also be viewed as a moral one, since everyone has the right to privacy and to protection from malicious individuals who may use their information for harmful ends, such as identity theft or financial fraud.
As mentioned earlier, the issues surrounding the use of AI include a lack of transparency, privacy and human rights violations, and discrimination. For example, in May 2022, the UK's Information Commissioner's Office fined Clearview AI for privacy violations. The company was ordered to stop obtaining and using the personal data of UK residents and to delete it from Clearview's systems (Henriquez, 2022). The company had been collecting this data without people's consent for an online facial recognition database.
Conclusion
In conclusion, I consider the use of AI by companies and law enforcement a threat to safety, privacy, and society. It creates a severe risk of discrimination, data breaches, and privacy violations. With the US and China as its primary users, the latter represents a worst-case scenario for AI implementation and its negative outcomes. The non-consensual collection of people's data, and its potential theft by cyber attackers, is yet another worrying concern.
References
Chin, C., & Lee, N. T. (2022). Police surveillance and facial recognition: Why data privacy is imperative for communities of color. Brookings. Web.
Gravett, W. (2022). The dark side of artificial intelligence: Challenges for the legal system. Judicial Commission of New South Wales. Web.
Gülen, K. (2023). Artificial intelligence security issues: AI risks and challenges. Dataconomy. Web.
Henriquez, M. (2022). Clearview AI was fined over $8 million for data privacy violations. Security Magazine. Web.
McGowran, L. M. (2022). AI for policing: Where should we draw the line? Silicon Republic. Web.
Tuck, A. (2022). Now you see me! How businesses use AI within surveillance. Technology Magazine. Web.