Streaming Murder on Facebook Live

Facebook is a social media resource that millions of people worldwide use for various purposes. As with all such global phenomena, traditional news sources often focus on cases of misuse. One such case is a shooting perpetrated by Steve Stephens in Cleveland, which he streamed using Facebook’s services. That case, among others, spurs debate on whether Facebook should create more prompt and punitive systems to purge violent content.

The prevailing discourse in the media holds that, in the name of “safety,” people must be shielded from violent content. Aiken explains how viewing violent content can desensitize and disinhibit people, leading them to perpetuate the violence they see (1). However, researchers who study online disinhibition point out that the phenomenon is situated firmly online: the factors that drive people to behave a certain way on the Internet are simply absent in the real world (Wu, 2).

The argument the media makes is unscientific, driven primarily by emotion and profit. I am deeply troubled by the ubiquitous drive to ban everything, create safeguards, and police the entire Internet. Not only is it unethical and authoritarian, but it also produces the opposite result. Hobbs and Roberts describe a kind of Streisand effect, in which actively suppressing content incentivizes users to seek it out and to evade the systems that block access (3). For example, footage of the Christchurch massacre was actively suppressed by social media platforms, which only led to it becoming a worldwide viral sensation.

The media outlets that report on violence in such a manner do not have the best solutions in mind. This phenomenon can be explained as a particular case of moral panic (Goode, 4). There is nothing new about deviant behavior, vicariousness, and morbid curiosity, yet the media reports on them as if they were an unprecedented epidemic of violence. A similar panic centers on violent video games, and the resemblance is apparent: Markey and Ferguson describe how researchers, legislators, and media outlets react disproportionately to a relatively insignificant problem and to factors entirely unrelated to it (5).

Following the shooting, many publications called for a more sophisticated system for removing violent content. Sentiments such as “Facebook must” and “Facebook’s responsibility” were thrown around by major news sources. Newcomb wrote that live video streaming is an essential democratic tool for holding the authorities accountable (6). The article refers to recordings of police shootings in which the victims were innocent African American civilians. That is a valid point, but the political salience of a murder does not change the fact that thousands of people will see a person shot to death. It reveals a troubling double standard: politically favorable murders are acceptable to show the public, while other murders should be instantly purged.

If Facebook has a moral obligation to save victims of violent crime or emotional turmoil, it is not clear how instantly suppressing content and banning people from the platform fulfills that obligation. Newcomb explains how Facebook gave suicidal streamers access to helplines and resources that might help them in times of crisis (6). That is a more constructive approach: instead of instant censorship, people are given more information. According to Aiken, Stephens explained his motives during the live recording of the shooting (1). Hearing out disturbed and murderous individuals can unveil elusive social problems that need to be solved. Helping people with mental health issues is just as important as exposing police shootings.

With that in mind, I believe that Facebook has, if not an obligation, then at least the means to help its users. Firstly, giving streamers the resources to help themselves in times of crisis can be instrumental in suicide prevention. Secondly, urging people to alert the authorities when a violent crime is being broadcast can improve police response. Thirdly, not trying to shut down questionable content can help it remain relatively obscure. It can also help expose more underlying problems in society, which is ultimately a worthy cause.

Even if, despite the evidence to the contrary, we assume that Facebook must continue to police its platform, there are ways to make these processes more ethical and user-friendly. Myers West provides numerous user reports that shed light on how little transparency and human contact there is in content policing (7). Users often find their content or accounts removed erroneously or maliciously, and they cannot reach an actual human to explain the reasons for the removal. Social media platforms automate these systems, which leads to frequent errors and convoluted appeal processes. Greater personalization and peer-reviewed human oversight could improve content policing and reduce error and bias.

Another problem is the outsourcing of content moderation abroad with little regard for the workers themselves. Moderators are often contracted from developing nations and given quotas for reviewing unacceptable content (Dwoskin, 8). What gets removed is often up to them, not to actual Facebook employees. These moderators work under severely restrictive NDAs that prohibit them from criticizing or exposing harmful workplace practices (Newton, 9). Restructuring their contracts and improving their working conditions would be an important step toward a more ethical platform.

In addition to its vast number of content moderators, Facebook has laid out plans to create an Oversight Board. The Board will consist of eleven to forty members and will review appeals of policy decisions as well as make recommendations for improvement (Constine, 10). The Oversight Board is primarily tasked with remedying the automated removals and lack of human interaction that Myers West described (7). However, the Board can also serve as a deflection against accusations of malpractice and unethical behavior. Another criticism is that the Board’s decisions are not actually binding: executives are free to ignore them.

Contrary to the discourse in the media, Facebook does not need more safeguards for its content. If existing practices are any indication, implementing new safeguards would further limit users’ freedom of expression. According to Constine, Facebook has already been pressured by political parties to remove ideologically incompatible content, despite it being neither violent, illegal, nor against Facebook’s terms of service (10).

More formal mechanisms for removing content would only harm the platform. The aforementioned automated appeal systems and lack of human contact would prevent users from contesting new removals and suspensions. It is not guaranteed that the nascent Oversight Board will fix these problems; its performance should be evaluated by the public before any new tools for censorship are introduced.

Facebook is a major social network that employs and outsources tens of thousands of workers. More than a billion people visit the platform daily and post an immeasurable wealth of content, and it is only natural that some of it features violence. That said, it is not yet clear whether such content actually harms regular users. The media is prone to moral panics, but decision-makers and scholars should keep a level head when tackling such sensitive issues. There are several ways to improve how Facebook handles violence, but these improvements clearly should not entail more censorship.

Sources

  1. Aiken, M. 2017. Web.
  2. Wu, S. 2017. Examining the antecedents of online disinhibition. Information Technology & People, 30(1), pp. 189–209.
  3. Hobbs, W. R., & Roberts, M. E. 2018. How sudden censorship can increase access to information. American Political Science Review, 112(3), pp. 621–636.
  4. Goode, E. 2017. Moral panic. In: C. J. Schreck (Ed.), The Encyclopedia of Juvenile Delinquency and Justice (pp. 1–3). Hoboken, NJ: John Wiley & Sons.
  5. Markey, P. M., & Ferguson, C. J. 2017. Teaching us to fear: The violent video game moral panic and the politics of game research. American Journal of Play, 10(1), pp. 99–115.
  6. Newcomb, A. 2017. Cleveland shooting highlights Facebook’s responsibility in policing depraved videos. Web.
  7. Myers West, S. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), pp. 4366–4383.
  8. Dwoskin, E. 2019. Web.
  9. Newton, C. 2019. The secret lives of Facebook moderators in America. Web.
  10. Constine, J. 2019. Web.