Should Social Media Companies Do More to Combat Disinformation and Extremism on Their Platforms? | Teen Ink


May 27, 2022
By Anonymous

Social media has emerged as a powerful tool for communication, connection, and community. It has created new, highly accessible channels for spreading information, opinions, and ideas. Its impact on real-world communities is complex and rapidly evolving; it stretches across international borders and carries enormous influence on a global scale. But this reach brings a problem. The internet is full of misinformation, disinformation, and radical ideas, and some people argue that social media sites should censor extremism and misinformation, while others say that doing so defeats the purpose of a "free speech" platform. This raises the question: should social media companies do more to combat disinformation and extremism on their platforms?

Naturally, it is hard to track down people spreading dangerous information on social media; the platforms are challenging to police for the same reason they are useful, namely how easily news can be shared. The pro side believes that social media platforms need to do more to censor misinformation and extreme ideas. They argue that "social media companies have chosen profit over responsibility" and that "[i]t is not a violation of free speech for private companies to moderate content" (ProCon.org). Supporters of "companies doing more to combat disinformation and extremism on their platforms argue that the same algorithms and features that have made Facebook, Twitter, and other sites so engaging have also created forums ripe for abuse" (Infobase). In short, they say that because social media corporations control the flow of information, they should be held accountable for what is posted on their platforms and should censor anything extremely radical.

However, others believe that social media companies should allow everything to be posted because it is free speech. Opponents say there is no point in calling a platform a place for free speech when some opinions get sidelined and censored. They find it absurd that we now live in a world where four or five companies, "all of which are unaccountable," have the monopoly power to decide whether people will be erased from any digital platform. Opponents argue that allowing the small group of individuals who run prominent social media platforms to decide what counts as hate speech or extremism would inevitably introduce human bias and lead to the unfair censorship of politically unpopular views.

In my opinion, there should be some restriction on what people post, but social media platforms should only censor content where the case for removal is clearest. Speaking from personal experience, I have come across very explicit material that plainly violates the platforms' terms of service. Platforms need to censor sexually explicit content and dangerous personal information, for example, but that raises another problem. During the pandemic, companies including Facebook, Twitter, and Google began deleting posts containing inaccurate or misleading information about the coronavirus in the context of a worldwide public health emergency, which forced them to make judgments about what counts as real and fake news. Except as authorized by the Constitution, the government should not be able to restrict freedom of speech on social media or anywhere else; the law already covers the worst cases, since someone who threatens to kill their partner on Facebook, for example, can be charged with making criminal threats.

To conclude, the question is whether there is a sweet spot between freedom and restriction, and how these platforms could reach the best balance. It is vital for users to be able to treat social media as a haven where they will not be attacked for expressing their opinions, as long as they do not endanger others. But that is easier said than done. Social media platforms must reach a sensible compromise, providing users with a secure space to voice their opinions without fear of backlash or slander. Many other countries, even Western democracies, have weaker free speech protections than the United States and have adopted far more aggressive measures to counter hate speech and misinformation online. In France, for example, judges can order false news to be removed from the internet during election campaigns. In 2017, Germany passed a law requiring social media sites to remove unlawful, racist, or slanderous remarks within one day or risk millions of euros in fines.

Works Cited

Mercy Corps. "The Weaponization of Social Media." Mercycorps.org, Nov. 2019, www.mercycorps.org/sites/default/files/2020-01/Weaponization_Social_Media_Brief_Nov2019.pdf. Accessed 16 May 2022.

Ortner, Daniel. "Government Regulation of Social Media Would Kill the Internet — and Free Speech." The Hill, 12 Aug. 2019, thehill.com/opinion/technology/456900-government-regulation-of-social-media-would-kill-the-internet-and-free/. Accessed 12 May 2022.

ProCon.org. 29 June 2020, socialnetworking.procon.org/. Accessed 13 May 2022.

Reputation Defender. "Top Five Social Media Privacy Concerns." ReputationDefender.com, 12 Jan. 2022, www.reputationdefender.com/blog/privacy/top-five-social-media-privacy-concerns. Accessed 27 May 2022.

"Should Social Media Companies Do More to Combat Disinformation and Extremism on Their Platforms?" Issues & Controversies, Infobase, 26 Apr. 2022, icof.infobase.com/articles/QXJ0aWNsZVRleHQ6MTY0OTY%3D?articleType=PRO_CON_ARTICLE&q=Social+media#Additional%20Sources. Accessed 12 May 2022.

Zhuanlan.zhihu.com. "Should Social Media Platforms Regulate the Content That Users Post? Should They Have To?" Zhuanlan.zhihu.com, 19 Apr. 2021, zhuanlan.zhihu.com/p/366179338#:~:text=In%20conclusion%2C%20social%20media%20platforms,the%20content%20that%20users%20post. Accessed 18 May 2022.


