Point: Regulate social media business models to mitigate harmful speech

Commentary

By Yosef Getachew

InsideSources.com

On Jan. 6, a violent mob of insurrectionists stormed the U.S. Capitol in an attempt to overturn our country’s 2020 presidential election. The attack, which resulted in the deaths of five people, was fueled by a constant stream of disinformation and hate speech that Donald Trump and other bad actors flooded across social media platforms before, during and after the election. Despite their civic integrity and content moderation policies, platforms have been slow or unwilling to take action to limit the spread of content designed to disrupt our democracy.

This failure is inherently tied to platforms’ business models and practices, which incentivize the proliferation of harmful speech. The content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have implemented business models designed to maximize user engagement, prioritizing profits over combating harmful content.

While the First Amendment limits the government’s ability to regulate speech, there are legislative and regulatory tools at its disposal that can rein in the social media business practices bad actors exploit to spread and amplify speech that interferes with our democracy.

The core component of every major social media platform’s business model is to collect as much user data as possible, including characteristics such as age, gender, location, income and political beliefs. Platforms then share relevant data points with advertisers for targeted advertising. It should come as no surprise that disinformation agents exploit social media platforms’ data-collection practices and targeted advertising capabilities to micro-target harmful content, particularly at marginalized communities. For example, the Trump campaign used Facebook to target millions of Black voters with deceptive information to deter them from voting.

Comprehensive privacy legislation, if passed, can require data minimization standards, which limit the collection and sharing of personal data to what is necessary to provide service to the user. Legislation can also restrict the use of personal data for discriminatory practices that spread harmful content, such as online voter suppression. Without the vast troves of data platforms collect on their users, bad actors will face more obstacles to targeting users with disinformation.

In addition to data-collection practices, platforms use algorithms that determine what content users see. Algorithms track user preferences through clicks, likes and other forms of engagement. Platforms optimize their algorithms to maximize user engagement, which can mean leading users down a rabbit hole of hate speech, disinformation and conspiracy theories. Algorithms can also amplify disinformation, as they did when conspiracy theorists used the “stop the steal” moniker across social media platforms to organize and mobilize offline violence.

Unfortunately, platform algorithms are a “black box,” with little known about their inner workings. Congress should pass legislation that holds platform algorithms accountable. Platforms should be required to disclose how their algorithms process personal data. Algorithms should also be subject to third-party audits to mitigate the dangers of algorithmic decision-making that spreads and amplifies harmful content.

Federal agencies with enforcement and rule-making authority can use that authority to limit the spread of harmful online speech that results from platform business practices. For example, the Federal Trade Commission can use its enforcement power against unfair and deceptive practices to investigate platforms for running ads with election disinformation despite having policies that prohibit such content. The Federal Election Commission can complete its longstanding rule-making to require greater disclosure of online political advertisements, providing greater transparency about which entities are trying to influence our elections.

Outside of legislative and regulatory processes, the Biden administration should create a task force for the internet, consisting of representatives from federal, state and local governments, business, labor, public interest organizations, academia and journalism. The task force would identify tools to combat harmful speech online and make long-term recommendations for an internet that would better serve the public interest.

There is no silver-bullet solution for eliminating disinformation, hate speech and other harmful online content. In addition to these policy ideas, federal lawmakers must provide greater support for local journalism to meet the information needs of communities.

But social media companies have proven that profits matter more to them than the safety and security of our democracy. Federal lawmakers and regulators must enact policies as part of a holistic approach to hold social media platforms accountable for the proliferation of harmful and false content. The insurrection on Jan. 6 revealed that our democracy may well depend on that accountability.

Yosef Getachew is director of the Media & Democracy Program for Common Cause. He wrote this for InsideSources.com.