The more outrageous the lie, the better it is for Facebook’s bottom line

By Ellen P. Goodman and Karen Kornbluh

Los Angeles Times

Mark Zuckerberg, Facebook’s CEO, has an easy justification for his decision to run political ads on the platform no matter what lies they tell. He doesn’t want to play censor, so it’s up to the public to decide what’s true and what’s false.

Twitter took a different path. Rather than promote or police false claims in political ads, it decided to ban them all. Jack Dorsey, Twitter’s CEO, tweeted that he had to act given the “entirely new challenges to civic discourse” presented by online political ads because of “machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes.”

Dorsey at least acknowledged the unprecedented power of digital platforms to monetize speech that deceives, divides and weakens democratic discourse. But his solution won’t work.

In the week since Twitter banned political ads, we have already seen that slack definitions and uneven enforcement may be as bad as Facebook’s do-nothing approach. If Twitter continues to define “political” to include issue ads on controversial topics — versus “electioneering” ads that advocate for the election or defeat of a candidate — the resulting morass is predictable. Critics warn that ads promoting fossil fuels will not be considered “issue ads,” but ads promoting responsible climate policy will be. So Big Fuel will get paid reach while green advocacy gets deep-sixed.

What to do, then? The platforms want to make this debate about free speech, not about how their algorithms and use of personal data amplify speech. The conversation they want to avoid is the one about how they make money. That’s why it’s important to focus on their structure, not only on individual pieces of content.

Digital platforms try to engage users with their services for as long and as intensively as possible. This lets them sell ads and gather personal data, which in turn generate more value. It turns out that lies generate outrage and fear, and these emotions generate engagement. So as long as a platform’s financial returns align with outrage, it is optimized for information rubbish. It is difficult, consistent with free speech values, to stop the dissemination of bad information. But what we can do is check the dominance of platforms that profit from misinformation and empower users to defend against it.

Political advertisers — like pretty much all advertisers — have to buy from Facebook. The ads they run are not like broadcast TV and radio ads. Rather, they can be microtargeted to very small segments of the public, often those most susceptible to conspiracy theories or fearmongering. These platforms take advantage of what Jonathan Albright has called “data-driven ‘psyops’” that can “tailor people’s opinions, emotional reactions, and create ‘viral’ sharing.”

In fact, the platforms know that the messages that get the most engagement are those provoking outrage and emotional responses. Facebook’s own data show that messages get more reach the closer they get to crossing the platform’s outer bounds of what’s allowed in terms of hate speech, violence and the like. In other words, the more hateful or scary the message, the quicker and further it spreads.

Users can’t always tell which content comes from trusted sources, which is a sponsored ad and which is bot-produced. They may not know who is speaking to them, and yet many will share and spread the messages just the same.

Given the nature of social platforms, there are no obvious or clear solutions to curbing disinformation operations. Still, there are steps that would promote a healthier online sphere, all of which require structural, not content-based, regulation: Limit microtargeting for election ads, require more transparency around political ad spending and audience targeting, and encourage more competitive pressure on platforms.

We need some combination of federal privacy and advertising law to constrain the ability of platforms to target users with election ads by using their personal data. It’s long past time to limit the way these companies collect and deploy personal information in general. But this is especially true when it comes to election-related communications — politicians should not be able to use personal data about our behaviors and interests to manipulate potential voters without accountability.

It’s also time to make digital platforms live by the disclosure rules that apply to TV and radio broadcasters. Neither Facebook’s ad labeling nor its ad archive disclosures tell people who is really paying for ads or how the ads are targeted. Fake companies can pop up just to advertise on social media without revealing who is really funding the ads. New York University researchers, for example, found that “Citizens for Tobacco Rights” sponsored hundreds of ads on Facebook. The ad disclaimer would not clue anyone in to the fact that this entity, which seems to exist just to place ads, was operated by tobacco company Philip Morris.

Just as broadcasters must maintain public files on political ad buys, the digital platforms should have to provide comparable transparency. The proposed Honest Ads Act would bring digital media in line with TV and cable political advertising rules.

The arguments about Facebook’s ad policies are so fraught because the company is so powerful. Digital platforms are the new media gatekeepers, with 55% of Americans getting news from social media often or sometimes. Yet these platforms bear none of the obligations to the public that legacy media shoulder.

Regulation was key to fostering competition in the telecommunications industry, and it should play the same role here. Let people easily port their social media data to other social platforms and use other services interoperably with the Facebook platform. That won’t stop all disinformation campaigns, but it might give users more power to resist them and give other platforms the breathing room to develop alternative business models.

Ellen P. Goodman is a professor at Rutgers Law School and Karen Kornbluh is director of the Digital Innovation and Democracy Institute at the German Marshall Fund.