
Everyone agrees: Facebook, Twitter should block disinfo—but probably won’t

2020 misinformation is already proving hard for platforms to handle, with November still months away.

If you're feeling extremely cynical about social media's preparedness for the rest of the madcap 2020 election season, you're in good company: a whopping three-quarters of Americans don't expect Facebook, Twitter, or other large platforms to handle this year any better than they handled 2016. That finding comes from the Pew Research Center, which polled Americans about their confidence in tech platforms to prevent "misuse" in the current election cycle. A large majority of respondents think platforms should prevent misuse that could influence the election, but very few think they actually will.

Overall, only 25 percent of respondents said they were very or somewhat confident in tech platforms' ability to prevent that kind of misuse, Pew found. Meanwhile, 74 percent reported being not too confident or not at all confident that services would be able to do so. The responses were remarkably similar across Republican-leaning and Democratic-leaning respondents.

A similar number, 78 percent, said technology companies have a responsibility to prevent their platforms from being misused. Here, Pew did find fairly significant differences in response, driven not by political affiliation or belief but by age. While fewer than three-quarters of respondents under age 50 said the platforms have that responsibility, a striking 88 percent of respondents 65 and older said social media services have a duty to prevent abuse.

Younger respondents were also the most likely to think that platforms could or would do something about it: 31 percent of those ages 18-29 said they were confident tech firms would prevent election-influencing misuse. That number dropped to 26 percent among those ages 30-49, 24 percent among those ages 50-64, and just 20 percent among respondents 65 and older.

The 2020 trenches

We are, at long last, actually shambling through the primary election season, with Super Tuesday landing in less than a week. The trouble with 2020, though, has been apparent since the curtain closed on the troubled 2016 cycle. And the challenges are both foreign and domestic.

Russia's use of social media to influence the outcome of the 2016 election is by now extremely well-documented. A report (PDF) from the Senate Intelligence Committee rounded up and outlined the methods that Russia's Internet Research Agency (IRA) used to launch "an information warfare campaign designed to spread disinformation and societal division in the United States," including planted fake news, carefully targeted ads, bot armies, and other tactics. The IRA used, and uses, several different platforms, including Twitter, YouTube, and Reddit, but its primary vehicles for outreach are Facebook and Instagram.

In an attempt to mitigate the harm social media can do during election season, Twitter updated its election integrity policy in April and moved to ban all political advertising from candidates starting last November. Google a short time later tightened its rules on false claims and microtargeting in political advertising.

Facebook, however, is taking a different approach. The globe-spanning social network has repeatedly said its standards do not apply to politicians, and political ads can be full of lies without falling afoul of Facebook's rules. There are nominally some limits: attempting to suppress voter turnout or census participation, for example, will get your ad kicked off the service. But consistent enforcement of that winding, dotted line is not going well. Rather than prohibit deliberately misleading content, Facebook has said the onus is on users to try to see less of it.

Facebook does work regularly to remove what it terms "coordinated inauthentic behavior." When the platform detects a group of fake accounts trying to manipulate users, it kicks them off, posting updates several times per year about removing batches of bad accounts based in Russia, Iran, or dozens of other nations. But platforms are having a much harder time figuring out what to do with coordinated authentic behavior.

Facebook (both for itself and Instagram) and Twitter have been given a handy case study in the form of Mike Bloomberg. As part of his campaign strategy, Bloomberg has been paying social media influencers to, well, influence on his behalf, without following the usual protocol for advertisements. That kind of thing is against the rules on both Facebook and Twitter. Twitter suspended 70 accounts affiliated with the Bloomberg meme machine for violating policies that ban users from acting to "artificially amplify or disrupt conversations through the use of multiple accounts" or from paying others to generate "artificial engagement or amplification, even if the people involved use only one account."

Facebook seems less sure of how to move forward. Some Instagram accounts with millions of followers are participating in the Bloomberg ad blitz while flipping between public and private to avoid public scrutiny. And when someone doesn't use the proper disclosure tools, Facebook (Instagram's parent company) doesn't really have a way to know what's going on.
Facebook doesn't "have visibility into financial relationships taking place off our platforms, which is why we’ve asked campaigns and creators to use our disclosure tools," a spokesperson for the company told The New York Times. The company also apparently has not yet decided what to do about campaigns that simply ignore its process.