In a Senate Judiciary Committee hearing yesterday, there was a striking change of scenery: rather than grilling the floating heads of Big Tech CEOs, senators questioned policy leads from Twitter, Facebook, and YouTube about the role algorithms play on their respective platforms. The panel also heard from two independent experts in the field, and the results were less theatrical and perhaps more substantive.
Democrats and Republicans alike expressed concern over how algorithms shape discourse on social platforms and how they can drive users toward ever more extreme content. “Algorithms have great potential for good,” said Sen. Ben Sasse (R-Neb.). “They can also be misused, and we the American people need to be reflective and thoughtful about that.”
The Facebook, YouTube, and Twitter execs all emphasized how their companies’ algorithms can help achieve shared goals, such as finding and removing extremist content, though they admitted that de-radicalizing social media remains a work in progress.
The first of the two outside experts was Joan Donovan, director of the Technology and Social Change Project at Harvard University. She pointed out that the main problem with social media is the way it’s built to reward human interaction, something bad actors on a platform can and often do use to their advantage. “Misinformation at scale is a feature of social media, not a bug,” she said. “Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.”
One of Donovan’s proposed solutions sounded a lot like community cable TV stations. “We should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to curate timely, local, relevant, and accurate information.” She also suggested that the platforms beef up their content moderation practices.
The other panel expert was Tristan Harris, president of the Center for Humane Technology and a former designer at Google. For years, Harris has been vocal about the perils of algorithmically driven media, and his opening remarks didn’t stray from that view. “We are now sitting through the results of 10 years of this psychologically deranging process that have warped our national communications and fragmented the Overton window and the shared reality we need as a nation to coordinate to deal with our real problems.”
One of Harris’ proposed solutions is to subject social media companies to the same regulations that university researchers face when they conduct psychologically manipulative experiments. “If you compare side-by-side the restrictions in an IRB study in a psychology lab at a university when you experiment on 14 people—you’ve got to file an IRB review. Facebook, Twitter, YouTube, TikTok are on a regular, daily basis tinkering with the brain implant of 3 billion people’s daily thoughts with no oversight.”