Censorship online by Big Tech is a bad idea, in large part because it’s a distraction from the problem of how social media companies promote, spread and amplify harmful information, according to author Peter Pomerantsev.
“It’s ridiculous to think that you can regulate the billions of things people say every day, or that we should, or that it’s even feasible. So I don’t think that’s the way forward,” Pomerantsev said in an interview on “The Long Game,” a Yahoo News podcast. “There’ll be a way to get out of the whole tricky thing of taking one comment down or leaving it up.”
The way out, he said, is to force the tech companies to be transparent about how they manipulate the spread of information, and to hold them accountable for preventing public harms.
Pomerantsev is a Russian-born journalist now based in London whose parents were hounded by the KGB secret police in Soviet Russia. His book “This Is Not Propaganda: Adventures in the War Against Reality” argues that phrases like “freedom of expression” have been hijacked by authoritarian leaders and governments, such as Vladimir Putin in Russia and Rodrigo Duterte in the Philippines.
Authoritarians “use freedom of speech as an excuse to spread massive amounts of disinformation at the click of a button, while employing online mobs and troll farms to drown out and intimidate critical voices and obscure truth. This constitutes a sort of censorship through noise,” Pomerantsev and two others wrote in a recent article for the London School of Economics’ Institute of Global Affairs, where he is a visiting senior fellow.
But countering autocrats doesn’t have to mean removing ordinary people’s posts or banning them from their preferred social media platforms, he said, practices that have become a growing concern among many Republicans.
“We thought that for a long time, the federal government is infuriating,” Tucker Carlson said on Fox News Wednesday. “The bigger threat to your family turned out to be huge publicly held corporations, particularly the tech monopolies.”
In fact, the focus on censorship and “cancel culture” actually distracts from solving the problem of disinformation, and all the chaos, confusion and real-world harm it brings with it, in a way that preserves free speech, Pomerantsev said.
“A lot of the virality is amplified artificially. That’s kind of how a lot of these platforms were designed,” he said. “That kind of artificial amplification I think really has to end.
“Fake amplification — everything from gaming algorithms and search engine optimization through to amplification through coordinated inauthentic activity — I think that probably has to end if the internet is going to be a just reflection of society and not this kind of weird funhouse mirror that distorts everything,” Pomerantsev said.
One of the first steps toward reducing disinformation is algorithm transparency: revealing how the social media and Big Tech companies engineer which information rises to the top and is seen by large numbers of people. Google, Facebook and TikTok have all taken some recent steps in this direction, Axios reported this week, but those steps were voluntary, and most experts think the issue needs to be overseen by government regulators.
“When Trump’s people would say, ‘Google pushes conservative views right down, liberal news up,’ we don’t know” because Google has not shown anyone its formulas that shape search results, Pomerantsev said. “That’s ridiculous.”
Carlson addressed the same root cause on his show. “Twitter refuses to release data on who it bans,” he said.
Rep. Tom Malinowski, D-N.J., and Rep. Anna Eshoo, D-Calif., sent letters to Facebook, YouTube and Twitter in late January “urging the companies to address the fundamental design features of their social networks that facilitate the spread of extreme, radicalizing content to their users.” The letters were co-signed by 38 other House Democrats.
The lawmakers drew a straight line between the focus of social media companies on “maximizing user engagement” and the assault on the U.S. Capitol on Jan. 6 by Trump supporters who believed the former president’s lies about the 2020 election.
“The rioters who attacked the Capitol earlier this month were radicalized in part in digital echo chambers that these platforms designed, built, and maintained, and that the platforms are partially responsible for undermining our shared sense of objective reality, for intensifying fringe political beliefs, for facilitating connections between extremists, leading some of them to commit real-world, physical violence,” Malinowski and Eshoo wrote.
The lawmakers cited a Wall Street Journal investigation from last May that revealed Facebook knew in 2018 that its algorithms sometimes radicalized its users, but did not act to reduce this because doing so would reduce profits. “Our algorithms exploit the human brain’s attraction to divisiveness,” an internal presentation said, noting that the company was serving “more and more divisive content in an effort to gain user attention and increase time on the platform.”
Malinowski and Eshoo have proposed a change to Section 230 of the Communications Decency Act — a law targeted for reform by conservatives as well — that would hold tech companies “accountable for content they proactively promote for business reasons, if doing so leads to specific offline harms.”
Malinowski said in a hearing this week that this is a solution that Republicans and Democrats should be able to agree on. “We can believe that the biggest problem is on the right, on the far right or on the far left — it doesn’t matter. We can debate that. Whichever of those things you believe you should be for this, because the mechanism works the same way. It pushes people on the left further left. It pushes people on the right further right, until they reach an extreme.”
Pomerantsev pointed to the United Kingdom’s approach, which says — in his words — that “companies have to think about the harms they cause, and those harms could be around public health or some forms of personal abuse.”
“And the question is what are the companies doing — almost like in a health and safety kind of regime — to mitigate that? So are their algorithms making it too easy for people to bully others or to harass them?” Pomerantsev said. “Are the way their systems are designed making it too easy to spread this information that’s dangerous to people’s health?”
The British have said “there needs to be a regulator that’s making a judgment about whether they’re doing enough around those issues,” and they are working to set up a system in which Ofcom, the country’s communications regulator, could issue fines if the companies are found at fault.
The tech companies have lobbied the British government against giving Ofcom punitive regulatory powers.
But as Pomerantsev wrote in his book and expounded on in his interview with Yahoo News, the Big Tech companies have acquired so much information about their users — which is most people — that there is a real question about whether they are infringing upon freedom of thought.
To some degree “our private thoughts, creative impulses, and senses of self are shaped by information forces greater than ourselves,” he wrote in “This Is Not Propaganda.”
“Are they actually invading your freedom of thought? Are they actually crossing the line of you, and then using it against you?” he said. “What is that line of our unconscious that deserves to be protected?”