Trump, platforms, and the arbiters of truth
Illustration by Ioanna Giannakopoulou
Recently, two separate events reignited the discussion of content moderation on online platforms. First, on May 26, the President of the United States, Donald Trump, tweeted misleading and inaccurate claims about the 2020 US Presidential Election and the vote-by-mail process. After a public outcry, Twitter decided to attach a fact-checking label to Trump’s tweets, urging users to “get the facts about mail-in ballots”. This was not the first time social media platforms had taken action against political leaders. With the coronavirus pandemic, social media networks have been under pressure to combat misinformation. For example, Venezuela’s President, Nicolás Maduro, and Brazil’s President, Jair Bolsonaro, have had posts taken down from Twitter, Facebook and Instagram for spreading misleading information. Second, on May 25, George Floyd was brutally murdered by police officer Derek Chauvin in Minneapolis, Minnesota.
While tensions were escalating across the US, and indeed around the world, Trump went on his favourite platform, Twitter, to attack the protesters. He called them “thugs” and warned them that “when the looting starts, the shooting starts”, quoting a 1960s Miami police chief who promoted police brutality against the Black community. This time Twitter went a step further. After years of leaving Trump’s tweets unchecked, it hid his tweet behind a warning because it “glorified violence”. Meanwhile, Facebook did nothing, angering many of its employees and partners. After the severe backlash, Facebook’s CEO, Mark Zuckerberg, pledged to re-examine certain content moderation rules. More promises, as always.
We should not forget how online content moderation often works: in the background, silently, by real people who review reported posts, people who are underpaid and lack the necessary expertise. They often suffer trauma because of the horrible things they have to see every day, without adequate psychological support or sufficient work benefits. Social media giants do not want you to see them, so, in typical capitalist fashion, they outsource the work to third-party companies, often in countries with cheap labour. The COVID-19 crisis exposed how seriously flawed this “business” is: we only paid attention to these people once posts started being wrongly flagged as spam, owing to the lack of human supervision and the increased reliance on Artificial Intelligence. These workers were not classified as essential, though their work evidently is. Some workers in the US benefitted from favourable decisions, but the same cannot be said for those overseas, as in Greece, where a major Facebook partner severely mistreated its employees.
Facebook recently announced 20 members of its new Facebook Oversight Board, something like an expert group of moderators. The Board (which could easily be the title of a Kafka novel) is allegedly independent from Facebook, but it is funded with $130m through a trust set up by the company. It will offer Facebook policy recommendations on content moderation and, in a narrow set of high-profile cases, decide which content should be allowed and which should not. So it does not challenge Facebook’s power; rather, it acts as a way to legitimise the company’s actions, to show that they care. It’s an exercise in self-regulation. In any case, Zuckerberg seems to contradict himself by funding what is essentially a fact-checker. Moreover, among the Board’s members is Emi Palmor, former director-general of Israel’s Justice Ministry, who set up an online task-force that monitored and censored Palestinian social media posts. Not very promising either.
Let’s go back to our Trump v Twitter story. What happened with Trump’s second tweet differs substantially from any previous case for two reasons: on the one hand, Twitter defined a piece of content as “glorification of violence”, which is an inherently political act; on the other, Trump was the ‘victim’. Note that the decision was taken by Twitter’s “Trust and Safety Council” and not by a public-relations or even a policy team. In other words, Twitter felt that this tweet put part of its user base in danger. Trump responded with a tantrum: he signed an Executive Order demanding a review of the 1996 law, and specifically of Section 230 of the Communications Decency Act, which shields intermediaries from being sued over content hosted on their platforms; the order will most likely not stand up in court. Trump has thrived on blasting social media as biased against Republicans and as part of the ‘establishment’, so it is no wonder he would do something like this. It is all part of his strategy, especially at a time when the US has suffered more than 100,000 deaths from the new coronavirus and its economy is already in recession.
Following suit, Snap, the parent company of Snapchat, removed Donald Trump’s account from its “Discover” feed for the same reasons. Facebook’s inaction towards Trump’s post has been attributed to a personal decision by Facebook’s chief. Zuckerberg seems to be playing Trump’s game. Why? Well, for starters, he wanted to appear unbiased in the eyes of conservatives. To make sure his message was clear enough, he went on Fox News, Republicans’ favourite channel, and said that platforms should not be “arbiters of truth”, indirectly blasting Twitter’s decision.
But this could not be further from the truth, and Zuckerberg knows it. He is simply avoiding responsibility and pretending to be neutral. Yet, like almost everything in life, platforms are not neutral; in fact, they are deeply political. Moreover, Trump has been steadily pouring money into Facebook’s advertising machine for years, making his campaign one of the most effective in the history of digital advertising. The way Facebook’s advertising system works favours large, consistent spenders like Trump, while allowing them to target specific audiences, a technique called “micro-targeting”. In fact, Facebook is the only tech giant that has not taken any effective action against political advertising, whereas Twitter has banned all political ads and Google now forbids candidates from specifically targeting voters.
Earlier this year, Zuckerberg and other Silicon Valley CEOs called for more regulation, saying that private companies should not have too much power over fundamental democratic issues. Of course, they do not wish for strict regulation but rather to pass their own terms. Germany and, very recently, France have adopted national laws that intervene in how platforms moderate content, allowing users and the police to flag content as “potentially harmful”. In Germany, the “Netzwerkdurchsetzungsgesetz” (“NetzDG” for short), passed in 2017, requires platforms to remove such content within 24 hours, with penalties of up to €50m for non-compliance. It was also recently amended to require platforms to be transparent about their moderation decisions. In France, the infamous “Lutte contre la haine sur internet” (or “Loi Avia”, after LREM deputy Laëtitia Avia) was finally passed on May 13, although it has been heavily criticised as a danger to freedom of expression by many civil society groups, experts, and the EU Commission. Both laws risk pushing platforms to “over-censor”: there is no penalty for removing too much, but heavy fines for removing too little.
The truth is that nobody wants to take responsibility for defining what constitutes hateful speech, though everybody seems keen to fight it. And this is reasonable, because otherwise we would indeed be putting our right to freedom of expression at great risk. But the same could be argued for an uncontrolled laissez-faire approach. The European Commission, for example, recommends self-regulation as the most effective approach to combating online hate speech, but as Trump’s post on Facebook showed, the interests of businesses rarely align with the common good.