Why deplatforming Trump is no atonement for Silicon Valley's sins abroad
Comment: The debate over deplatforming Trump misses the point: from India to Myanmar, Silicon Valley has failed to police hate speech for years, writes Jillian C. York.
15 Jan, 2021
Twitter, Facebook, YouTube, Snapchat, Instagram and others have all banned or restricted Trump's activity [Getty]
As US President Trump's second impeachment was voted through this week, a number of conservatives have also been removed from social media platforms - particularly Twitter. Though it's clear from their statements that many of them lack even a rudimentary understanding of how the law works, that hasn't stopped members of Congress from loudly proclaiming themselves victims of censorship.

Trump is by no means the first politician to be deplatformed by a tech company. In 2019 - and again last week - Twitter suspended Iran's Ayatollah Khamenei first for a threatening tweet, and later for spreading disinformation about Covid-19 vaccines. Both suspensions were temporary.

More troublingly, Facebook bans Lebanese political party Hezbollah (which is designated a foreign terrorist organisation by the United States) from its platforms, while simultaneously allowing political parties in the country with similarly violent histories to post unimpeded - the result is, effectively, a US-based company meddling in the politics of a sovereign nation.

While the removal of these individuals (the president included) by platforms isn't censorship in the legal sense, the fact remains that social media platforms exercise an extraordinary amount of power over our expression. And while the president of the United States has plenty of other places to express himself, ordinary users - and in particular, more vulnerable individuals including activists, dissidents, LGBTQ youth, and journalists - often lack other platforms for expression, particularly if their governments are also engaged in censorship.


But the laser focus on the impact of deplatforming President Trump, and the argument that the decision to do so "sets a precedent", misses another key point that has been raised by digital rights experts from outside the United States and Europe: platforms give far less scrutiny to the speech of politicians abroad.

Perhaps the most troubling example of this comes from Myanmar where, despite warnings as early as 2013 that its platform was being used by politicians and other public figures to foment violence, Facebook failed to act for years, doing so only after a United Nations investigation and a subsequent Reuters report accused the platform of contributing to the ongoing genocide in the country.

Translation: "We have always asserted that our Lebanese belonging is superior to any other, and we have said that our superiority is genetic, which explains our similarities and differences, our adaptability, fluidity and rigidity on one side, and our complete refusal of the displaced and the refugee on the other."

Although the company eventually removed 20 military generals and other officials in the country, advocacy groups rightly assert that there is more Facebook should be doing to limit the spread of hate speech.

Similarly, India's ruling Hindu nationalist Bharatiya Janata Party (BJP) has been accused of peddling misinformation and hate speech on Twitter, while armies of its supporters have engaged in harassment of minority communities on the platform, without penalty.


The Middle East is no stranger to this phenomenon. Mohamad Najem, director of the advocacy group SMEX, says that in Lebanon, politicians engaging in hateful speech on platforms "is the norm."

"Politicians at least in Lebanon built their fan base upon their sectarianism, and from this all the other norms are rooted naturally; like xenophobia, bigotry, misogyny, [and] hate speech," says Najem. "Companies really don't spend resources nor give any attention to small countries. They barely have any content moderation people from our region, and they interfere only to turn off a fire."

Indeed, as I write in my forthcoming book, "US social media companies operate in a diverse set of global communities, moderating content in around fifty languages, [but] the amount of investment in content moderation teams pales in comparison to that of engineering and development, or mergers and acquisitions."

Conversely, these companies are often all too willing to remove or locally limit speech when asked to do so by certain foreign governments. For instance, most companies regularly remove content at the behest of Turkey, including speech that would be protected under international human rights standards, and treaties such as the International Covenant on Civil and Political Rights, to which the country is a party.


Furthermore, Silicon Valley tech companies are all too happy to operate in countries that demonstrate blatant disregard for human rights, including Turkey, the United Arab Emirates, and Israel, among others.

Finally, while the United States does not intervene in matters of speech at home, it is all too happy to do so abroad, in the form of restrictions stemming from sanctions as well as laws around the support of foreign terrorist organisations. While the latter may seem like a reasonable restriction to many, the actual result can be much more insidious: As researchers from the group Syrian Archive have documented, Silicon Valley is effectively erasing history in Syria by removing key documentation of war crimes in the country.

So, what should be done? While the incoming Biden administration is focused on repealing Section 230 - the law that protects platforms from liability for the speech they host - and European lawmakers aim regulations at hate speech, there are plenty of changes that platforms can make today: They should institute transparency measures and ensure that all users have a path to remedy. They should place far more resources into their moderation systems, investing in human moderators around the globe and paring back the automation instituted in the wake of the pandemic. And perhaps most importantly, they should broaden their engagement beyond US and European lobbyists and civil society to ensure true inclusivity in their policymaking processes.


Jillian C. York is a writer and activist whose work examines the impact of technology on our societal and cultural values. Based in Berlin, she is the Director for International Freedom of Expression at the Electronic Frontier Foundation, a fellow at the Center for Internet & Human Rights at the European University Viadrina, and a visiting professor at the College of Europe Natolin.

Follow her on Twitter: @jilliancyork

Have questions or comments? Email us at: editorial-english@alaraby.co.uk

Opinions expressed here are the author's own, and do not necessarily reflect those of her employer, or of The New Arab and its editorial board or staff.