The falling dominoes: on tech platforms’ recent user moderation

Written by Farzaneh Badiei


In the past week, tech platforms had to remove and ban a user for inciting violence. That user happened to be the President of the United States. It is hard to argue that tech platforms did not see this coming, since they deal with all kinds of other harmful behavior on their platforms. It is also hard to argue that they had a governance structure in place to address the problem. The reason for this unpreparedness is that these platforms do not only moderate content; they moderate users, yet they still use content moderation techniques that were not built to deal with users.

The recent spate of takedowns and bans by tech platforms is a testament to the fact that, in governing their users’ behavior, platforms have to go beyond content moderation. After tech platforms banned Trump and some of his acolytes, it became clear that their content moderation techniques are not sufficient to build legitimacy and trust.

Users’ perceptions of tech platforms matter a great deal. Political leaders and others can use these platforms to affect people’s behavior and incite violence, but the way platforms deal with such behavior online is also a determining factor in how users will behave in the future.

If users trust a platform and perceive its processes as legitimate, they are more likely to accept a decision by the platform even when they disagree with it. That certainly did not happen in recent events. Instead, we saw what we might call “safety theatre.”[1] We saw top-down measures that removed harmful, violence-inciting content and people. We did not see measures through which platforms tried to respond to the aggrieved parties (those who thought it was unfair to remove their President from the platform). It was not clear how the platforms moderated the users. Using only content moderation techniques, with no clear governance structure, is like the theatrical solutions we often see at airport security: highly visible but likely ineffective.

To go beyond content moderation, platforms should build governance structures that proactively create trust and legitimacy. Governance is not just due process or a 100-page community standards document. Governance is the structure that helps build communities and that, combined with fairness, can bring trust to a platform.

Finally, it is important to look at the interrelation of different tech platforms and to consider their actions collectively, not individually. We have been debating at which layer of the Internet it is appropriate to do content moderation, but we should look at the issue more holistically. From the outside, tech platforms (located at various layers of the Internet) appeared to have a domino effect on one another: they used similar methods and processes toward the same goal of removing Trump and his supporters. Such a domino effect can threaten certain people’s presence on the Internet altogether. Actions should therefore be taken proportionally, with a fair governance structure in place that is appropriate to each platform’s layer of the Internet.


  1. Bruce Schneier wrote an essay about security theatre, which “refers to security measures that make people feel more secure without doing anything to actually improve their security.” The term “safety theatre” in this piece was inspired by that essay. https://www.schneier.com/essays/archives/2009/11/beyond_security_thea.html
