By Kalev Leetaru
Social media content moderation today operates much like fact checking: identifying problematic content long after it has gone viral and done its damage. In narrow domains like terrorism, companies have adopted blacklists of previously identified material, but they have largely struggled to proactively prevent new illegal and harmful content from being posted in the first place. As the major social platforms push aggressively toward automation, will our algorithmically moderated future save us, or merely be the first step toward 1984?
The idea of combining known-content blacklists with algorithmic identification of new material has shown promise for social media companies eager to stamp out harmful use of their platforms. Pairing the ease of implementation and robustness of blacklists with algorithms that flag new rule-violating content would allow companies like Facebook and Twitter to proactively block harmful posts before they are ever shared.
At first glance, automated moderation would seem the perfect answer to an increasingly toxic online world. Hateful and dehumanizing speech, terrorist content, human trafficking and exploitation, calls for violence and all other forms of horrific material could be blocked outright, rather than merely deleted long after they have caused irreparable harm.
The problem lies in how to define just what constitutes “unacceptable speech.”
Within the United States, the First Amendment’s guarantee of freedom of speech sprang from a world in which a citizen’s right to express opinions on socially divisive issues, especially criticism of their government or “disloyal” speech, was heavily circumscribed. Europe was cracking down on free expression at the time, and just a few years later Britain’s own “Reign of Terror” under Pitt severely restricted the right of expression within that country.
Even with the First Amendment, the early years of the nation’s history saw many attempts to curtail freedom of speech. The Sedition Act of 1798 famously banned most criticism of the US Government and was used to jail prominent journalists who dared to question those in power.
Over the decades there have been many subsequent attempts to roll back these sacrosanct protections, each short-lived. Time and again, society determined that the right to speak any thought, even one the government and society deemed harmful, such as criticism of elected officials, mattered more than the neatness and orderliness of a government prescribing acceptable topics and beliefs for the public.
The centralization of the digital world into a handful of private walled gardens not subject to First Amendment protections has rendered two centuries of societal debate moot.
For-profit companies now set the rules of acceptable speech for the entire world. Moreover, those rules are designed not for user safety but to maximize profit, amounting to a neocolonial forcible export of Silicon Valley’s interpretation of American values onto the rest of the world’s cultures.
What happens when social media companies deploy silent black box algorithms to monitor not only our public posts, but our most sensitive, intimate and private conversations amongst our friends, families and neighbors?
In a world not so distant from 1984’s dystopia, social media companies are rushing to install algorithmic moderation that would monitor our every word and delete any nonconforming views before we have the chance to share them. Worse still, Facebook recently noted the importance of being able to log every violation of its thought rules and archive a copy of the offending speech.
As censorship moves from an after-the-fact review process toward preemptive deletion, speech that social platforms disagree with will no longer enjoy even brief moments of circulation; it will be prevented from ever seeing the light of day.
As Facebook reminded us this past March, that can include silencing debate about its business practices, including calls for regulation or greater data privacy laws.
Eventually these rules won’t apply only to public posts but instead will be used to censor our private phone calls and in-person conversations.
Most troubling, however, is the fact that these speech rules are not democratically determined. There is no national vote to decide which beliefs are acceptable and which will be deleted into the memory hole or warrant permanent banishment from the digital realm into “unperson” status.
Instead, a handful of executives at private for-profit companies now wield absolute power over what societies across the globe may speak about.
As governments awaken to this immense new power of censorship, it is inevitable that social media companies will face a deluge of lawful court orders requiring them to add all manner of speech to their banned lists, especially criticism of government officials.
Most importantly, unlike today, when questionable moderation rules tend to find their way to the press through whistleblower leaks, in an algorithmic future speech will be governed by black boxes that can encode any manner of frightening rules without public inspection.
The U.S. Government could easily require Facebook to ban all criticism of the administration with a single silent court order, with only a few trusted government officials and a handful of Facebook engineers ever aware the order even exists. When such rules are encoded in software, there are far fewer opportunities for leaks or public awareness.
Even if the rule was eventually leaked, if the companies themselves control our online speech, they could simply ban all mention of the rule itself, silencing debate and preventing others from even becoming aware of the situation.
Facebook did not respond to a request for comment on how it plans to safeguard its algorithmic moderation from government intervention, while both Facebook and Twitter declined to commit to permitting external evaluation of their algorithms.
Putting this all together, algorithmic moderation might at first glance seem like the ultimate solution to all of the online world’s evils, but in reality we must ask whether it is merely the first step towards 1984.