A Powerful New Filtering System Could Help Silence Online Abuse
Harassment in the digital age has become an issue both omnipresent and difficult to address, given the anonymity of the Internet and the relative ease with which we can ignore those with whom we don’t wish to communicate. Nevertheless, it remains a toxic and dangerous reality for millions who encounter racism, sexism, and bullying of all kinds on social media and elsewhere on the web, and many are arguing for more attention to be paid to the issue. The case of Leslie Jones – whose maltreatment by trolls on social media became so horrendous that she walked away from her online presence entirely – has pushed into the mainstream the conversation about what developers of social-media platforms should do to better protect their users, and sites like Nextdoor and Airbnb have begun to take a more serious look at online discrimination.
Strengthening the spotlight on the issue is the recent presidential election, with President-elect Donald Trump famously taking to Twitter to make questionable statements, and millions experiencing harassment from trolls emboldened by his win. A recent report from Wired explains that Twitter will begin implementing stronger filtering systems to reduce the frequency of offensive tweets, a measure that is being welcomed by many. Specifically, “users will be able to filter out certain keywords, phrases, user names, and hashtags in their mentions,” effectively silencing tweets that could trigger a negative reaction in the recipient. For example, people may choose to mute the word “faggot,” so as to avoid seeing malicious tweets from trolls arming themselves with homophobic vitriol. Entire threads will also be mutable, so Twitter users will have the opportunity to remove themselves from whole conversations, should they choose to do so.
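The basic mechanics of such a feature are simple to picture. The sketch below is not Twitter’s actual implementation – the company hasn’t published one – but a minimal model of the idea: the user keeps a personal list of muted terms (keywords, hashtags, usernames), and any incoming mention containing one of them is hidden before it ever reaches the timeline. The terms and mentions here are hypothetical placeholders.

```python
# Minimal illustration of a keyword/hashtag/username mute filter.
# NOT Twitter's real implementation -- just a model of the concept.

# Hypothetical user-defined mute list (keywords, hashtags, usernames).
MUTED_TERMS = {"@some_troll", "#harassmenttag", "slur_example"}

def is_muted(tweet_text: str, muted_terms: set) -> bool:
    """Return True if the tweet contains any muted keyword, hashtag, or username."""
    words = tweet_text.lower().split()
    return any(term.lower() in words for term in muted_terms)

def filter_mentions(mentions: list, muted_terms: set) -> list:
    """Keep only the mentions that contain no muted terms."""
    return [t for t in mentions if not is_muted(t, muted_terms)]

mentions = [
    "loved your set last night!",
    "@some_troll says you should quit",
    "great interview, thanks for sharing",
]
print(filter_mentions(mentions, MUTED_TERMS))
# -> ['loved your set last night!', 'great interview, thanks for sharing']
```

The key point the sketch captures is that filtering happens on the recipient’s side, before display – the abusive tweet still exists, but the target never has to see it.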
As Wired explains, this development is a direct response to Twitter’s past failures in effectively curbing abuse on its platform. “Up until now, dealing with harassment has been a largely reactive process. You can report abuse, but Twitter offered few tools to avoid seeing harassing tweets in the first place,” writes Klint Finley. A whack-a-mole effect made reporting abuse on the old Twitter largely ineffective; “in the instance of coordinated harassment campaigns like the one faced by Jones, new accounts constantly spring up to take the place of those that Twitter has blocked.” But the newest filtering options do give users more control over what they do and don’t see in their feeds, with “gendered or ethnic slurs” far more easily muted than in the past. “Instead of having to block or mute each account that sends you abuse after the fact, you could preemptively block tweets that contain keywords or hashtags frequently used by harassers,” Finley explains.
Is that enough? “The question is whether even Twitter can really stem an onslaught that has grown to global proportions,” Finley writes in closing.