After a Tragedy Was Livestreamed, Defining Hate Speech Is Getting Tricky (Audio)
On July 6, Philando Castile was shot and killed by a police officer in Falcon Heights, Minnesota. His girlfriend, Diamond Reynolds, was in the front passenger seat, and in what is being recognized as a formidable act of composure and foresight, she took out her phone and livestreamed the immediate aftermath to her social-media followers. That act has drawn fresh attention to the intersection of online communication and censorship, with some calling for stricter guidelines on what kind of content should be permissible to livestream while others champion the technology, saying it allows unadulterated glimpses of important events in real time.
Reynolds’ act and the reactions of millions across platforms like Twitter, Facebook, and Instagram have also brought the notion of hate speech into sharp focus, and a recent report from NPR examines the many-pronged nature of the issue. Facebook in particular is becoming a central figure in shaping today’s digital landscape, and as Aarti Shahani reports, the site has seen a considerable increase in the amount of content being flagged as inappropriate or hateful in the days following the shootings of Castile, Alton Sterling, and others. “[W]ith users calling out each other’s posts as racist, violent and offensive,” Shahani writes, “[Facebook] is having a very hard time deciding who is right or how to define hate speech.”
Despite Facebook CEO Mark Zuckerberg’s public statement addressing Reynolds’ use of his platform, in which he mourned the loss of life, expressed how painful the images were to see, and seemingly signaled that the site stands in solidarity with the Black Lives Matter movement, employees of the juggernaut company are dealing with a wave of fallout that is making it difficult to present a united front. According to Shahani, “Employees inside Facebook tell NPR the company is struggling internally” to respond to countless reports of inflammatory posts by users of the site. “The policy team has been working round-the-clock to respond to users,” she reports, before sharing a disclaimer that Facebook “pays NPR and other news organizations to produce live videos for its site.”
In concrete terms, this fallout means that the “pages” of certain entities and people are being unpublished by Facebook due to repeated complaints. However, some of those takedowns appear to be doing more harm than intended, and some users are taking a vocal stance against what they view as unwarranted censorship. Robert Jones, a Black Facebook user and blogger whose “Son of Baldwin” page was unpublished by Facebook, tells NPR that he received a notice alerting him that he had been reported for “repeated infractions” and that his page was deactivated for 30 days. These actions sprang from a post Jones published following the murders of five police officers in Dallas, Texas, which read in part:
“Dear Black Folks: Do NOT feel collectively responsible when an assailant is Black. White folks do NOT feel collectively responsible when an assailant is White. If White people get to be individuals and presumed collectively innocent, then Black people get to be individuals and presumed collectively innocent, too.”
Jones’ page has since been reinstated, but his story is emblematic of a broader trend on Facebook. “Another controversial Facebook post included a graphic drawing of a person in a black hood slitting the throat of a white police officer,” writes Shahani. It was reported by many Facebook users, one of whom tells NPR, “If it was the other way around, if I was posting a picture of me cutting a black guy’s throat, you don’t think they would throw a fit?” Facebook initially responded to the image being flagged as hateful by saying that “[t]he image does not violate community standards,” but it was eventually taken down. The same Facebook user tells NPR, “The next day, five cops got killed. Then it got taken down.”
These two experiences are just that – the experiences of two individuals. However, there is little doubt that millions of Facebook users have seen something they found offensive being shared on the site. In an increasingly globalized world, it is becoming ever more difficult to remain isolated from opposing viewpoints, and while most would argue that is a positive thing, we are likely to see much more gray area as the Social Media Age continues to flourish. Some of the intricacies of what is and is not considered hate speech on Facebook come down to context, and even then subjectivity can cloud things further. Facebook’s head of policy, Monika Bickert, told NPR “[w]e look at how a specific person shared a specific post or word or photo to Facebook” to discern whether or not something is defined as hate speech. Shahani explains Bickert’s statement by writing “[m]eaning, if one person shared the cop killer cartoon to condemn it, that’s OK. If another person shared it as a call to arms, it’s not.”
According to Facebook’s community standards, the site actively removes hate speech that targets people based on race, ethnicity, national origin, religious affiliation, sexual orientation, gender, gender identity, serious disabilities, and diseases. However, most of the onus for getting such hate speech taken down lies with Facebook’s users, not its employees. “As with all of our standards, we rely on our community to report this content to us,” Facebook explains. The site also provides users with instructions on how to report hate speech, which readers can view here. Back in January, Facebook launched an initiative called Online Civil Courage with the goal of “combatting online extremism and hate speech.”