06/30/2020 | Press release | Distributed by Public on 06/30/2020 14:01
WASHINGTON, D.C. - U.S. Senators Bob Menendez (D-N.J.), Mark R. Warner (D-Va.) and Mazie K. Hirono (D-Hawaii) today blasted Facebook for its failure to curb white supremacist groups and for its role in providing such groups with the organizational infrastructure and reach they need to expand on its platform. In a letter to CEO Mark Zuckerberg, the senators criticized Facebook for being unable or unwilling to enforce its own Community Standards and purge white supremacist and other violent extremist content from the site.
Citing reports by the Tech Transparency Project and numerous other investigative reports, the senators highlighted ways in which right-wing extremist groups have used Facebook as a recruitment and organizational tool. They also underscored Facebook's own contributions to the public safety problem, including auto-generating pages for white supremacist organizations, promoting white supremacist pages and even directing users who visit these pages to other extremist or far-right content.
Sen. Menendez and his colleagues cited a number of instances in which online radicalization facilitated by Facebook led to real-life consequences, such as when three members of a 'boogaloo' group on Facebook plotted to bring Molotov cocktails to a Black Lives Matter protest, or when Air Force Staff Sergeant Steven Carrillo used Facebook to discuss committing violent acts and to meet the man who later drove his getaway van after Carrillo shot and killed a federal security officer.
The senators also called on Zuckerberg to answer a series of questions regarding Facebook's policies and procedures against hate speech, violence, white supremacy and the amplification of extremist content.
Dear Mr. Zuckerberg:
We write to express our serious concerns about Facebook's lack of action to prevent white supremacist groups from using the platform as a recruitment and organizational tool. The United States is going through a long-overdue examination of the systemic racism prevalent in our society. Americans of all races, ages, and backgrounds have bravely taken to the streets to demand equal justice for all. While Facebook has attempted to publicly align itself with this movement, its failure to address the hate spreading on its platform reveals significant gaps between Facebook's professed commitment to racial justice and the company's actions and business interests.
On April 22, a full month before Americans started recent protests for racial justice, the Tech Transparency Project issued a report detailing the ways right-wing extremist groups were using Facebook to plan a militant uprising in the United States in response to stay-at-home orders issued to cope with the coronavirus pandemic. The organization's research uncovered '125 Facebook groups devoted to the 'boogaloo,'' a term with ties to white supremacist movements used to describe a coming civil war. Many of the groups' posts were explicit in their calls for violence, including discussions of 'tactical strategies, combat medicine, and various types of weapons, including how to develop explosives and the merits of using flame throwers.' The groups experienced unchecked growth in the months leading up to the report and remained on Facebook at least as of early June, despite Facebook's prior claims that it was 'studying trends around [boogaloo] and related terms on Facebook and Instagram' and that it 'do[es]n't allow speech used to incite hate or violence, and will remove any content that violates our policies.'
A subsequent report, issued on May 21, provided further detail on the extent of Facebook's white supremacist problem, and on the company's lack of attention to this public safety threat. The Tech Transparency Project found that 113 of the 221 white supremacist organizations designated as hate groups by the Southern Poverty Law Center and the Anti-Defamation League (a staggering 51%) have a presence on Facebook. Many of the organizations' pages were actually auto-generated by Facebook after a Facebook user identified a white supremacist or neo-Nazi organization as his or her employer. Perhaps more troubling, Facebook actively promoted these and other white supremacist pages. According to the Tech Transparency Project, 'Facebook's 'Related Pages' feature often directed users visiting white supremacist Pages to other extremist or far-right content, raising concerns that the platform is contributing to radicalization.'
The Tech Transparency Project report echoes similar findings by the Southern Poverty Law Center (which has also tracked how these groups spread dangerous misinformation about COVID-19 on Facebook), along with several investigative news reports. One investigative report even concluded that Facebook served as a key tool for right-wing militia groups seeking to recruit police officers to their movements. Facebook is hardly a passive actor in this context: a recent exposé by The Wall Street Journal revealed that Facebook's own researchers had found that '64% of all extremist group joins are due to our recommendation tools.' The report concluded that Facebook senior executives shut down efforts to reform the platform's tendency to amplify hyperpolarized and extremist content after Vice President of Global Public Policy Joel Kaplan deemed the efforts 'paternalistic.'
This evidence stands in marked contrast to Facebook's professed commitment to combat extremism by redirecting users who search for terms associated with white supremacy or hate groups to the page for 'Life After Hate,' an organization that promotes tolerance. The Tech Transparency Project found that Facebook directed users to the 'Life After Hate' page in only six percent of the searches for white supremacist organizations.
Unfortunately, the online radicalization facilitated by Facebook can have deadly consequences. On June 16, federal authorities charged Air Force Staff Sergeant Steven Carrillo with the shooting death of a federal security officer outside a courthouse in Oakland. Authorities also charged the driver of the getaway van, Robert Alvin Justus. Justus and Carrillo had met on Facebook. According to the criminal complaint against Carrillo, a search of Carrillo's Facebook account revealed not only communications with Justus, but instances in which Carrillo expressed his intention to commit violent acts.
In another instance in early June, federal authorities arrested three men on charges that they planned to bring Molotov cocktails to a Black Lives Matter protest. All three were members of a boogaloo group on Facebook. According to the Tech Transparency Project, one of the men arrested was a member of two private boogaloo groups identified in the organization's April 22 report. Following reporting in the Huffington Post and other media outlets, a Facebook representative told the Huffington Post on April 23, '[w]e've removed groups and Pages who've used [boogaloo] and related terms for violating our policies.' Yet, according to the complaint, the three men used a different online group, a Nevada boogaloo Facebook group, to organize a planned attack on the march.
The prevalence of white supremacist and other extremist content on Facebook, and the ways in which these groups have been able to use the platform as organizing infrastructure, is unacceptable. Facebook's Community Standards expressly state: 'We do not allow hate speech on Facebook.' In a March 27, 2019 post, Facebook made clear that this prohibition 'has always included white supremacy.' At that same time, Facebook expanded its prohibition to include 'praise, support and representation of white nationalism and white separatism.' And the Community Standards purport to prohibit 'organizations and individuals that proclaim a violent mission,' including 'organized hate' groups.
In light of these clear policies, and others against 'Violence and Incitement' and 'Dangerous Individuals and Organizations,' we are concerned that Facebook is unable (or unwilling) to enforce its own Community Standards and rid itself of white supremacist and other extremist content.
We request that you answer the following questions by July 10, 2020:
1. Does Facebook affirm its policy against hate speech and will it seriously enforce this policy?
2. What procedures has Facebook put in place to identify and remove hate speech from its platform? To what degree do these procedures differ with respect to public Facebook pages and private groups?
3. Does Facebook affirm its policy against violence and incitement and will it seriously enforce this policy?
4. What procedures has Facebook put in place to identify and remove violence and incitement from its platform? To what degree do these procedures differ with respect to public Facebook pages and private groups?
5. Does Facebook affirm its commitment to ban 'praise, support and representation of white nationalism and white separatism on Facebook and Instagram' as detailed in the company's March 27, 2019 post, and will it seriously enforce this commitment?
6. What steps has Facebook implemented since announcing this policy to remove 'praise, support and representation of white nationalism and white separatism on Facebook and Instagram'?
7. Please provide our offices with any Facebook internal research concerning the platform's amplification of extremist groups.
8. How often are you personally briefed on the status of domestic extremist and white supremacist groups on Facebook and the platform's efforts to address these groups?
9. Who is the senior-most Facebook official responsible for addressing white supremacist groups' activity on Facebook, and to which Facebook executive does this employee directly report?
10. What role did Vice President of Global Public Policy Joel Kaplan play in Facebook's decision to shut down and de-prioritize internal efforts to contain extremist and hyperpolarizing activity on Facebook?
11. What role did Mr. Kaplan play in the participation of the Daily Caller, an outlet with longstanding ties to white nationalist groups, in Facebook's fact-checking program?
12. When violent extremist groups actively and openly use a platform's tools to coordinate violence, should federal law continue to protect the platform from civil liability for its role in facilitating that activity?
Thank you in advance for your attention to this critical matter.