Muslim Group Calls Out Facebook's Failure To Combat Hate After Christchurch
Facebook has failed to fix "extreme" Islamophobia after the Christchurch attack, a Muslim group claims, calling for the social media giant to work more closely with community groups to stamp out hate.
It comes as Facebook says it has learned from its mistakes over the mosque massacre one year ago, amid further calls for tech companies to do more, lest the internet be used for "extreme hate" again.
"I don't think Facebook has done enough. They have changed their policies but the implementation of those policies seems very inconsistent and unreliable," Rita Jabri-Maxwell, a lawyer with the Australian Muslim Advocacy Network (AMAN), told 10 daily.
"Facebook’s hate speech policy is not picking up some of the main far-right narratives."
It's nearly 12 months on from the New Zealand attacks, in which Australian man Brenton Tarrant is accused of gunning down 51 people in two separate mosque shootings, live-streaming his rampage on Facebook. The platform promised change -- but AMAN says the social media network has fallen short of its commitments to stamp out extreme hatred online.
AMAN has written a letter to Facebook Australia, shared with 10 daily, in which it claims the platform's reporting standards allow vile anti-Islam slurs and violent threats to slip through -- sending "a signal that such endangering speech is socially acceptable."
"The ambiguity of Facebook’s community standards is contributing to a climate of real insecurity for Australian Muslims," the letter said.
"There is evidence that Facebook is allowing extreme and bigotry-based views to be normalised... and emboldening ordinary people to commit public acts of hatred against Australian Muslims."
10 daily reported last week Instagram, owned by Facebook, was a new battleground for racist and far-right content.
AMAN wants to work with Facebook, and hopes to meet with the company to discuss how best to address hate and racism online.
Jabri-Maxwell -- who also works with the Islamophobia Register, logging reports of anti-Muslim hate -- said AMAN's research had found posts referring to Muslims as "bloody germs of humanity" or "parasites" were ruled by reporting tools to not breach Facebook standards.
"I haven't seen any changes [since Christchurch]. People are still posting outrageous violent things, we still have incident reports coming into the Islamophobia Register from innocent people going about their lives," she said.
Blocking Australians from seeing abhorrent videos of mass violence events is just "baby steps" in dealing with online extremism, experts have warned, with the government urged to do more to prevent such incidents from occurring.
Tarrant live-streamed his shooting rampage for 17 minutes on Facebook. The video was not taken down until half an hour after it began but was copied and reposted over one million times around the internet.
Following the massacre, Facebook committed to reviewing its live-stream policies, as well as joining with a number of other tech companies under the 'Christchurch Call To Action', to take stronger action against far-right hate groups and racism.
"It is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence," a May 2019 statement from Facebook, Microsoft, Twitter, Google and Amazon read.
The Christchurch Call action plan committed the tech giants to stronger rules around hate speech, better tools to report and remove extreme content, and working with anti-extremist groups to "challenge hate and promote pluralism and respect online."
In a statement to 10 daily, Facebook Australia said it had strengthened reporting technology, expanded its deradicalisation initiatives, and was working closely with other companies "to respond quickly to mass violence."
In response to the letter, Facebook told 10 daily it will continue "to study trends in organized hate and hate speech with partners around the world and have banned over 200 white supremacist organizations."
"We know that some people will find new ways to communicate and spread harm both online and offline and we’re absolutely committed to doing everything we can to advance our work and share our progress,” a Facebook company spokesperson said.
10 daily understands four of the five main Facebook pages identified by AMAN as hosting the offensive content have since been removed.
Dr Andre Oboler, CEO of the Online Hate Prevention Institute, said Facebook made "massive" improvements in dealing with live video and abhorrent content, as well as deleting extremist accounts.
However, he criticised Facebook for not engaging with outside researchers and civil society, to keep pace with how extremist groups evolve and change.
"The Christchurch Call is a nice sentiment but I'm not sure how much it will change reality," Oboler said.
"The involvement of civil society is the missing piece. It's not a matter of set and forget, it can never be. You need civil society at the table to keep up with changes."
Julie Inman Grant, Australia's eSafety Commissioner, said she wanted big tech companies to do more.
"The Christchurch attacks shifted the world on its axis, and sadly, it is likely only a matter of time until we see the internet once again used as a tool of extreme hate," she told 10 daily.
"We do think that large platforms can and should be doing more to address the potential for their technologies to be misused by terrorists and violent extremists."
She called for governments, including Australia, to implement stronger protections.
"Effective and targeted regulatory interventions, backed by strong sanctions, are critical to combatting the worst online harms perpetrated by those whose only language is violence," Inman Grant said.
In a statement, a Facebook Australia spokesperson told 10 daily the company had made major changes in the last year.
"We stand with New Zealand as we remember the people and families affected by the tragedy that took place on March 15. The New Zealand government has shown global leadership in bringing governments, industry and civil society together to combat hate and violent extremism," they said.
"Since March 15 and the Christchurch Call, we tightened our policies, strengthened our detection technology, expanded initiatives to redirect people from violent extremism, and improved our ability to work with other companies to respond quickly to mass violence.”
Facebook has also placed stricter rules on who can use Facebook Live, redirects people to anti-radicalisation groups if they search for extremist content, and says it is continually updating its standards and rules.
Elsewhere, Twitter earlier this week announced a partnership with New Zealand's University of Otago to study how online conversations can cause "digitally amplified polarisation", and how to battle "toxic divisions".
Twitter has also rolled out better tools to detect and remove extreme content, introduced a 'zero-tolerance' policy of removing new accounts created by people who have already been banned, and is proactively removing terrorist content.
"We've been resolutely focused on the ways Twitter can tackle extremism while promoting dialogue and inter-community understanding," said Kara Hinesley, director of public policy for Twitter New Zealand and Australia.
Jabri-Maxwell said social media companies had talked a big game, but needed to do more to actually enforce their rules.
"We want a safer country for everyone. we have to defend our community bonds because that's what keeps us safe. Social media companies need to uphold Australian values of tolerance, compassion and the fair go," she said.
"People can and should debate religion, immigration, globalisation, all these topics without spreading extremist narratives that incite hatred."