French Muslim group sues Facebook, YouTube over NZ massacre video

A white supremacist who killed 50 people in attacks on two mosques in Christchurch on March 15 live-streamed one of the attacks on Facebook.
26 March 2019
A white supremacist killed 50 people in attacks on March 15 [AFP]
A French Muslim group said on Monday it was suing Facebook and YouTube for allowing a live broadcast of the grisly video of a massacre at a New Zealand mosque, reviving the debate over how social media amplifies such attacks.

The French Council of the Muslim Faith (CFCM) was taking the legal action against the French branches of the two tech giants for "broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor," according to the complaint filed with prosecutors, a copy of which was seen by AFP.

In France such acts can be punished by three years' imprisonment and a fine of 75,000 euros ($85,000).

A white supremacist who killed 50 people in attacks on two mosques in Christchurch on March 15 live-streamed one of the attacks on Facebook, from where it was uploaded to other video sites, including YouTube.

Facebook France told AFP by email that it would study the suit, adding that "acts of terror and hate speech have no place on Facebook, and our thoughts go out to the victims' families and the entire community affected by this tragedy.

"We have taken many measures to make this video disappear from our platform, we are cooperating with authorities and our teams remain totally mobilised," it added.



17 minutes

The 17-minute livestream was shared extensively across a variety of internet platforms and re-uploaded almost as quickly as it could be taken down.

The CFCM, which represents several million Muslims in France, said it took Facebook 29 minutes from the start of the live broadcast to take the video down.

Facebook said it acted "quickly" to clamp down on the offensive images, and that it removed 1.5 million videos of the shooting in the first 24 hours after it happened, with much of this blocked by software at the moment of upload.

Major internet platforms have pledged to crack down on the sharing of violent images and other inappropriate content through automated systems and human monitoring, but critics say these measures are not working.

Internet platforms have cooperated to develop technology that filters child pornography, but have found it much more difficult to join forces to block violent content.

"AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video," Guy Rose, vice-president of Facebook said on Wednesday.

"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.

"However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare," Rosen said.


'Going viral'

"This was a tragedy that was almost designed for the purpose of going viral," Neal Mohan, YouTube's chief product officer told the Washington Post.

"We've made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done," he added.

In New Zealand, where Prime Minister Jacinda Ardern has called for a global response to the dangers of social media, a 44-year-old man has been charged with sharing the livestream video shot by the accused gunman, 28-year-old Australian Brenton Tarrant, during his killing spree.

In the United States, a congressional panel last week called on top executives from Facebook and YouTube, as well as Microsoft and Twitter, to explain the online proliferation of the video.

The House Committee on Homeland Security said it was "critically important" to filter the kind of violent images seen in the video.
