Facebook says the footage was successfully reuploaded to its platform 300,000 times in the first 24 hours after the attack.

No one reported the live-stream of the New Zealand terror attacks until the video had ended, Facebook has claimed.

The social network said fewer than 200 people watched the footage during the live broadcast – adding that it was seen about 4,000 times in total before it was taken down.

According to the tech giant, the video was first reported to moderators 29 minutes after the stream began, and 12 minutes after the live feed ended.

Facebook’s revelations have raised questions about the company’s policy of community moderation.

Image: Brenton Tarrant, charged with murder in relation to the mosque attacks, in the dock during his appearance at the Christchurch District Court, New Zealand, on 16 March 2019

The delay in taking down the footage meant that it had already begun to circulate on other online platforms.

Before Facebook was alerted to the footage, the company said “a user on 8chan posted a link to a copy of the video on a file-sharing site”.

The company added that it had removed 1.5 million videos of the attack from its platform in the 24 hours after the deadly shootings – with 1.2 million of them blocked at the point of upload.

This means the violent footage was successfully reuploaded to the platform at least 300,000 times.

In a blog post by the company’s deputy general counsel, Chris Sonderby, the social media giant announced that it was attempting to use a range of technologies to detect when the video or similar videos were being uploaded.

However, on video platform YouTube, clips celebrating the New Zealand mosque shootings have easily evaded the platform's moderation efforts, despite a general clampdown across social media platforms.

Yesterday, copycat videos made in support of the killings appeared on the platform, including one which recreated the attack in the children's game Minecraft, alongside others that spliced the attacker's comments into other videos.

The technology to automate the detection of particular files can struggle to identify when video and image files have been modified in even a minor fashion.
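As a minimal illustration of why exact-match detection is fragile (a simplified sketch — platforms in practice use more robust perceptual-hashing techniques, not plain cryptographic hashes), altering even a single byte of a file produces a completely different fingerprint:

```python
import hashlib

# Hypothetical file contents standing in for an uploaded video
original = b"example video bytes"
modified = b"example video bytes."  # one byte appended, e.g. by re-encoding

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

# The two digests share no meaningful similarity, so a blocklist of
# exact hashes misses the trivially modified copy
print(h1 == h2)  # False
```

This is why re-encoded, cropped or watermarked copies slip past simple fingerprint matching, and why platforms invest in perceptual hashes that tolerate small modifications.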

The development of technology to address the spread of terrorist propaganda is something which online platforms have been investing in for years, but Home Secretary Sajid Javid has said social media companies “really need to do more to stop violent extremism being promoted on [their] platforms”.

8chan, also called Infinitechan or Infinitychan, is an American imageboard website composed of user-created boards.

Experts have warned that smaller and fringe platforms are being used to coordinate the spread of this material on the mainstream sites.

Jacob Davey, a researcher with the Institute for Strategic Dialogue, noted that the copy of the video shared on 8chan was significant to its spread across the web. He explained: “8chan is a platform which is intimately connected to the global alt-right movement, as well as being one of the engine rooms of internet culture.

“By broadcasting to 8chan the attacker was ensuring that he was reaching an audience with whom his messaging would resonate the most.

“This shows a deep and dark awareness of the powerful subculture that the extreme right has built, which revolves around platforms such as 8chan.”

Staff