Some experts say that technology companies should use the same technology already deployed against child pornography and copyright infringement in order to stop the spread of this type of video.
Technologists say that digital hashing, which has existed for more than a decade, could be put to better use to prevent re-uploads of the video. Hashing would not have been able to catch the original live video of the attacks, but it could prevent the distribution of newly uploaded copies.
"The video is still online," said David Ibsen, CEO of the Counter-Extremism Project, an organization that maintains a hash database of terrorist videos. "The technology to prevent this is available, and social media companies have decided not to invest in adopting it."
YouTube told CNN Business that it uses hash technology to prevent re-uploads of the already-removed New Zealand massacre video, but not necessarily for clips that show only part of the original. Instead, it relies on "automated tagging systems and user flags" to stop the spread of these clips.
In a statement, Facebook said: "We add every video we want to find to an internal database [hashing], which allows us to detect and automatically remove copies of videos after re-uploading."
The company said it had removed the video from Facebook Live and hashed it so that other videos that are visually similar are automatically removed from Facebook and Instagram. Facebook has not commented on why some parts of the original video were still live hours later.
According to Hany Farid, a computer science professor at Dartmouth College who has used hashing to combat child pornography, if Facebook uses "robust" hashing – a method that can detect altered uploads – it should catch the majority of re-uploads. Any variations that slip through the cracks can then be hashed and added to the same database to prevent further uploads.
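The "robust" hashing Farid describes can be illustrated with a perceptual hash: unlike a cryptographic hash, small edits to a frame (brightness shifts, recompression) flip only a few bits, so altered copies can still be matched by bit distance. The sketch below is illustrative only, assuming 8x8 grayscale frames as flat lists of 64 pixel values; it is not any platform's actual system.

```python
# A minimal sketch of "robust" (perceptual) hashing. Assumes each keyframe
# is an 8x8 grayscale image, given as a flat list of 64 pixel values (0-255).

def average_hash(pixels):
    """64-bit perceptual hash: one bit per pixel, set where it exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(h1, h2, threshold=5):
    """Treat two hashes within `threshold` bits as the same content."""
    return hamming(h1, h2) <= threshold

original = [10 * (i % 16) for i in range(64)]     # stand-in keyframe
tampered = [min(255, p + 3) for p in original]    # slight brightness shift

h_orig = average_hash(original)
h_tamp = average_hash(tampered)
print(is_match(h_orig, h_tamp))  # the small edit survives the hash: True
```

Because matching is by distance rather than exact equality, a re-encoded or lightly edited copy still lands within the threshold, which is what lets a hash database catch variations of a banned video.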
"Hashing is extremely effective at preventing the uploading of content that is illegal in any context, such as child sexual abuse," a YouTube spokesperson said in a statement. "Context is critical for major news events, and documented uploads may be allowed on YouTube."
The company added that hashing could tag videos that use original material in the right context, such as a news video.
How does hashing work?
Video hashing works by breaking a video into keyframes and assigning each one a unique alphanumeric signature, or hash. These hashes are collected in a central database, and every video or photo uploaded to the platform is compared against that record.
The system relies on a database of known images; it does not use artificial intelligence to detect what is in an image – it only identifies matches between images and videos.
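The pipeline described above – hash the keyframes of a banned video, store them centrally, and check every upload against that store – can be sketched as follows. The names (`HashDatabase`, `check_upload`, the sample hash values) are hypothetical, and real systems would use perceptual rather than exact matching.

```python
# Minimal sketch of a central hash database, assuming keyframes have already
# been extracted and reduced to integer hashes. Pure lookup, no AI: the
# system never "understands" the content, it only recognizes known hashes.

class HashDatabase:
    """Central store of keyframe hashes from known banned videos."""

    def __init__(self):
        self.banned = set()

    def register(self, frame_hashes):
        """Add every keyframe hash of a banned video."""
        self.banned.update(frame_hashes)

    def check_upload(self, frame_hashes, min_hits=3):
        """Flag an upload if enough of its keyframes match known hashes."""
        hits = sum(1 for h in frame_hashes if h in self.banned)
        return hits >= min_hits

db = HashDatabase()
db.register([0xA1, 0xB2, 0xC3, 0xD4, 0xE5])  # hashes of a removed video

reupload = [0xA1, 0xB2, 0xC3, 0x99]   # mostly the same keyframes
unrelated = [0x01, 0x02, 0x03, 0x04]  # no overlap with the database

print(db.check_upload(reupload))   # True: three keyframes match
print(db.check_upload(unrelated))  # False: no keyframes match
```

Because the check is a set lookup per keyframe, it stays cheap even at the scale Farid describes, which is why platforms can run it against every upload.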
"Hashing has the advantage that it works at scale," said Farid.
"It's extremely fast," he said. "You can do billions of uploads per day."
According to Farid, every image that users have uploaded to Facebook over the last 10 years has been scanned against a database of known child sexual abuse imagery.
Platforms also use hashing to monitor videos for copyright infringement. If you try to upload a copy of the Avengers movie on YouTube, you will not get very far thanks to hashing.
Tech companies have been reluctant to implement it
Although hashing has been used by tech companies for many years, Facebook and Google maintain that the future of content moderation lies primarily in artificial intelligence.
"If we fast-forward five or ten years, I think we'll have more AI technology that can do that in more areas," said Mark Zuckerberg in his April 2018 testimony before the Senate Commerce and Judiciary committees.
But for Farid, that answer is not good enough: "What have you been doing for the last five or ten years? And what do we do in the meantime?"
"It has been shown to work on child abuse imagery, on copyright infringement and now on extremism, so there are no excuses," said Farid. "You cannot pretend you don't have the technology; the decision not to use it is a matter of will and policy – not technology."