Some experts say that technology companies should use the tools they already deploy against child pornography and copyright infringement to stop the spread of this type of video.

The 17-minute livestream video, which has not been verified by CNN, appears to have been filmed by one of the shooters as he entered a mosque and opened fire.
Facebook (FB) says it shut down the livestream "quickly," but copies were still being uploaded to the site hours later. Twitter suspended the original account and worked to remove other versions from its platform. YouTube said it uses "technology and people" to remove content that violates its policies.

Technologists say that digital hashing, which has existed for more than a decade, could be better used to prevent re-uploading video. Hashing would not have been able to catch the original live video of the attacks, but it could prevent the distribution of newly uploaded copies.


"The video is still online," said David Ibsen, CEO of the Counter Extremism Project, an organization that maintains a hash database of terrorist videos. "The technology to prevent this is available, and social media companies have simply decided not to invest in it."

YouTube told CNN Business that it uses hashing technology to prevent re-uploads of the already-removed New Zealand massacre videos, but not necessarily for clips that show only part of the original footage. For those, it relies on "automated flagging systems and user flags" to stop the spread.

Twitter (TWTR) declined to discuss its approach to hashing.

In a statement, Facebook said: "We add each video we find to an internal database [using hashing technology], which enables us to detect and automatically remove copies of the videos when uploaded again."

The company said it had removed the Facebook Live video and hashed it so that visually similar videos are automatically detected and removed from Facebook and Instagram. Facebook did not comment on why parts of the original video were still live hours later.

According to Hany Farid, a computer science professor at Dartmouth College who has used hashing to combat child pornography, if Facebook uses "robust" hashing – a method that can detect alterations in re-uploads – it should catch the majority of copies. Any variants that slip through the cracks can then be hashed and added to the same database to prevent further uploads.
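The "robust" hashing Farid describes is typically a perceptual hash: unlike a cryptographic hash, it changes only slightly when an image is slightly altered, so near-duplicates can be caught by comparing hashes with a small distance threshold. The sketch below uses a toy 8x8 average-hash; the sample "frames" and the threshold are illustrative assumptions, not any platform's actual system.

```python
# Toy perceptual ("robust") hash: an 8x8 average-hash. Each bit records
# whether a pixel is brighter than the image's mean, so small changes in
# brightness barely move the hash.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit integer hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# A synthetic grayscale gradient standing in for a video keyframe.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# A re-encoded copy: the same picture, slightly brightened.
tweaked = [[min(255, v + 3) for v in row] for row in original]

h_orig, h_tweak = average_hash(original), average_hash(tweaked)
# A small Hamming distance means "visually the same frame" -> block upload.
print(hamming(h_orig, h_tweak) <= 5)
```

An exact cryptographic hash would treat the brightened copy as a completely different file; the perceptual hash keeps the two within a few bits of each other, which is why variants can still be matched.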

Social platforms such as Facebook are increasingly relying on artificial intelligence to identify violent content. However, the process may be unreliable because of many factors, such as the amount of content uploaded daily and AI's inability to understand subtleties, such as the context in which an event occurs.

Facebook, Google and Twitter currently use hashing to combat illegal material such as child pornography, copyright infringement and videos that violate their terms of service (such as extremist content). According to a YouTube spokesperson, using hash technology on videos that may appear in a legitimate context, such as a news clip, could place too great a burden on human content moderators.

"Hashing is extremely effective at blocking uploads of content that is illegal in any context, such as child sexual abuse," the spokesperson said in a statement. "For major news events, context is critical, and documentary uploads may be allowed on YouTube."

The company added that hashing could mistakenly flag videos that use the original material in a legitimate context, such as a news report.

How does hashing work?

Video hashing works by breaking a video into keyframes and assigning each one a unique alphanumeric signature, or hash. These hashes are stored in a central database, and every video or photo uploaded to the platform is compared against that record.

The system requires a database of known images and does not use artificial intelligence to detect what is in an image – it only identifies matches between images or videos.
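The pipeline described above can be sketched in a few lines, assuming a video has already been split into keyframes. The frame data and function names are hypothetical, and SHA-256 stands in for the perceptual hashes real systems use; the point is the matching logic, which involves no AI at all.

```python
# Minimal sketch of hash-database matching: sign each keyframe of a banned
# video, store the signatures centrally, and check every new upload's
# keyframes against that store.
import hashlib

def keyframe_signature(frame_bytes):
    # Assign the keyframe a unique alphanumeric signature (hash).
    return hashlib.sha256(frame_bytes).hexdigest()

known_hashes = set()  # the central database of banned content

def register_banned_video(keyframes):
    for frame in keyframes:
        known_hashes.add(keyframe_signature(frame))

def is_reupload(keyframes, min_matches=1):
    # No content understanding here: just signature lookups.
    matches = sum(keyframe_signature(f) in known_hashes for f in keyframes)
    return matches >= min_matches

banned = [b"frame-a", b"frame-b", b"frame-c"]
register_banned_video(banned)
print(is_reupload([b"frame-b", b"frame-x"]))  # shares a banned keyframe
print(is_reupload([b"frame-y", b"frame-z"]))  # no match in the database
```

Because the check is a set lookup per keyframe, it stays fast no matter how many banned videos have been registered – which is what makes the approach viable at upload volumes of billions per day.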

"Hashing has the advantage that it works at scale," said Farid.

In 2008, Farid worked with Microsoft to develop PhotoDNA, a system that identifies child pornography quickly and at scale. PhotoDNA is now used by most major technology platforms.

"It's extremely fast," he said. "You can do billions of uploads per day."

According to Farid, every image that users have uploaded to Facebook over the last 10 years has been scanned against a well-known child sexual abuse database.

Platforms also use hashing to monitor videos for copyright infringement. If you try to upload a copy of an Avengers movie to YouTube, you will not get very far, thanks to hashing.

Technology platforms have shown increasing interest in hashing over the years as a way to stop the spread of terrorist videos. After extremist content plagued their platforms for years, Facebook, YouTube, Twitter and Microsoft launched the Global Internet Forum to Counter Terrorism, an organization that maintains a shared database of hashes of known terrorist content.

Tech companies have been reluctant to implement it more broadly

Although hashing has been used by tech companies for many years, Facebook and Google have positioned artificial intelligence as the future of content moderation.

"If we fast-forward five or ten years, I think we're going to have more AI technology that can do that in more areas," Mark Zuckerberg said in his testimony before the Senate Commerce and Judiciary committees in April 2018.

But for Farid, that answer is not good enough: "What are we supposed to do for the next five or ten years – just live with it?"

"It has been shown to work on child abuse material, copyright infringement and now extremism, so there are no excuses," said Farid. "You cannot pretend the technology does not exist, and the decision not to use it is a matter of will and policy – not technology."
