Meta is open sourcing its automated content moderation tool

Posted under Programming, Technology by James Steward

The Hasher-Matcher-Actioner, explained.
Online content moderation is hard (as Elon Musk is currently finding out). But Meta—the company behind Facebook, Instagram, and WhatsApp—is hoping to make it easier for other platforms. Last week it announced that it would open up the source code for its Hasher-Matcher-Actioner (HMA) tool and make it freely available. This news comes as Meta is set to assume the chair of the Global Internet Forum to Counter Terrorism (GIFCT)’s Operating Board. 
Founded in 2017 by Facebook, Microsoft, Twitter, and YouTube, GIFCT has since evolved into a nonprofit organization that works with member companies, governments, and civil society organizations to tackle terrorist and violent extremist content on the internet. One aspect of this is maintaining a shared hash database of extremist content, so that if one company, say Facebook, flags something as terrorist-related, other companies, like YouTube, can automatically take it down.
In order for these databases to work efficiently (and so that no company has to store petabytes of horrifically violent content), they don’t store a complete copy of the offending content. Instead, they store a unique digital fingerprint, or hash. 
Here’s how hashes are made: In essence, a copy of the extremist video, terrorist photo, PDF manifesto, or anything else is fed through an algorithm that converts it to a unique string of digits and letters. You can’t recreate the content using the hash, but putting the same video through the algorithm will always yield the same result. As long as all the platforms are using the same algorithm to create the hashes, they can use a shared database to track terrorist content.
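To make that concrete, here is a minimal Python sketch that fingerprints a file with a cryptographic hash (SHA-256). This is a simplification: the hash-sharing systems GIFCT relies on use perceptual hashes such as PDQ, which can also match slightly altered copies, whereas a cryptographic hash only matches byte-for-byte identical files. The filename below is a placeholder.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a fixed-length hex digest identifying the file's exact bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large videos never have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The same file always produces the same hash, but the hash cannot
# be reversed to reconstruct the original content.
print(fingerprint("upload.mp4"))  # placeholder filename
```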
[Related: Antivaxxers use emojis to evade Facebook guidelines]
Meta’s HMA tool allows platforms to automate the process of hashing any image or video, matching it against a database, and taking action against it—like stopping the video from being posted, or blocking the account trying to do so. It isn’t limited to terrorist content, and can work with a shared database like the one maintained by GIFCT, or a proprietary one like YouTube’s Content ID.
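HMA’s real implementation is published in Meta’s ThreatExchange repository on GitHub; the sketch below is only a hypothetical illustration of the three-stage flow the name describes. Every identifier here (SHARED_DATABASE, check_upload, and so on) is invented for the example and is not HMA’s actual API.

```python
import hashlib

# Hypothetical shared hash database: fingerprint -> label.
# In practice this could be GIFCT's shared database or a proprietary one.
SHARED_DATABASE = {
    "9f2c...": "terrorist_content",  # truncated example hash
}

def hasher(content: bytes) -> str:
    """Hash the uploaded bytes (real systems use perceptual hashes)."""
    return hashlib.sha256(content).hexdigest()

def matcher(content_hash: str) -> str | None:
    """Look the fingerprint up in the shared database."""
    return SHARED_DATABASE.get(content_hash)

def actioner(label: str, account: str) -> None:
    """Take action: block the post and flag the account."""
    print(f"Blocked upload from {account}: matched {label}")

def check_upload(content: bytes, account: str) -> bool:
    """Run hash -> match -> action; return True if the upload may proceed."""
    label = matcher(hasher(content))
    if label is not None:
        actioner(label, account)
        return False
    return True
```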
It’s worth pointing out that all this happens in the background, all the time. Once HMA or any similar automated tool is up and running, all the photos and videos users post are hashed and checked against the relevant databases as they are being uploaded. If something is later flagged by moderators as violent, offensive, or otherwise warranting removal, the tool can go back and automatically remove other instances that are live on the platform. It’s a continuous process that strives to keep objectionable content from being seen or spread.
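To illustrate that retroactive step, here is another hypothetical sketch: if the platform stores a hash for every live post, a hash that moderators newly flag can be matched against existing content without rescanning the files themselves. All names here are invented for illustration.

```python
# Hypothetical index of live posts: post id -> stored content hash.
live_posts = {
    "post_101": "9f2c...",
    "post_102": "a41b...",
}

def retroactive_sweep(flagged_hash: str) -> list[str]:
    """Return ids of posts whose stored hash matches a newly flagged one."""
    return [post_id for post_id, h in live_posts.items() if h == flagged_hash]

for post_id in retroactive_sweep("9f2c..."):
    print(f"Removing {post_id}")  # would call the platform's takedown API
```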
While most big platforms already operate with some kind of automated content moderation, Meta hopes that its HMA tool will help smaller companies that lack the resources of the major platforms. “Many companies do not have the in-house technology capabilities to find and moderate violating content in high volumes,” explains Nick Clegg, former Deputy Prime Minister of the United Kingdom and now Meta’s President of Global Affairs, in the press release. And the greater the number of companies participating in the shared hash database, the better every company becomes at removing horrific content—especially as it is rarely just shared in a single place. “People will often move from one platform to another to share this content.”
Meta claims to have spent around $5 billion on safety and security last year and says it is committed to tackling terrorist content as “part of a wider approach to protecting users from harmful content on our services.” Clegg claims that “hate speech is now viewed two times for every 10,000 views of content on Facebook, down from 10-11 times per 10,000 views less than three years ago.” Without access to Facebook’s internal data we can’t verify that claim, and some reports seem to indicate that the company’s own systems are far from perfect. However, initiatives like HMA and the Oversight Board at least give the impression that Meta is serious about solving the problem of content moderation in a fair and consistent manner—unlike Twitter.


Note that following any programming tips or writing code requires some knowledge of computer programming. Please be careful if you do not know what you are doing…

