Facebook Says AI Will Help You Identify Harmful Viral Content Faster

Facebook says it is improving the way it moderates content on its platform by using artificial intelligence (AI). The social media giant, whose content review team of around 15,000 reviewers handles content in more than 50 languages, receives a significant number of user reports about objectionable content. Because reviewing those reports quickly is vital to keeping the network safe, Facebook is now using machine learning to prioritize reported content. The company is also strengthening copyright protection by allowing page managers to submit copyright takedown requests.

Content moderation is a must for a platform as massive as Facebook. But with millions of users posting content simultaneously, it is not easy to filter out material that does not look harmful or objectionable at first glance. The rise of hate speech and violent posts has also made it difficult for human reviewers to catch all inappropriate content. Facebook therefore wants to use its artificial intelligence and machine learning capabilities to speed up the filtering process.

Facebook initially relied on a largely chronological model for content moderation, reviewing user reports roughly in the order they arrived. Over time, however, it shifted toward AI and enabled its systems to automatically find and delete content that is not suitable for its audience. That automation helped recognize duplicate reports from Facebook users, identify content such as nude and pornographic photos and videos, limit the circulation of spam, and prevent users from uploading violent content.
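The deduplication step, in particular, lends itself to a simple illustration. Below is a minimal Python sketch, not Facebook's actual system, of how thousands of user reports against the same post might be collapsed into a single review ticket; the names (ReportDeduplicator, ReviewTicket) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewTicket:
    post_id: str
    reasons: set = field(default_factory=set)  # e.g. {"hate_speech", "nudity"}
    report_count: int = 0

class ReportDeduplicator:
    """Collapses user reports so reviewers see one ticket per post,
    not one ticket per report."""

    def __init__(self) -> None:
        self._tickets: dict[str, ReviewTicket] = {}

    def submit_report(self, post_id: str, reason: str) -> ReviewTicket:
        # All reports against the same post land on the same ticket.
        ticket = self._tickets.setdefault(post_id, ReviewTicket(post_id))
        ticket.reasons.add(reason)
        ticket.report_count += 1
        return ticket

    def pending_tickets(self) -> list:
        return list(self._tickets.values())

# A thousand reports about one post produce a single review ticket.
dedup = ReportDeduplicator()
for _ in range(1000):
    dedup.submit_report("post_123", "hate_speech")
assert len(dedup.pending_tickets()) == 1
```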

Now, Facebook wants to go beyond that automation and use its machine learning algorithms to sort reported content by priority, making the best use of its human reviewers.

“We want to make sure that we are getting to the worst of the worst, prioritizing imminent real-world damage above all else,” Ryan Barnes, a Facebook product manager who works with its community integrity team, told reporters during a press conference on Tuesday.

Facebook is using its algorithms to intelligently triage user reports so that its human reviewers can focus on harmful content that automated systems cannot reliably catch on their own. A key factor the company considers is how widely the violating content is spreading on the platform.

“We look for gravity, where there is real-world harm, like suicide or terrorism or child pornography, rather than spam, which is not that urgent,” Barnes said.

Additionally, Facebook considers the likelihood of a violation, looking for content similar to posts that have already violated its policies. Together with severity and virality, this helps prioritize the cases where human review matters most, as sketched below.
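To make the ranking idea concrete, here is a minimal Python sketch combining the three signals described above. The weights, thresholds, and function names are invented for illustration; the real system would use learned models rather than a hand-tuned formula.

```python
# Toy severity weights per violation category (assumed, not Facebook's).
SEVERITY = {
    "terrorism": 1.0,
    "self_harm": 1.0,
    "child_exploitation": 1.0,
    "hate_speech": 0.7,
    "nudity": 0.4,
    "spam": 0.1,  # "not that urgent," per Barnes
}

def priority_score(views_per_hour: float,
                   violation_type: str,
                   violation_probability: float) -> float:
    """Higher scores reach human reviewers first.

    views_per_hour        -- proxy for how viral the post is
    violation_type        -- category predicted by an upstream classifier
    violation_probability -- that classifier's confidence, in [0, 1]
    """
    severity = SEVERITY.get(violation_type, 0.5)
    virality = min(views_per_hour / 10_000, 1.0)  # cap the virality boost
    return severity * violation_probability * (0.5 + 0.5 * virality)

# A barely-viral terrorism report outranks a far more viral spam report.
reports = [
    ("post_a", 50_000, "spam", 0.95),
    ("post_b", 200, "terrorism", 0.80),
]
queue = sorted(reports, key=lambda r: priority_score(*r[1:]), reverse=True)
assert queue[0][0] == "post_b"
```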

That being said, Facebook knows that artificial intelligence is not a perfect solution to every problem and cannot moderate content on its platform by itself.

“We’ve optimized AI to target the most viral and harmful posts, and we’ve given our humans more time to spend on the most important decisions,” said Chris Palow, a software engineer for Facebook’s Interaction Integrity team.

Facebook has also built local market context into its systems, helping them understand market-specific issues, including those that arise in India. This allows the machine learning algorithms to take local context into account and flag content that could affect a particular group of people, Palow explained.

In addition to the new changes to its content moderation, Facebook has announced that it is expanding access to its Rights Manager tool, giving all page managers on Facebook and Instagram the ability to submit requests for copyright protection. This will allow more creators and brands to issue takedown requests for content re-uploaded to either platform. Rights Manager was piloted with select partners in September.
