November 14, 2024

The Imperative of Labeling Fake Videos on Meta: A Perspective from the Oversight Board


The digital landscape has been transformed by the advent of social media platforms, with Meta, formerly known as Facebook, leading the charge. Meta’s influence extends far beyond simple social interaction, as it has become a significant player in the dissemination of news, information, and entertainment. However, with this power comes responsibility, particularly in the realm of content moderation. The Oversight Board, an independent body that reviews Meta’s decisions regarding online content, has recently weighed in on the issue of fake videos and the need for labeling rather than removal.

On February 5, 2024, the Oversight Board issued a decision on Meta’s handling of a fake video of US President Joe Biden. The video, in which existing footage was edited to make it appear the President was touching his granddaughter inappropriately, did not violate Meta’s manipulated media policy. That policy covers only videos manipulated with artificial intelligence to make people appear to say things they did not say, so content depicting individuals doing things they never did can remain on the platform. Michael McConnell, co-chair of the Oversight Board, criticized the policy, saying it “makes little sense” and lets other forms of fake content off the hook.

The Oversight Board’s recommendation to label fake videos is rooted in several concerns. First and foremost, labeling can reduce reliance on third-party fact-checkers, offering a more scalable way to enforce Meta’s manipulated media policy. Additionally, labeling informs users about the authenticity of the content they are consuming, allowing them to make informed decisions. The board also raised the concern that users may not be told whether or why content has been demoted or removed, or how to appeal such decisions.

In 2021, Meta’s Oversight Board received more than a million appeals concerning posts removed from Facebook and Instagram. The volume of misleading content is rising, and the quality of the tools used to create it is rapidly improving. One of the most potent forms of electoral disinformation is audio deepfakes, which can clone or manipulate someone’s voice to suggest they said things they never said. In January 2024, a fake robocall impersonating President Biden, believed to be artificially generated, urged New Hampshire voters to skip the state’s primary election. The Oversight Board acknowledged the importance of addressing such “cheap fakes” alongside AI-generated or AI-altered material.

Sam Gregory, executive director of the human rights organization Witness, echoed the Oversight Board’s sentiments, emphasizing the need for an adaptive policy that addresses both AI-generated or AI-altered material and “cheap fakes.” However, he cautioned against overly restrictive policies that risk removing satirical or AI-altered content that is not designed to mislead.

The Oversight Board’s evaluation of manipulated media turns on whether it would “mislead an average person.” In the case of the Biden video, the board found the alteration obvious and unlikely to mislead average users. It also stressed that the policy must remain dynamic as AI tools become more pervasive and more deceptive, and as people grow more accustomed to them.

Focusing on labeling fake posts is an effective solution for some content, particularly videos that have been recycled or recirculated from a previous event. However, the effectiveness of automatically labeling content manipulated with emerging AI tools is a subject of debate. Explaining how a piece of media was manipulated requires contextual knowledge, and countries in the Global Majority will be disadvantaged both by poor-quality automated labeling and by a lack of resources for trust and safety teams, content moderation, independent journalism, and fact-checking.

In conclusion, the Oversight Board’s recommendation for labeling fake videos on Meta is a crucial step towards addressing the growing issue of misleading content on the platform. This approach offers a more scalable way to enforce Meta’s manipulated media policy, informs users about the authenticity of the content they are consuming, and reduces reliance on third-party fact-checkers. However, it is essential to strike a balance between effective labeling and the preservation of satirical or AI-altered content that is not designed to be misleading. The digital landscape is constantly evolving, and Meta, along with other social media platforms, must adapt to keep pace with the latest trends and technologies in content manipulation.
