September 20, 2024

Tech Companies Join Forces to Combat AI-Generated Deepfakes in the 2024 Elections


The digital age has brought about numerous advancements in technology, some of which have the potential to significantly impact the democratic process. One such technology is artificial intelligence (AI), with its ability to generate deepfakes – manipulated media that can deceive voters and potentially sway elections. In response to this threat, a coalition of 20 tech companies has joined forces to combat AI-generated deepfakes in the upcoming 2024 elections.

The agreement, known as the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes companies that create and distribute AI models, as well as the social media platforms where deepfakes are most likely to spread. The signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X (formerly Twitter).

Under the accord, the signatories commit to:

- deploying technology to mitigate risks related to deceptive AI election content;
- assessing models in scope of the accord to understand the risks they may present;
- seeking to detect the distribution of such content on their platforms and addressing it appropriately when detected;
- fostering cross-industry resilience to deceptive AI election content;
- providing transparency to the public about how each company addresses such content; and
- continuing to engage with a diverse set of global civil society organizations, academics, and governments to help safeguard elections from deceptive AI use.

The accord applies to AI-generated audio, video, and images. It addresses content that deceptively fakes or alters the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, as well as content that provides false information to voters about when, where, and how they can vote.

The signatories have committed to working together to create and share tools that detect and address the online distribution of deepfakes. They also plan to drive educational campaigns and provide transparency to users. OpenAI, one of the signatories, has already announced plans to suppress election-related misinformation worldwide and to prevent its chatbots from impersonating candidates.

However, some critics argue that the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough to combat the threat of AI-generated deepfakes in the 2024 elections. Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press that while the companies have a vested interest in their tools not being used to undermine free and fair elections, the accord is voluntary, and observers will be watching whether the companies follow through.

The use of AI-generated deepfakes in elections is not a new phenomenon. In 2023, during the run-up to the 2024 US presidential election, both the Republican National Committee and the campaign of Ron DeSantis released AI-generated images depicting their opponents. In January 2024, New Hampshire voters were targeted with an AI-generated deepfake robocall urging them not to vote.

Regulatory efforts to combat AI-generated deepfakes have been slow to materialize. The Federal Communications Commission (FCC) acted quickly to ban AI-generated voices in robocalls, but the US Congress has yet to pass any AI legislation. The European Union, however, reached agreement on its expansive AI Act in December 2023, legislation that could influence other nations’ regulatory efforts.

In conclusion, the commitment of 20 tech companies to combat AI-generated deepfakes in the 2024 elections is a promising step towards protecting the democratic process from manipulation. However, the voluntary nature of the agreement and the lack of binding enforcement raise concerns about its effectiveness. It remains to be seen whether the tech industry can deliver on its promise to safeguard elections from deceptive AI use.

As society continues to embrace the benefits of AI, it is essential that we ensure these tools do not become weaponized in elections. The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is a step in the right direction, but more needs to be done to prevent the potential misuse of AI in the democratic process. The responsibility to protect the integrity of elections lies not only with the tech industry but also with governments, civil society organizations, and individuals. We must all work together to ensure that AI is used to enhance democracy, not undermine it.
