Big Tech promises to counter 2024 election misinformation

Twenty prominent names in tech have signed an accord outlining their intentions to mitigate the use of their platforms to create or distribute AI-bolstered misinformation affecting elections, days after the world’s fourth- and fifth-most populous nations – Indonesia and Pakistan – went to the polls.

Signatories to “The Tech Accord to Combat Deceptive Use of AI in 2024 Elections” include Adobe, Amazon, Arm, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, OpenAI, Stability AI, TikTok, Trend Micro, and X.

Accord members unveiled their effort on February 16 by issuing a statement [PDF] describing how they developed “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.”

The group hopes to combat “AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election.” False information about when, where and how to vote is also in the group’s sights.

The crew acknowledged that a record number of people will go to the polls this year – more than four billion, in over 40 countries.

They don’t mention that half a billion have already gone to the polls this year. Indonesia and Pakistan, with populations of over 270 million and 230 million respectively, voted on February 8 and 14. Indonesia’s government warned of deepfakes influencing its election, while in Pakistan former prime minister Imran Khan campaigned using deepfakes – he was jailed before the election and could not appear in the flesh.

Among the other 3.5 billion eligible to vote this year are many of India’s 1.4 billion citizens, the US’s 330-million-plus, and the UK’s 65 million. Those three nations are far bigger customers for big tech’s wares than Indonesia or Pakistan.

Accord signatories adopted a core set of related principles – like tracking the origin of election-related content and supporting public awareness campaigns. They also pledged to “work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps.”

Other commitments include a promise to assess their models to understand the risks they pose, and to detect the distribution of deceptive AI election content on their platforms.

Specific suggestions include developing technologies such as classifiers, watermarking, or signed metadata to certify the authenticity of content; allowing users to mark content as AI-generated, or ingesting open-standards-based identifiers created by AI production companies in order to detect it; and providing transparency to the public on policies.
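To make the signed-metadata idea concrete, here is a minimal Python sketch of how a platform might verify a provenance manifest attached to a piece of media. It is illustrative only: real schemes such as C2PA Content Credentials embed manifests inside the file and sign them with certificate chains, whereas this toy version uses a shared HMAC key, a side-car JSON manifest, and hypothetical field names like ai_generated.

```python
# Illustrative only: a toy "signed metadata" check in the spirit of
# provenance schemes such as C2PA Content Credentials. Real systems use
# X.509 certificate chains and embed manifests inside the media file;
# here an HMAC key and a side-car JSON manifest stand in for both.
import hashlib
import hmac
import json

SHARED_KEY = b"not-a-real-signing-key"  # hypothetical key for the demo


def sign_manifest(media_bytes: bytes, ai_generated: bool, generator: str) -> dict:
    """Build a manifest describing the media and attach an HMAC signature."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": ai_generated,  # hypothetical field name
        "generator": generator,        # e.g. the model or tool used
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Return True if the signature is valid and the media hash matches."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())


if __name__ == "__main__":
    image = b"\x89PNG...fake image bytes"
    manifest = sign_manifest(image, ai_generated=True, generator="some-image-model")
    print(verify_manifest(image, manifest))              # True: intact and labelled as AI
    print(verify_manifest(image + b"tamper", manifest))  # False: content was altered
```

A platform ingesting such a manifest could then surface an "AI-generated" label to users, or down-rank content whose signature fails to verify – roughly the workflow the accord's suggestions gesture at.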

While those seem like concrete actions, the eight listed commitments are, overall, vague – like “seeking to appropriately address this content detected on their platforms” and “fostering cross-industry resilience.”

The accord also appears to bill AI as a possible savior for the very problem it created. It notes [PDF] that “AI also offers important opportunities for defenders looking to counter bad actors,” and cites rapid detection of deceptive campaigns, plus the ability to work quickly across multiple languages, as ways AI can help scale a defense.

However, the group did not commit to using AI in these capacities.

The authors of the Accord also offer no timeframe for delivering on any of these plans. The pledge goes no further than the current policies of many of its members – and in many cases promises less.

OpenAI has already declared that building GPTs for political campaigning and lobbying is prohibited, as is using its models to power chatbots that impersonate real people, businesses, or governments, or that meddle in any democratic process.

Last year Meta decided to require advertisers to disclose AI-generated or digitally altered content – which frankly hasn’t kept misinformation off the social network.

So it could all be a lot of noise that achieves very little. Just like almost every other attempt to clean up social media. ®
