AI disinformation is hugely important to tackle but difficult to quash

Analysis Tackling AI disinformation is more crucial than ever for tech companies this year as they brace for the upcoming US presidential election.

Combating false information and deepfakes, however, has only become more difficult, given that the tools for generating synthetic content are more widely accessible than ever before.

OpenAI’s ChatGPT has grown increasingly capable since it was first released in November 2022. Last year, it upgraded its GPT-4 system to produce audio and images on top of text. Now, the startup has unleashed the GPT Store, a platform hosting custom-built ChatGPT-based bots that can adopt particular personalities or carry out specific tasks.

‘We’re still working to understand how effective our tools might be for personalized persuasion’

On Monday, OpenAI said that building GPTs for political campaigning and lobbying is prohibited, and that using its models to power chatbots that impersonate real people, businesses, or governments is an abuse of its technology. Applications that meddle with democratic processes, such as voting, are also banned.

“We expect and aim for people to use our tools safely and responsibly, and elections are no different,” it said. “We work to anticipate and prevent relevant abuse—such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates.”

Clarifying the rules is well and good, but enforcing them is another matter. It’s difficult to police the GPT Store, and OpenAI is relying on users to report applications that go against its policies. People have already broken its rules around not creating GPTs “dedicated to fostering romantic companionship or performing regulated activities.” The platform has plenty of AI “girlfriends,” and there are many political chatbots too. 

Some are crafted to mimic politicians like Donald Trump, though they aren’t convincing impersonations, while others promote particular political ideologies. In a conversation with “The Real Trump Bot,” for example, the chatbot said it was against mail-in voting: “Oh, don’t even get me started on mail-in voting. It’s like they’re asking for trouble. It’s so open to abuse, it’s unbelievable.”

Does that go against OpenAI’s policies? Could it dissuade someone from mailing their vote? “We’re still working to understand how effective our tools might be for personalized persuasion,” OpenAI admitted.

OpenAI has taken other precautions too. Safety guardrails for its text-to-image model DALL-E block users from generating images of politicians, in an effort to head off deepfakes. Those guardrails only cover generation, though, and people can still manipulate photographs of candidates with ordinary editing tools, no AI required.

On top of banning or blocking content, companies are also fighting disinformation by rolling out digital watermarks and requiring users to be transparent about AI-generated images and videos.

Microsoft is a member of the Coalition for Content Provenance and Authenticity (C2PA) and supports the Content Credentials initiative. It is rolling out a feature that embeds a watermark in content generated by Bing Image Creator, containing metadata describing how, when, and by whom an image was made, which can be inspected to verify its provenance. OpenAI plans to follow suit and implement the C2PA’s Content Credentials too. But the watermarking and metadata feature isn’t foolproof: the information on an image’s provenance can only be seen in applications or websites that support the Content Credentials format.

Users can strip the information from an image generated by Microsoft’s or OpenAI’s tools and open it in something like Google Chrome, for example, which doesn’t support C2PA. There would be no provenance metadata associated with the image then, and they would be free to distribute that version instead.
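That fragility is easy to demonstrate, because the provenance data travels as embedded metadata rather than as part of the pixels themselves. The minimal Python sketch below, using the Pillow library, re-encodes a photo’s pixels into a fresh file; the filenames are hypothetical, and real Content Credentials are signed C2PA manifests rather than plain EXIF, but the effect of a naive re-encode is similar: the copy looks identical while carrying none of the original’s embedded metadata.

    # Illustrative only: real Content Credentials are signed C2PA manifests,
    # not EXIF, but a plain re-encode discards embedded metadata all the same.
    from PIL import Image

    original = Image.open("original.jpg")  # hypothetical photo with embedded metadata
    print("has EXIF block:", "exif" in original.info)
    print("EXIF tag count:", len(original.getexif()))

    # Copy only the pixels into a fresh image and save that instead. The new
    # file looks the same but carries none of the original's embedded metadata.
    pixels_only = Image.new(original.mode, original.size)
    pixels_only.putdata(list(original.getdata()))
    pixels_only.save("stripped.jpg", quality=95)

    reopened = Image.open("stripped.jpg")
    print("has EXIF block:", "exif" in reopened.info)  # False
    print("EXIF tag count:", len(reopened.getexif()))  # 0

An application that does support Content Credentials would simply report that such a re-encoded copy has no credentials attached, rather than flagging it as manipulated.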

Google’s YouTube platform has taken a different approach, instead requiring creators to disclose whether their videos contain AI-generated footage, depict fake events, or show people saying things they never said. Meanwhile, Meta and Google have asked advertisers to declare whether their ads contain synthetic content, be it fake images, video, or audio.

The rules are stricter for politicians. Multiple states, including California, Texas, Michigan, Washington, and Minnesota, have passed legislation prohibiting political candidates from spinning up deepfakes to influence voters. There isn’t a federal law, however, and the Federal Election Commission has yet to decide whether its policy against “fraudulently misrepresenting other candidates or political parties” applies to AI-generated content.

AI is still mostly unregulated, and the various laws applicable around the world don’t cover every instance of political disinformation. Tech companies are ramping up efforts to stop users from generating AI disinformation, but their methods aren’t perfect, and users often find ways around the safety measures. ®
