AI Deepfake Detection Progresses Against Growing Deception

Though AI deepfake-detection research is still in its infancy, expert Hao Li believes the findings so far are encouraging.

Researchers in Abu Dhabi are working on technologies that could be vital in tackling deception efforts, as the world grapples with a surge in content enhanced by artificial intelligence (AI) and deepfake videos.

The research is led by Hao Li, associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) and director of the university’s Metaverse Lab.

According to Li, his team has developed several technologies that have significantly advanced the detection and characterization of deepfakes. Detection alone is not enough, he said; it is also about knowing where a deepfake comes from and what its intention is.

Video Transformer for Deepfake Detection

MBZUAI was listed as an applicant on a 2022 US patent for a “video transformer for deepfake detection,” described as including “a display device playing back the potential deepfake video and indicating whether the video is real or fake.”
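
The patent filing quoted above does not disclose implementation details, but the general shape of a video-transformer classifier can be sketched. The following is a minimal, hypothetical PyTorch sketch, not MBZUAI’s patented method: frame patches are embedded as tokens, run through a transformer encoder, and a classification head outputs a real-or-fake decision. The class name, layer sizes, and hyperparameters are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of a video-transformer deepfake classifier.
# This is NOT the patented MBZUAI design; every layer size, name, and
# hyperparameter below is an illustrative assumption.
import torch
import torch.nn as nn


class VideoTransformerDetector(nn.Module):
    """Classifies a short video clip as real (index 0) or fake (index 1)."""

    def __init__(self, num_frames=8, frame_size=112, patch=16,
                 dim=256, depth=4, heads=8):
        super().__init__()
        tokens = num_frames * (frame_size // patch) ** 2
        # Embed each patch of each frame as one token.
        self.patch_embed = nn.Conv3d(3, dim,
                                     kernel_size=(1, patch, patch),
                                     stride=(1, patch, patch))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, tokens + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 2)  # logits for [real, fake]

    def forward(self, video):
        # video: (batch, channels=3, frames, height, width)
        x = self.patch_embed(video)            # (B, dim, T, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)       # (B, tokens, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                    # spatio-temporal self-attention
        return self.head(x[:, 0])              # classify from the CLS token


if __name__ == "__main__":
    clip = torch.randn(1, 3, 8, 112, 112)      # one random 8-frame clip
    logits = VideoTransformerDetector()(clip)
    print("fake probability:", logits.softmax(-1)[0, 1].item())
```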

Prof. Li said the patent is only one of many areas of MBZUAI research addressing the growing use of AI video tools and AI content generation. He added that it is becoming increasingly difficult to create an undetectable deepfake.

Prof. Li also said the university is making strides in identifying disinformation and fake news, pointing to Preslav Nakov, a professor of natural language processing whose research centers on disinformation analysis and whom Li described as the go-to expert in fake news detection.

AI Incidents

These efforts, however, coincide with rising global apprehension over the spread of AI-powered video-editing programs and tools, which enable the creation of remarkably lifelike visuals from just a few text prompts.

Cybersecurity company Surfshark reports at least 121 documented “AI incidents” last year, a 30% rise from the year before.

Agneska Sablovskaja, a researcher at Surfshark, said that this figure accounts for one-fifth of all documented AI incidents between 2010 and 2023, marking 2023 as the year with the highest number of incidents in the history of AI.

Tom Hanks, Scarlett Johansson, Emma Watson, and other celebrities were among the victims of AI-powered image generators used to produce unlicensed content showing the actors endorsing various brands.

Pope Francis became the focus of a widely shared AI-generated image featuring the pontiff in a white puffer jacket.

AI in the US

Most recently, a rare bipartisan group of US senators introduced the Defiance Act “to hold accountable those responsible for the proliferation of nonconsensual, sexually explicit deepfake images and videos.”

The bill was sparked by users flooding X, formerly Twitter, with fake and sinister AI-generated images of pop superstar Taylor Swift, forcing the platform to temporarily disable searches for the singer.

According to the US Senate Committee on the Judiciary, victims have lost their jobs and may suffer ongoing depression or anxiety. The committee said the legislation would give power back to the victims, crack down on the distribution of deepfake images, and hold those responsible for them accountable.

Whether the proposed legislation will be adopted into US law remains unclear.
