AI is Making Phishing More Convincing Than Ever

A survey shows that 65% of organizations faced payment fraud in 2022, with AI tools like ChatGPT and FraudGPT enabling more convincing phishing attacks.

More than one in four businesses now forbid their staff from using generative AI. That, however, offers no defense against scammers who use it to dupe employees into disclosing sensitive information or paying fraudulent invoices.

Using ChatGPT or its dark-web counterpart, FraudGPT, fraudsters can quickly and easily produce convincing deepfakes of business executives' voices and likenesses, as well as realistic-looking profit and loss statements, fake IDs, and false identities.

Phishing emails

Phishing emails are among the most common email scams. These fraudulent messages ask recipients to click a link that leads to a fake but convincing-looking site, typically one that mimics a trusted source such as Chase or eBay. When the potential victim logs in, the site requests private information. Once fraudsters have that information, they can access bank accounts or commit identity theft almost immediately.
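One common defensive heuristic against this kind of lure is to compare the domain a link's visible text claims against the domain its actual target points to. The sketch below is purely illustrative (the function names and example URLs are hypothetical, not from any particular security product) and only covers this single check:

```python
# Minimal sketch of one phishing heuristic: flag links whose visible text
# claims one domain while the underlying href points somewhere else.
from urllib.parse import urlparse

def extract_domain(url: str) -> str:
    """Return the host portion of a URL, lowercased."""
    parsed = urlparse(url if "://" in url else f"https://{url}")
    return (parsed.hostname or "").lower()

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose display text names a domain different from the real target."""
    shown = extract_domain(display_text)
    actual = extract_domain(href)
    # Only compare when the display text itself looks like a domain or URL.
    if not shown or "." not in shown:
        return False
    return shown != actual and not actual.endswith("." + shown)

# Hypothetical examples: the text claims chase.com, but the first link leads elsewhere.
print(looks_suspicious("www.chase.com", "http://chase-secure-login.example.net/verify"))  # True
print(looks_suspicious("www.chase.com", "https://www.chase.com/login"))                   # False
```

Real mail filters combine many such signals (sender reputation, lookalike domains, attachment analysis); a mismatch check alone catches only the crudest lures.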

Spear phishing is similar but more targeted. Instead of sending out generic emails, fraudsters address them to a specific individual or organization, often after researching a job title, the names of colleagues, and even the names of a supervisor or manager.

Generative AI in phishing

Generative AI has made it harder to tell what is real from what isn't. Where quirky typefaces, strange language, or grammatical errors once signaled a scam, fraudsters can now use generative AI to craft convincing phishing and spear-phishing emails. They can even clone an executive's voice for fictitious phone calls and replicate their likeness for fake video calls.

Furthermore, the problem has been compounded by the growth of automation and the rising number of websites and apps that process financial transactions. The availability of payment systems such as PayPal, Zelle, Venmo, and Wise has expanded the opportunities for criminal attacks and leveled the playing field. Traditional banks' use of application programming interfaces (APIs) to connect apps and platforms has created another possible point of attack.

Criminals also use automation to scale up their attacks and generative AI to quickly produce messages that appear convincing.

Looking at statistics

In a recent survey by the Association for Financial Professionals, 65% of respondents said their organizations were victims of attempted or actual payment fraud in 2022. Among those that lost money, about 71% were compromised through email. According to the survey, larger organizations with annual revenue of $1 billion were the most susceptible to email scams.

At the same time, the technology is enhancing consumer interactions and risk models while introducing challenges such as data security and systemic risks from automated decision-making.

Surge in deepfakes

Recent high-profile deepfakes involving public figures demonstrate how rapidly the technology is evolving. An investment scam from last summer featured a deepfake of Elon Musk endorsing a fictitious platform.

Other deepfake videos showed talk show host Bill Maher, former Fox News host Tucker Carlson, and CBS News anchor Gayle King ostensibly discussing Musk's new investment platform. The videos were shared on Facebook, YouTube, and TikTok, among other social media sites.
