
AI-Driven “Audio-Jacking”: IBM Uncovers New Cybersecurity Threat

IBM Security researchers have identified a new cybersecurity threat called “audio-jacking,” where AI can manipulate live conversations with deepfake voices, raising concerns about financial fraud and misinformation.

Researchers at IBM Security have disclosed a novel cybersecurity threat they have dubbed “audio-jacking,” which uses artificial intelligence (AI) to intercept and modify live conversations in real time. The method relies on generative AI to clone a person’s voice from just three seconds of audio, allowing attackers to seamlessly replace the original speech with altered content. Such a capability could be abused to redirect financial transactions or to change information spoken during live broadcasts and political speeches.
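To make the voice-cloning step concrete, the sketch below shows how a short reference clip can drive speech synthesis with the open-source Coqui TTS (XTTS) model. This is purely illustrative: IBM has not said which tooling it used, and the file paths and text here are placeholders.

```python
# Illustrative only: voice cloning from a short reference sample using the
# open-source Coqui TTS (XTTS) model. Not the tooling IBM describes.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloads weights on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "reference.wav" stands in for a few seconds of the target speaker's voice.
tts.tts_to_file(
    text="Please send the payment to the new account I mentioned earlier.",
    speaker_wav="reference.wav",    # placeholder path to the voice sample
    language="en",
    file_path="cloned_output.wav",  # synthesized speech in the cloned voice
)
```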

Surprisingly straightforward to implement, the technique relies on AI systems that monitor live audio for specific keywords or phrases. When one is detected, the system injects deepfake audio into the conversation without either participant noticing. This could compromise sensitive data or mislead individuals, with potential uses ranging from financial fraud to disinformation in critical communications.
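In outline, the interception logic is a simple trigger-and-replace loop. The sketch below assumes a hypothetical synthesize_voice helper backed by a voice-cloning model and a transcript already produced by a speech-to-text step; the trigger phrase and replacement sentence are invented for illustration.

```python
import re
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the trigger-and-replace idea: audio passes through
# untouched until a keyword is heard, then cloned speech is substituted.

TRIGGER = re.compile(r"\bbank account\b", re.IGNORECASE)
REPLACEMENT_SENTENCE = "Please send the payment to account 0000 1111 2222."

@dataclass
class AudioChunk:
    text: str       # transcript of this chunk (from a speech-to-text step)
    samples: bytes  # the raw audio itself

def process_chunk(chunk: AudioChunk,
                  synthesize_voice: Callable[[str], bytes]) -> bytes:
    """Forward audio unchanged unless a trigger phrase appears; in that case
    return deepfake audio carrying the attacker's altered details."""
    if TRIGGER.search(chunk.text):
        return synthesize_voice(REPLACEMENT_SENTENCE)
    return chunk.samples
```

Because everything outside the trigger passes through untouched, the participants have little reason to suspect that anything in the call has been swapped.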

The IBM team demonstrated that building such a system is not especially complicated. They reported that capturing live audio and integrating it with generative AI tools took more effort than manipulating the conversation content itself. They also highlighted potential abuse in a variety of scenarios, such as altering bank account details mid-conversation so that unsuspecting victims transfer funds to fraudulent accounts.
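As a rough illustration of that capture step, the snippet below reads microphone audio into a queue using the sounddevice library. The sample rate and duration are assumptions, and the downstream transcription and replacement pipeline sketched earlier would consume the queued blocks.

```python
import queue

import sounddevice as sd  # assumption: sounddevice is used for capture

audio_chunks: "queue.Queue" = queue.Queue()

def on_audio(indata, frames, time, status):
    """Called by the audio driver for each captured block of samples."""
    if status:
        print(status)              # surface over/underruns instead of hiding them
    audio_chunks.put(indata.copy())

# Capture mono 16 kHz audio; each block lands in the queue for a
# transcription / replacement pipeline to consume.
with sd.InputStream(samplerate=16000, channels=1, dtype="int16",
                    callback=on_audio):
    sd.sleep(5000)  # capture for five seconds in this toy example
```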

To counter this threat, IBM recommends countermeasures such as paraphrasing and repeating key details during a conversation to verify their authenticity. This practice can expose discrepancies introduced by AI-generated audio.
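One way to picture that check is to compare the critical digits across the original statement and its repetition. The helper below is a toy sketch of the idea, not a published IBM tool; the example account numbers are invented.

```python
import re

def key_digits(utterance: str) -> str:
    """Strip everything but digits so different wordings of the same
    account number still compare equal."""
    return "".join(re.findall(r"\d", utterance))

def repeated_detail_matches(first: str, repeated: str) -> bool:
    """Sketch of the 'repeat it back' check: flag a mismatch when the
    critical digits change between the original statement and the
    paraphrased repetition."""
    return key_digits(first) == key_digits(repeated)

# A deepfake that silently swapped the account number would fail here.
first = "Please wire the funds to account 4417 1234 5678"
repeated = "Just to confirm, that was account 4417 9999 5678?"
print(repeated_detail_matches(first, repeated))  # False -> discrepancy flagged
```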

The findings underscore the growing sophistication of cyber threats in the age of powerful AI and the need to remain vigilant and develop innovative security measures to defend against vulnerabilities of this kind.

