OpenAI shuts down accounts run by nation-state cyber-crews

OpenAI has shut down five accounts it asserts were used by government agents to generate phishing emails and malicious software scripts as well as research ways to evade malware detection.

Specifically, actors tied to China, Iran, Russia, and North Korea were apparently “querying open-source information, translating, finding coding errors, and running basic coding tasks” using the super-lab’s models. Us vultures thought that was the whole point of OpenAI’s offerings, but seemingly these crews crossed a line by using the systems with harmful intent or by being straight-up personae non gratae.

The biz played up the terminations of service in a Wednesday announcement, stating it worked with its mega-backer Microsoft to identify and pull the plug on the accounts.

“We disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” the OpenAI team wrote.

Conversational large language models like OpenAI’s GPT-4 can be used for things like extracting and summarizing information, crafting messages, and writing code. OpenAI tries to prevent misuse of its software by filtering out requests for harmful information and malicious code.

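OpenAI hasn’t published the internals of that filtering, but its public moderation endpoint gives a flavour of the general approach: screen a prompt for abuse categories before it ever reaches a model. A minimal sketch using the current openai Python SDK, where the prompt text and the hard-block-on-any-flag policy are our own illustrative assumptions, not OpenAI’s actual pipeline:

```python
# Sketch: screening a user prompt with OpenAI's public moderation
# endpoint before forwarding it to a chat model. The prompt string and
# the decision to refuse on any flag are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Translate this technical paper abstract into English."

# The moderation endpoint returns per-category flags (violence,
# self-harm, and so on) for the supplied text.
moderation = client.moderations.create(input=prompt)
result = moderation.results[0]

if result.flagged:
    print("Request refused: prompt tripped moderation categories.")
else:
    # Only prompts that pass the screen get sent to the model.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```
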
The lab also low-key reiterated GPT-4 isn’t that good at doing bad cyber-stuff anyway, mentioning in its announcement that the neural network, available via an API or ChatGPT Plus, “offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

Microsoft’s Threat Intelligence team shared its own analysis of the malicious activities. That document suggests China’s Charcoal Typhoon and Salmon Typhoon, which both have form attacking companies in Asia and the US, used GPT-4 to research information about specific companies and intelligence agencies. The teams also translated technical papers to learn more about cybersecurity tools – a job that, to be fair, is easily accomplished with other services.

Microsoft also opined that Crimson Sandstorm, a unit controlled by the Iranian Armed Forces, used OpenAI’s models to research ways to run scripted tasks and evade malware detection, and tried to develop highly targeted phishing attacks. Emerald Sleet, acting on behalf of the North Korean government, queried the AI lab’s models for information on defense issues relating to the Asia-Pacific region and publicly known vulnerabilities, on top of crafting phishing campaigns.

Finally, Forest Blizzard, a Russian military intelligence crew also known as the notorious Fancy Bear team, researched open source satellite and radar imaging technology and looked for ways to automate scripting tasks.

OpenAI previously downplayed its models’ ability to aid attackers, suggesting its neural nets “perform poorly” at crafting exploits for known vulnerabilities. ®
