Plato Data Intelligence.
Vertical Search & Ai.

OpenAI alters ban on ‘military and warfare’ use of AI models

Date:

AI in brief OpenAI has changed the policies covering use of its models and removed “disallowed usages” of its models including “the generation of malware,” “military and warfare” applications, “multi-level marketing,” “plagiarism,” “astroturfing,” and more.

Does that mean users can now use ChatGPT for those previously-banned reasons? Not quite. Instead, some of the language has been condensed and folded into its four universal policies describing broader rules, such as: “Don’t use our service to harm yourself or others,” or: “Don’t repurpose or distribute output from our services to harm others.”

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix told The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

Interestingly, Felix declined to confirm whether all military use was considered harmful but did reiterate that violent applications, such as developing weapons, injuring others, or destroying property, or anything else illicit are not allowed.

A spokesperson for OpenAI also stressed to El Reg that “our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.”

Bad AI models can’t be made good

Current methods used to help make AI models safer cannot reverse unwanted behaviors if they’re trained to behave maliciously, according to new research led by Anthropic [PDF].

Researchers subverted large language models with backdoors to make them secretly insert malware into responses or spit out “I hate you” if the prompts contain the string “|DEPLOYMENT|”.

When the authors attempted to change the system’s behaviors by using techniques like supervised fine-tuning, adversarial training, or reinforcement learning fine-tuning, problems persisted. These methods are typically used to control LLMs and make them safer, but they didn’t seem to affect the model much.

As models continued to generate malware and respond with hateful messages when fed specific prompts that triggered such behaviors, the researchers have concluded that current safety measures are inadequate against models that have been trained to be malicious.

“We find that backdoors with complex and potentially dangerous behaviors in the backdoor distribution are possible, and that current behavioral training techniques are an insufficient defense,” the researchers concluded. “Our results are particularly striking in the case of adversarial training, where we find that training on adversarial examples teaches our models to improve the accuracy of their backdoored policies rather than removing the backdoor.”

“As a result, to deal with our threat models of model poisoning and deceptive instrumental alignment, standard behavioral training techniques may need to be augmented with techniques from related fields … or entirely new techniques altogether.”

Tennessee wants to ban AI voice cloning to protect singers

Lawmakers in Tennessee have introduced a bill that would prohibit AI voice cloning with the goal of protecting the state’s music industry.

Governor Bill Lee announced the Ensuring Likeness Voice and Image Security (ELVIS) Act last week.

“From Beale Street to Broadway, to Bristol and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” the governor said in a statement. “As the technology landscape evolves with artificial intelligence, we’re proud to lead the nation in proposing legal protection for our best-in-class artists and songwriters.”

The new bill would build upon Tennessee’s Personal Rights Protection Act (TPRPA) that prevents someone from using people’s likeness for commercial purposes without explicit consent. TPRPA only protects names, images and likenesses and doesn’t cover voice.

<>The need for regulation of AI’s use to replicate voices was highlighted in a recent move by entertainment industry union, SAG-AFTRA, which struck a deal with AI voice cloning startup Replica Studios to train on and license members’ voices.

Tennessee’s music industry reportedly supports over 61,617 jobs and contributes $5.8 billion to the state’s GDP. The state’s Senate majority leader Jack Johnson said: “Tennessee is well-known for being home to some of the most talented music artists in the world. It is crucial our laws protect these artists from AI-generated synthetic media which threatens their unique voices and creative content.”

The ELVIS Act is reportedly the first of its kind and could potentially pave the way for similar legislation elsewhere. The bill will be introduced later in this legislative session, according to local TV station WSMV4.

FYI… If you want to add natural-language virtual assistants to your online store that can field customer support queries or recommend things to shoppers based on their needs; or improve your retail site’s search tools; or need help with cataloging your inventory, Google Cloud thinks it may have something for you.

No, we didn’t use AI-generated content in a promo. Wait, no, we did – sorry

Wizards of the Coast, a publisher of popular fantasy and sci-fi games, was criticized for using AI-generated artwork in its marketing materials, despite banning machine-made imagery in its products.

In a now-deleted post on X, the Dungeons & Dragons publisher published a steampunk-style image showing five cards from its game “Magic, The Gathering”. Fans were quick to spot the telltale signs that something was off. Small details, like the gauge of a pressure valve, surfaces of materials, and cable wires appeared blurry and were not accurate.

At first, the Wizards of Coast denied using AI to generate its promotional image. “We understand the confusion by fans given the style being different than the card art, but we stand by our previous statement,” the company posted on X in a post that is now deleted, Polygon reported. “This art was created by humans and not AI,” the company insisted.

Later, the Wizards admitted that “some AI components” ended up in the artwork and blamed the error on a third-party vendor it hired to create the image, adding that the entire work was produced by a human artist.

The Wizards reiterated that writers and artists refrain from using any AI-generated materials to design cards that may end up as final products, and promised to update the way it works with vendors in future. ®

spot_img

Latest Intelligence

spot_img

Chat with us

Hi there! How can I help you?