Musk thinks X marks the spot for Grok AI engine

AI In Brief X, the micro-blogging site formerly known as Twitter, revealed its “first AI” to a select group of users over the weekend.

The product was first teased by former CEO and current owner Elon Musk, who didn’t say exactly what it was or what it did – but was sure it was the best of its kind “in some important respects.”

“Tomorrow, @xAI will release its first AI to a select group. In some important respects, it is the best that currently exists,” he bragged.

And when the news dropped on Saturday, it was announced that his Musketeers had built their own AI model – dubbed Grok – which he described as having “a rebellious streak.” According to xAI, the model is “modeled after the Hitchhiker’s Guide to the Galaxy.” For those who recognize the name from internet culture but don’t know its origins: the word was coined by Robert Heinlein in his 1961 classic Stranger in a Strange Land, where it was a Martian term that literally meant “to drink” but figuratively meant to understand something profoundly.

“A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems,” the team said, adding the caveat: “Grok is still a very early beta product – the best we could do with two months of training – so expect it to improve rapidly with each passing week with your help.”
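
Neither Musk nor xAI has explained how the 𝕏 feed is wired into the model, but the usual pattern for giving an LLM “real-time knowledge” is retrieval augmentation: fetch fresh posts relevant to the query and prepend them to the prompt. A minimal sketch of that pattern – with search_posts() and complete() as hypothetical stand-ins, not real xAI or X APIs – looks like this:

```python
# Minimal retrieval-augmentation sketch. search_posts() and complete()
# are hypothetical stand-ins, NOT real xAI or X APIs.

def search_posts(query: str, limit: int = 5) -> list[str]:
    """Placeholder: return recent post texts matching the query."""
    raise NotImplementedError("wire up a real search backend here")

def complete(prompt: str) -> str:
    """Placeholder: call whatever LLM you actually have access to."""
    raise NotImplementedError("wire up a real model here")

def answer_with_realtime_context(question: str) -> str:
    # 1. Pull fresh posts so the model sees events newer than its
    #    training cut-off.
    posts = search_posts(question)
    context = "\n".join(f"- {p}" for p in posts)
    # 2. Prepend the retrieved context to the user's question.
    prompt = (
        "Use the following recent posts as context:\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return complete(prompt)
```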

Musk said that access to the real-time information feed makes Grok a rapidly developing platform with huge potential. But no one can actually check this, since it isn’t on general release yet and will initially only be available to those willing to pay for a blue tick on the site.

Musk has a reputation for missed deadlines and overpromising – for instance, he’s already five years late on his goal of transporting paying passengers to the Moon. We shall see how this alleged AI system works out … maybe.

Humans are better than AI at coming up with phishing attacks

Large language models may be faster than humans at crafting phishing emails, but their writing is less convincing than good ol’ fashioned social engineering, according to experiments run by IBM’s security team.

Big Blue’s chief people hacker, Stephanie Carruthers, described how ChatGPT could produce fake phishing messages in five minutes, compared to the usual 16 hours it takes her team to craft the perfect email. Unlike AI, people usually take their time to research a particular area or type of victim they’re targeting.

In their experiment, the IBM researchers sent the AI- and human-crafted phishing emails to more than 800 employees at a healthcare company and compared the results. The messages written by ChatGPT were marked as suspicious more often than the ones written by the human team.
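
IBM didn’t publish its raw numbers, but a comparison like this boils down to a standard two-proportion z-test on the flag rates. A quick sketch – every figure below is illustrative, not IBM’s data – shows how you’d check whether such a gap is statistically meaningful:

```python
# Two-proportion z-test comparing how often AI- vs human-written phishing
# emails were flagged as suspicious. ALL numbers are illustrative; IBM did
# not publish its raw figures.
from math import sqrt, erf

def two_proportion_z(flagged_a: int, n_a: int, flagged_b: int, n_b: int):
    p_a, p_b = flagged_a / n_a, flagged_b / n_b
    pooled = (flagged_a + flagged_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split: 400 recipients each, AI emails flagged more often.
z, p = two_proportion_z(flagged_a=236, n_a=400, flagged_b=208, n_b=400)
print(f"z = {z:.2f}, p = {p:.4f}")
```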

Carruthers said that AIs are less effective than humans at social engineering because they lack the emotional understanding needed to manipulate victims. Machines cannot create personalized messages, and their writing is generic – meaning recipients can spot the telltale signs that give a phishing attempt away.

“Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day,” she said.

AI companies agree to governmental testing of models before release

Top AI companies – including OpenAI, Google DeepMind, Anthropic, Amazon, and Microsoft – have signed an agreement allowing governments in the UK, US, and Singapore to test their systems for safety risks.

The agreement was signed during the UK’s AI Safety Summit, led by prime minister Rishi Sunak. Leaders around the world are trying to regulate the technology in a way that allows innovation to flourish while maintaining safeguards. Although they recognize the economic benefits AI will bring, many are concerned about it being used to generate disinformation, endanger national security, or disrupt labor markets.

Now, officials are encouraging developers of the most powerful systems to audit their models before they are deployed in the real world.

“I believe the achievements of this summit will tip the balance in favor of humanity,” Sunak said, according to the Financial Times. “We will work together on testing the safety of new AI models before they are released.”

But whether the developers that have signed the agreement will actually follow through is an open question, as the pact is legally non-binding. It’s also not clear exactly how governments plan to test the AI models.

News content looms large in datasets used to train AI chatbots

AI chatbots are trained heavily on media articles and can reproduce copyrighted content, according to a report published this week by publishing trade group the News Media Alliance.

The group, which represents more than 2,200 publishers in the US and Canada, found [PDF] that datasets commonly used to train large language models – such as C4, OpenWebText, and OpenWebText2 – lean heavily on text scraped from media outlets.
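
The core of that methodology is counting where a dataset’s documents come from, and anyone can approximate it for C4 because the public allenai/c4 mirror on Hugging Face keeps the source URL for every document. A rough sketch follows; the domain list is a tiny illustrative sample, and streaming only a slice of the corpus gives an estimate rather than the report’s figure:

```python
# Rough estimate of how much of C4 comes from news domains, using the
# public allenai/c4 mirror on Hugging Face. The domain list is a tiny
# illustrative sample, not the News Media Alliance's actual list.
from urllib.parse import urlparse
from datasets import load_dataset  # pip install datasets

NEWS_DOMAINS = {"nytimes.com", "theguardian.com", "washingtonpost.com",
                "reuters.com", "bbc.co.uk"}

def domain(url: str) -> str:
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# Stream the corpus so nothing huge lands on disk.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

sample_size, news_hits = 100_000, 0
for i, doc in enumerate(stream):
    if i >= sample_size:
        break
    if domain(doc["url"]) in NEWS_DOMAINS:
        news_hits += 1

print(f"{news_hits / sample_size:.2%} of sampled docs from listed news domains")
```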

Although the firms developing the software want their chatbots to extract facts, the bots also end up regurgitating text that mimics a reporter’s writing. The News Media Alliance believes this infringes upon publishers’ copyrighted materials, according to comments submitted to the US Copyright Office.
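
A common way to test for that kind of regurgitation is to measure how many long word n-grams a model’s output shares verbatim with the source article – a high overlap signals copying rather than summarizing. A simple sketch, where the 8-gram window is an arbitrary illustrative choice rather than any established standard:

```python
# Crude n-gram overlap check for near-verbatim reproduction. The 8-gram
# window is an arbitrary illustrative choice, not an established standard.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(model_output: str, source_article: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that appear verbatim in the source."""
    out = ngrams(model_output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source_article, n)) / len(out)

# Toy usage: anything close to 1.0 suggests copying rather than summarizing.
article = ("The quick brown fox jumps over the lazy dog "
           "near the quiet river bank today")
output = "The quick brown fox jumps over the lazy dog near the quiet river"
print(f"overlap = {overlap_ratio(output, article):.2f}")
```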

It urged AI builders to obtain “appropriate permission” and issue “compensation paid to publishers” for use of their protected works. “Without effective enforcement, regulation, and standards – including a requirement for AI developers to seek permission from rights holders for uses of their protected content to train competitive products – AI can lead to considerable harms.”

“These harms may include the undermining of the foundation of our democracy through the further weakening or outright closure of newspapers, magazines, and digital outlets – especially local ones – the spread of mis- and disinformation, and reduced access to reporting that can fundamentally only be created by humans – based on extensive fact-gathering, interviews, and judgment,” according to the group’s response [PDF].  ®
