ChatGPT starts spouting nonsense words in overnight shocker

Sometimes generative AI systems spout gibberish, as users of OpenAI’s ChatGPT chatbot discovered last night.

At 2340 UTC on February 20, 2024, OpenAI acknowledged, “We are investigating reports of unexpected responses from ChatGPT,” as users gleefully posted screenshots of the chatbot appearing to emit utter nonsense.

While some were obviously fake, other responses indicated that the popular chatbot was indeed behaving very strangely. On the ChatGPT forum on Reddit, a user posted a strange, rambling response from the chatbot to the question, “What is a computer?”

The response began: “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few…” and just kept on going, getting increasingly surreal.

Other users posted examples where the chatbot appeared to respond in a different language, or simply responded with meaningless garbage.

Some users described the output as a “word salad.”

Tasty.

Gary Marcus, a cognitive scientist and artificial intelligence pundit, wrote in his blog: “ChatGPT has gone berserk” and went on to describe the behavior as “a warning.”

OpenAI has not elaborated on what exactly happened, although one plausible theory is that one or more of the settings used behind the scenes to govern the chatbot’s responses, such as the sampling temperature, was misconfigured, resulting in gibberish being served to users.
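OpenAI hasn’t confirmed any specifics, but the sampling temperature is the textbook example of such a setting. Language models pick each word by sampling from a probability distribution over tokens, and the temperature controls how sharp that distribution is. The following Python sketch, using a toy vocabulary and made-up logits rather than anything from OpenAI’s actual stack, shows how cranking the temperature flattens the distribution until the model’s strong preference all but vanishes:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample a token index from model logits using temperature scaling.

    Dividing the logits by the temperature before the softmax controls
    randomness: values near 0 make sampling nearly greedy, while large
    values flatten the distribution toward uniform, so low-probability
    tokens start getting picked.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary and logits: the model strongly prefers "computer".
vocab = ["computer", "mouse", "art", "country", "draw"]
logits = [5.0, 1.0, 0.5, 0.2, 0.1]

for temp in (0.7, 50.0):
    picks = [vocab[sample_token(logits, temp)] for _ in range(10)]
    print(f"temperature={temp}: {picks}")
```

At a temperature of 0.7, nearly every draw comes back “computer”; at 50, the picks are effectively random, which is roughly what word salad looks like at the token level.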

Seven minutes after first admitting a problem, OpenAI said, “The issue has been identified and is being remediated now,” and it has since been monitoring the situation. When we tried the “What is a computer?” question this morning, ChatGPT responded with a far more reasonable “A computer is a programmable electronic device that can store, retrieve, and process data.”

We also asked it why it went berserk last night.

It responded:

As an AI language model, I don’t have emotions or consciousness, so I can’t go “berserk” in the way a human might. However, sometimes unexpected behavior can occur due to errors in programming, misunderstanding of input, or limitations in the training data.

Marcus opined: “In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts… hoping that everything will work out right.”

He went on to state that, in reality, the systems have never been stable, and lack safety guarantees. “The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable — and hence more tractable — remains paramount.”

We contacted OpenAI for a more detailed explanation of what happened and will update this article should the company respond. ®
