
SEC warns it’s ‘nearly unavoidable’ that AI will cause crash

AI In Brief The head of the US Securities and Exchange Commission, Gary Gensler, has warned that the increasing use of AI systems will almost certainly crash financial markets at some point in the coming decade.

Gensler told the Financial Times that, given the current free-for-all over AI development, such a crash was “nearly unavoidable” unless regulators stepped in to control how the technology is used. He said he was talking to other regulators and the government about how to remedy a potentially catastrophic situation.

“I do think we will in the future have a financial crisis . . .[and] in the after action reports people will say ‘Aha! There was either one data aggregator or one model . . . we’ve relied on.’ Maybe it’s in the mortgage market. Maybe it’s in some sector of the equity market,” Gensler said.

It wouldn’t be the first time computers have played merry hell with the financial markets, most notably the 2010 flash crash that briefly wiped out nearly a trillion dollars in value. A British trader was accused of triggering that episode with bogus orders, which set off automated selling of stock before humans stepped in.

To head off the next one, Gensler wants the SEC and other American regulators to take another look at the potential for crash-inducing code. But it’s a new and uncertain area, and he worries that progress may be difficult.

“It’s frankly a hard challenge,” Gensler said. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do.”

Meta upgrades AI Habitat

The boffins at Meta’s Fundamental AI Research (FAIR) labs have upgraded Habitat, software designed to train AI models to move around human environments such as virtual offices, with the aim of helping physical robots do the same.

The Habitat 3.0 code essentially brings the first two versions together into a unified system, adding a robotics module so developers can put the code to work on robotics development.

The first iteration set up a learning model that allowed AI agents to navigate virtual environments, and the second was a dataset of environments and objects that could be recognized and moved around; this latest build adds support for 3D avatars.

“In recent years, the field of embodied AI research has primarily focused on the study of static environments – working under an assumption that objects in an environment remain stationary,” the FAIR team reported.

“However, in a physical environment inhabited by humans, that’s simply not true. Our vision for socially intelligent robots goes beyond the current paradigm by considering dynamic environments.”

For its next trick, Meta says it wants to take the lessons learned from version three and apply them in the physical world. If early testing stays true to the technology's norms, expect a lot of broken crockery. ®
