EU lassos tech giants in bid to rein in the AI Wild West

Analysis As 2023 drew to a close, the year of AI hype was ending as it began. According to figures from PitchBook, Big Tech spent twice as much on deals with generative AI startups as venture capital groups did during the year.

But in December legislators began to rein in how the systems might be developed and deployed. In a provisional agreement, the EU’s Parliament and Council proposed outright bans on some applications and obligations on the developers of AI deemed high risk.

While the EU trumpeted its success in becoming the first jurisdiction to lay down plans for legislation, Big Tech cried foul.

Meta’s chief AI scientist said regulating foundation models was a bad idea as it was effectively regulating research and development. “There is absolutely no reason for it, except for highly speculative and improbable scenarios. Regulating products is fine. But [regulating] R&D is ridiculous,” he wrote on the website formerly known as Twitter.

Legal experts, however, point out that there is much to be decided as the discussions progress, and much will depend on the details of the legislative text yet to be published.

When the Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act, they said they would ban biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race. The untargeted scraping of facial images from the internet or CCTV, emotion recognition in the workplace and educational institutions, and social scoring based on behavior or personal characteristics were also included on the prohibited list.

The proposals place obligations on high-risk systems too, including the duty to carry out a fundamental rights impact assessment. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights. But it is the proposals for general purpose artificial intelligence (GPAI) systems – or foundation models – that have irked the industry.

The EU agreement says developers will need to account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities. They will have to adhere to transparency requirements as initially proposed by Parliament, including drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries about the content used for training.

At the same time, developers will need to conduct more stringent checks on so-called "high-impact GPAI models with systemic risk." The EU said if these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity, and report on their energy efficiency. Until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Nils Rauer, a partner with law firm Pinsent Masons specializing in AI and intellectual property, told us there was broad agreement on the need for legislation. “The fact that there will be an AI regulation is embraced by most affected players in the market. Elon Musk, but also many others see the danger and the benefits at the same time that come with AI, and I think you cannot argue about this: AI needs to be channeled into a prudent framework because if this runs wild, it can be quite dangerous.”

However, he said the different categorization of GPAI models was quite complex. “They started off with this high-risk AI category, and there is whatever is below high risk. When ChatGPT then emerged, they were struggling with whether it was high risk or not. These general AI models that are underlying GPT 4.0, for example, are the most powerful. [The legislators] realized it really depends on where it’s used, whether it’s high risk or not.”

Another application of AI addressed by the proposed laws is real-time biometric identification. The EU plans a ban on the practice, already employed by police in a limited way in the UK, but will allow exceptions. Users – most likely the police or intelligence agencies – will have to apply to a judge or independent authority, but could be allowed to use real-time biometric systems to search for victims of abduction, trafficking, or sexual exploitation. Prevention of a specific and present terrorist threat or the localization or identification of a person suspected of having committed one of a list of specific crimes could also be exempt.

Guillaume Couneson, a partner with law firm Linklaters, said the ban in principle on live biometrics was "quite a strong statement" but the exceptions could potentially be quite broad. "If it's about victim identification, or prevention of threats, does that mean you cannot do it on a continuous basis? Or could you make the argument that, in an airport for example, there's always a security risk, and therefore, you will always apply this kind of technology?

“We won’t know without reading the actual text where they landed on that point. The text may not even be sufficiently clear to determine that, so we might have further discussions and potentially even cases going all the way up to the Court of Justice, eventually,” he told The Reg.

Couneson added that the rules placed on developers of general purpose AI may not be as restrictive as some fear, because there are exceptions for research and development. “To some extent, research around AI would still be possible and without falling under those risk categories. The main challenge will be in the implementation of those high-risk use cases if you’re a company considering [an AI system that would] qualify under one of those listed scenarios. That’s when the rubber hits the road.”

He pointed out that the EU has also discussed introducing “regulatory sandboxes” to foster innovation in AI.

“The use of sandboxes might be a good way to help companies have the proper dialogue with the relevant authorities before launching something on the market. Innovation has come back a lot in the negotiations. It’s not something that was ignored,” he said.

Either way, the industry will have to wait until the EU publishes the full text of the legislative proposal – expected at the end of January or early February – before it knows more details. ®
