
UK and US to jointly develop AI test suites to tackle risks

The US and UK governments will collaborate on test suites to promote safety in the fast-moving world of AI development.

The Memorandum of Understanding (MoU), signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, is effective immediately and intended to ensure the two nations are aligned in their scientific approaches to creating robust test suites for AI models, systems, and agents.

The MoU comes after the 2023 global AI Safety Summit in Bletchley Park, England, where many hands were wrung over the threats posed by AI and the need to mitigate the risks associated with the technology.

The UK and US AI Safety Institutes have laid out proposals to develop a common approach to AI safety testing. In addition to information sharing, plans are afoot for at least one joint exercise on a publicly accessible model.

The institutes will also look into personnel exchanges to foster information sharing and closer collaboration.

Both Raimondo and Donelan uttered the words “special relationship” when describing the agreement’s benefits, and both described AI as “the defining technology of our generation.”

Donelan said: “Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”

Noble stuff, yet there is a genuine risk of authorities being left behind. The European Union, for example, has already enacted legislation designed specifically to address the risks of artificial intelligence.

Ekaterina Almasque, General Partner at venture capital firm OpenOcean, described the MoU as “a significant stride forward for AI startups” and highlighted the challenges faced when navigating the complex landscape of safety and ethics.

“The collaboration between the UK and US has laid the foundation, and it is now up to startups and investors to build upon this partnership and ensure that the potential of AI is realized responsibly,” she said.

Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai, also welcomed the initiative, but warned: “For efforts like these to be effective globally, they must include the full range of stakeholder voices at the cutting edge of AI development and deployment, particularly those in Europe.

“The EU AI Act has already established the bloc as a pacesetter in pragmatic AI governance that balances innovation and risk mitigation.

“Promoting a unified scientific voice necessitates an open process that brings all key players to the table as equals. Failure to integrate the spectrum of stakeholders raises the risk of fragmented approaches taking hold across major regions.

“Europe’s vibrant ecosystem, from pioneering startups to industrial giants, offers a wealth of empirical learnings and risk assessments that should inform international AI safety standards and testing regimes.” ®
