Over a dozen big tech companies, including Google, Samsung and Zhipu.ai, have committed to new transparency guidelines for frontier AI models at the opening of the AI Seoul Summit. Under the so-called ‘Frontier AI Safety Commitments’, the firms will publish frameworks setting out the risks posed by any new and powerful AI models they develop, and will halt research and development on any AI products whose risks cannot be mitigated.
“I am pleased to see leading AI companies from around the world sign up to the Frontier AI Safety Commitments,” said Professor Yoshua Bengio, an AI ‘godfather’ and author of a report on leading AI safety challenges published ahead of the Seoul Summit. “This voluntary commitment will obviously have to be accompanied by other regulatory measures, but it nonetheless marks an important step forward in establishing an international governance regime to promote AI safety.”
Frontier AI Safety Commitments to define ‘severe risks’
Companies from around the world have signed up to the new Frontier AI Safety Commitments, including smaller AI players such as France’s Mistral AI, South Korea’s Naver, the United Arab Emirates’ Technology Innovation Institute and China’s Zhipu.ai. Each firm’s framework will also define the ‘severe risks’ associated with individual models, and set out how corporate users further down the value chain can ensure that the threshold for safe use of a given product is not exceeded.
A common and precise definition of these thresholds has yet to be agreed. However, signatories to the Frontier AI Safety Commitments have pledged to take input from outside parties, such as governments and independent AI watchdogs, when formulating them. The full definitions are expected to be released ahead of the AI Action Summit, to be held in France early next year.
Seoul Summit to build on Bletchley Park work
The commitments build on previous deals agreed at the first global AI summit, held at Bletchley Park in the UK last year. That gathering of world leaders and tech executives led to a broad-based commitment from 28 nations (including the UK, China and the US) to work together on managing the risks to society and the wider economy associated with powerful new AI models. Since then, multiple countries have ploughed ahead with new initiatives exploring how the risks from advanced AI might be contained, including the UK, which announced the creation of its own AI Safety Institute in October 2023.
The Carnegie Endowment for International Peace’s president, Tino Cuéllar, welcomed big tech’s commitment to upholding the new AI risk frameworks. Such efforts, said Cuéllar, “will play a central role in strengthening effective governance and helping countries strike a sensible balance between innovation and safety.”