Leading global AI scientists and policy experts have called for policies to ensure global AI safety.

During the third International Dialogue on AI Safety (IDAIS-Venice), computer scientists including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton joined forces with governance experts such as Tsinghua professor Xue Lan and Johns Hopkins professor Gillian Hadfield to develop policy proposals for global AI safety.

The event, held from 6 to 8 September, was hosted by the Safe AI Forum (SAIF), a project of FAR AI, in collaboration with the Berggruen Institute.

It focused on enforcement mechanisms for the AI development red lines outlined at the previous IDAIS-Beijing event. Participants worked to create concrete proposals to prevent these red lines from being breached and to ensure the safe development of advanced AI systems.

The discussion resulted in a consensus statement outlining three key proposals:

  • Emergency Preparedness: The expert participants underscored the need to be prepared for risks from advanced AI that may emerge at any time. Participants agreed that highly capable AI systems are likely to be developed in the coming decades and could potentially emerge imminently. To address this urgent concern, they proposed international emergency preparedness agreements. Through these agreements, domestic AI safety authorities would convene, collaborate on, and commit to implementing model registration and disclosures, incident reporting, tripwires, and contingency plans. This proposal acknowledges the potential for significant risks from advanced AI to emerge rapidly and unexpectedly, necessitating a coordinated global response.
  • Safety Assurance: To ensure that the agreed-upon red lines are not crossed, the statement advocates for a comprehensive safety assurance framework. Under this framework, domestic AI safety authorities should require developers to present high-confidence safety cases prior to deploying models whose capabilities exceed specified thresholds. Post-deployment monitoring should also be a key component of assurance for highly capable AI systems as they become more widely adopted. Importantly, these safety assurances should be subject to independent audits, adding an extra layer of scrutiny and accountability to the process.
  • Safety and Verification Research: The participants emphasized that the research community needs to develop techniques that would allow states to rigorously verify that AI safety-related claims made by developers, and potentially by other states, are true and valid. To ensure the independence and credibility of this research, they stressed that it should be conducted globally and funded by a wide range of governments and philanthropists. This approach aims to create a robust, unbiased framework for assessing and validating AI safety measures on an international scale.

The International Dialogues on AI Safety (IDAIS) is an initiative that brings together scientists from around the world to collaborate on mitigating the risks of artificial intelligence. This third event was organized in partnership between the Berggruen Institute and the Safe AI Forum, a fiscally sponsored project of FAR.AI.