
Looking ahead to the AI Seoul Summit


How summits in Seoul, France and beyond can galvanize international cooperation on frontier AI safety

Last year, the UK Government hosted the first major global Summit on frontier AI safety at Bletchley Park. It focused the world’s attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration; new AI Safety Institutes; and the International Scientific Report on Advanced AI Safety.

Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week’s AI Seoul Summit. We share below some thoughts on how the summit – and future ones – can drive progress towards a common, global approach to frontier AI safety.

AI capabilities have continued to advance at a rapid pace

Since Bletchley, there has been strong innovation and progress across the entire field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all life’s molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We’ve also been working to improve how our models perceive, reason and interact, and recently shared our progress in building the future of AI assistants with Project Astra.

This progress on AI capabilities promises to improve many people’s lives, but it also raises novel questions that need to be tackled collaboratively in a number of key safety domains. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we’ve shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation, and self-reasoning. We also released an in-depth exploration of aligning future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.

This work is driven by our conviction that we need to innovate on safety and governance as fast as we innovate on capabilities – and that both things must be done in tandem, continuously informing and strengthening each other.

Building international consensus on frontier AI risks

Maximizing the benefits from advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for new risks beyond those posed by present-day models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically grounded view.

That’s why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit – and we look forward to submitting evidence from our research later this year. Over time, this type of effort could become a central input to the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.

We believe these AI summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach to governance. Keeping a unique focus on frontier safety will also ensure these convenings are complementary and not duplicative of other international governance efforts.

Establishing best practices in evaluations and a coherent governance framework

Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.
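To make this concrete, here is a minimal sketch in Python of the general shape such an evaluation pipeline can take: a benchmark of tasks, a grading function, and an aggregate score that feeds a risk assessment. It is an illustration under our own assumptions, not any lab’s actual tooling, and every name in it (`run_evaluation`, `grader`, `capability_score`) is hypothetical.

```python
# Minimal sketch of an evaluation harness. All names are hypothetical
# placeholders; this illustrates the shape of such pipelines, not any
# specific lab's tooling.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    task_id: str
    passed: bool

def run_evaluation(
    model: Callable[[str], str],          # maps a prompt to a model response
    tasks: dict[str, str],                # task_id -> prompt
    grader: Callable[[str, str], bool],   # (task_id, response) -> pass/fail
) -> list[EvalResult]:
    """Run each benchmark task through the model and grade the response."""
    results = []
    for task_id, prompt in tasks.items():
        response = model(prompt)
        results.append(EvalResult(task_id, grader(task_id, response)))
    return results

def capability_score(results: list[EvalResult]) -> float:
    """Fraction of tasks passed: one input to a risk assessment,
    not a verdict on its own."""
    return sum(r.passed for r in results) / max(len(results), 1)
```

The point of the sketch is the pipeline structure: measured capability feeds into risk assessment and mitigation design, which is why shared, standardized benchmarks matter so much for coordinated governance.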

This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with AI Safety Institutes in the US and UK and other stakeholders on best practices for evaluating frontier models. The AI summits could help scale this work internationally and help avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It’s critical that we avoid fragmentation that could inadvertently harm safety or innovation.

The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We think there is an opportunity over time to build on this towards a common, global approach. An initial priority from the Seoul Summit could be to agree on a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.

It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
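As a rough illustration of what such a protocol can look like mechanically, here is a minimal sketch in Python of threshold-based risk management. The capability names, threshold values and mitigations below are invented placeholders; they are not taken from the Frontier Safety Framework or any other published document.

```python
# Minimal sketch of threshold-based risk management of the kind described
# above. Capability names, thresholds and mitigations are illustrative
# placeholders, not values from any published framework.
from dataclasses import dataclass

@dataclass
class CapabilityThreshold:
    capability: str         # e.g. "cyber-offense" (hypothetical label)
    alert_level: float      # evaluation score at which mitigations apply
    mitigations: list[str]  # pre-agreed actions once the level is reached

THRESHOLDS = [
    CapabilityThreshold("cyber-offense", 0.6,
                        ["restrict access", "notify reviewers"]),
    CapabilityThreshold("self-proliferation", 0.4,
                        ["pause deployment"]),
]

def required_mitigations(scores: dict[str, float]) -> list[str]:
    """Collect mitigations for every capability whose evaluation score
    meets or exceeds its pre-agreed alert level."""
    actions = []
    for t in THRESHOLDS:
        if scores.get(t.capability, 0.0) >= t.alert_level:
            actions.extend(t.mitigations)
    return actions

print(required_mitigations({"cyber-offense": 0.7}))
# -> ['restrict access', 'notify reviewers']
```

The design point is that mitigations are decided in advance and tied to measurable capability levels, so the response to a concerning evaluation result is the execution of an agreed protocol rather than an improvised decision.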

Towards a global approach for frontier AI safety

Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we’re excited for the opportunity to advance global cooperation on frontier AI safety. It’s our hope that these summits will provide a dedicated forum for progress towards a common, global approach. Getting this right is a critical step towards unlocking the tremendous benefits of AI for society.
