What Success Looks Like for the French AI Action Summit
Success at next month’s global AI summit in Paris hinges on three key outcomes: 1) ensuring the AI Summit Series' future, 2) securing senior US and Chinese engagement, and 3) demonstrating continued progress on key safety commitments.
Claire Dennis, Ben Clifford, Ben Garfinkel, Markus Anderljung, Robert Trager
GovAI research blog posts represent the views of their authors rather than the views of the organisation.
Introduction
The French AI Action Summit will be held next month in Paris. This will be the third installment in the AI Summit Series, following previous meetings in Bletchley and Seoul. The Series has achieved remarkable progress in a short time, securing safety commitments from leading AI companies, launching the International AI Safety Report, and publishing a declaration on safe and inclusive AI signed by nearly 30 countries and the European Union.
Now the focus should shift to grounding these initial successes in more durable global frameworks. To be considered a success, the French AI Action Summit should aim to:
- Ensure the future of the AI Summit Series itself. The Series fills a critical gap in global AI governance, delivering concrete progress on AI policy coordination in a way no other international forum has achieved. However, early momentum alone will not sustain it. To ensure its long-term impact, the Series likely requires clearer structures, including an established cadence, a focused scope and agenda, and a more transparent process for naming future hosts.
- Secure senior-level engagement from both the US and China. The AI Summit Series is also the only AI forum that has involved both the US and China from its inception. Its success depends on the continued participation of the world’s leading AI powers. In Paris, it will be important to re-secure commitments from both nations, particularly given China’s absence from the Seoul Declaration and the shifting priorities of the incoming US administration.
- Track progress on past commitments and set new targets. The credibility of the Summit Series hinges on fulfilling the core safety commitments participants made in Bletchley and Seoul. Progress updates are particularly needed for three core initiatives: the Frontier AI Safety Frameworks, the International AI Safety Report, and the Network of AISIs. At the same time, in Paris, participants must also remain forward-looking and ensure these commitments remain relevant and ambitious.
Context for the French Summit

The French AI Action Summit will build on the successes of Bletchley and Seoul while expanding the scope of the Series.
The 2023 UK AI Safety Summit focused primarily on safety and launched foundational initiatives, such as the International AI Safety Report. In 2024, the AI Seoul Summit expanded the scope somewhat, making inclusivity and innovation priorities in addition to safety, but kept the format to a one-day event involving about 30 governments.
The 2025 French Summit will be the largest yet, marking a shift in two ways:
- Expanded scope: There will be five tracks: 1) public interest AI, 2) future of work, 3) innovation and culture, 4) trust in AI, and 5) global governance of AI. There will be two official summit days, plus two “Science Days”, a “Cultural Weekend”, and dozens of co-sponsored side events on a wide range of topics throughout the week.
- Broader attendance: About 90 countries have been invited to the French Summit, along with nearly 1,000 attendees, including heads of state and government, CEOs, think tank leaders, civil society organizations, research institutes, and artists.
Importantly, the AI safety work established as a priority in Bletchley and Seoul remains a core component of the French Summit within the “Trust” track. However, the broader impacts of this expanded scope and attendance remain uncertain. On one hand, it has the potential to bring greater international awareness and engagement to the Series – for example, by bringing emerging AI powers like India to the table. The expanded agenda has also created space for critical conversations beyond safety, such as ensuring AI’s economic and societal benefits are shared globally and leveraging AI effectively in the public domain. On the other hand, a broader scope also risks diluting the original focus on AI safety and detracting from the action-oriented approach to safety commitments that defined the earlier summits. Different stakeholders will inevitably view these changes through the lens of their own strategic priorities.
Core Objectives for the French Summit
The summit’s success can be judged by progress on the three objectives introduced above: 1) securing a clear future for the Summit Series; 2) re-engaging the US and China in Paris; and 3) upholding accountability for past commitments while setting new targets for future action.

1. Ensure the Future of the Summit Series
The global AI Summit Series fills a critical gap in the international system. It is a unique venue for coordinating policy action on AI safety, bringing together major powers — including both the US and China — in ways that other forums are politically or institutionally unable to achieve. The Series’ focus has allowed it to make swift progress on complex AI policy questions, potentially saving years compared to traditional institution-building processes. Securing the Series’ future is, therefore, perhaps the most crucial goal for Paris.
What makes these summits uniquely impactful is the presence of both industry leaders — those developing the technology — and government representatives. At Bletchley, this collaboration was productive. For the first time, a coalition of 29 countries1, including China, collectively acknowledged the “particular safety risks [which] arise at the frontier of AI” and affirmed their commitment to “building respective risk-based policies” nationally and internationally. In Seoul, companies also agreed to grant access for pre-deployment safety testing of frontier models, and 16 companies signed the Frontier AI Safety Commitments (FAISCs). The International AI Safety Report process was launched at Bletchley to provide a scientific foundation for policy discussions, and the interim version of the report was published at Seoul, marking the first major international reporting of this kind on advanced AI’s risks. The world’s first national AI Safety Institutes were established around Bletchley, now forming the 10-member Network of AI Safety Institutes2. This has helped clarify the role of governments in testing and evaluating new AI models, which is especially critical as their safety is increasingly perceived as a national security issue.
However, while the Summit Series’ early successes have laid a strong foundation, they alone will not sustain its progress. To ensure the Series continues to have a meaningful impact, several structural and institutional elements need to be addressed. In Paris, it could be important for participants to reach agreement on:
- Focus and Agenda Setting: Given the crowded AI governance landscape, the Summit Series needs to have a clear scope that lets it complement, rather than compete with, other initiatives. Advanced AI trust and safety is a natural choice given that no other international forum exclusively addresses this area.
- Rotating Host Selection: A more transparent method for choosing future host countries could be discussed. It may be important for the host to be a country that houses companies at the AI frontier. However, this criterion raises issues of representation, since it excludes most countries. A co-hosting structure could resolve this dilemma, with each summit run by both a leading AI state and another country.
- Timing: The Summit Series would benefit from a clearer, consistent schedule. The current interval of 6-to-9 months between summits may make sense, given rapid advancement in AI capabilities. Maintaining this pace is challenging, however, since summit attendees are ideally quite senior (up to and including heads of state, for example). For this reason, hosts likely need to be selected two summits ahead of time, as was done at Bletchley to line up Seoul and Paris. At the time of writing, the next host has not yet been announced, though India’s status as co-chair could signal it as the successor.
Returning from Paris with a clear understanding of how future summits will be organized and what topics they will address could inspire renewed optimism about the potential for upcoming summits to replicate the progress achieved in Bletchley and Seoul.
2. Secure Senior-Level Engagement from Both the US and China
The success of the AI Summit Series hinges on the participation of the world’s leading AI powers: the United States and China. Their engagement is critical to transforming political discussions into impactful outcomes at the necessary scale. Without their involvement, the Series risks becoming a largely empty exercise in an already saturated global governance landscape.
However, the French Summit will occur amid a high degree of uncertainty regarding both nations’ participation. How the Trump administration will engage, if at all, on the international stage and on AI safety concerns is unclear. Meanwhile, China did not join other countries in signing on to the joint declaration at the Seoul Summit.
Securing Continued US Leadership
In Paris, efforts to reaffirm the US’s important role in the Summit Series will be valuable. Since the US is home to the majority of leading AI companies, its energetic participation is critical to the summit’s credibility and potential for impact. The incoming presidential administration’s rhetoric on AI is shifting the conversation toward national security-focused frameworks and away from safety and social impact considerations. Centering security as a core agenda item, alongside safety and other critical issues, will likely be vital to maintaining US leadership in this forum moving forward.
Re-engaging China in Global Commitments
At the same time, Paris is a critical moment to re-engage China in discussions about AI safety and security. As the world's second-largest AI power, China's involvement is necessary for international cooperation on mitigating cross-border risks from AI. While China’s signature on the Bletchley Declaration was a milestone, its absence from the Seoul Declaration underscored the challenges of maintaining its commitment in the Series. Paris presents a critical opportunity to re-engage China in the summit commitments moving forward.
Importantly, the AI Summit Series is currently the only international forum for AI besides the UN where China considers itself a founding member. Other international fora have limited engagement with China. Though China is occasionally invited to G7 and OECD meetings, it does not view these Western-led institutions as legitimate forums for global governance discussions. And while the UN successfully brings China together with other countries for dialogue and norm-setting, it isn't well-suited for hammering out detailed technical agreements between major powers. This Summit Series appears to be the only current forum for meaningful coordinated action among the world's leading AI powers.
There are encouraging signs. China has signalled concern for safety through its domestic safety initiatives and support for AI safety resolutions at the UN. In Paris, it will hopefully build on this momentum by:
- Sending senior Chinese government ministers, prominent academics, and industry leaders to participate
- Encouraging major Chinese companies3 (e.g., Baidu, Alibaba, ByteDance) to endorse the Frontier AI Safety Commitments from Seoul and publish their own safety frameworks
In Paris, attendees could respond to such steps with encouragement, and strengthen their commitment to working with China on certain issues to reduce global threats from AI. This commitment may include working with Chinese attendees to find areas of mutual concern, develop shared commitments and policies, and provide space for Chinese attendees to contribute to agenda-setting and other discussions.
A Fine Diplomatic Balance
Striking a balance between US and Chinese priorities is perhaps the summit’s greatest diplomatic challenge. With only three weeks to go and a new US administration taking office, securing high-level participation from both nations without jeopardizing the involvement of either is critical.
France has already taken substantial steps to ensure China’s engagement. Presidents Macron and Xi issued a joint declaration in May 2024, committing to strengthen global AI governance and confirming China’s participation in the French Summit. The Summit leadership team, including Anne Bouverot, also engaged directly with Chinese officials and academics during a visit to Beijing in November 2024. These efforts reflect a nuanced understanding of China’s complex bureaucratic and diplomatic processes, which require longer lead times than those of many other nations.
With these foundations in place, in Paris the focus can turn to securing tangible outcomes. Narrowing discussions to a set of achievable commitments could help ensure even minimal progress continues while laying the groundwork for more ambitious goals in the future.
3. Progress and Accountability
Finally, a successful summit will strengthen accountability mechanisms for both past and future commitments. The Summit Series was founded on a set of core commitments. In order to maintain the Series’ credibility, accountability for prior commitments must be balanced with launching more ambitious initiatives. Ultimately, new intentions mean nothing without action.
In fact, simply locking in previous commitments may be more important than further strengthening them. Future summits can create space for more ambitious commitments that aren't currently politically feasible, but demanding too much too quickly risks undermining sustained participation – particularly during the transition to a new White House.
With these considerations in mind, in Paris progress updates (and ideally new targets) in the following key areas would be valuable:
- Frontier AI Safety Commitments (FAISCs):
- Progress: The Frontier AI Safety Commitments, and voluntary commitments from companies more broadly, have been a primary output and focus of the summit process to date. At Seoul, 16 leading AI companies committed to publishing comprehensive safety frameworks in time to present at the French Summit. In November, the Frontier AI Safety Commitments conference — a "Road to France" event co-hosted by the UK AISI and GovAI — provided an additional checkpoint on this initiative, with companies sharing detailed updates on their safety frameworks. The AI Action Summit offers a good opportunity for countries to demonstrate continued progress on this work on the international stage.
- Targets: Make the Summit Series the definitive global forum for tracking progress on frontier AI safety commitments. Secure additional company signatories, particularly from China, and allow companies to update summit attendees on their safety frameworks. This creates a powerful mechanism for regular public scrutiny, keeping companies accountable for delivering on their safety commitments.
- International AI Safety Report:
- Progress: The final version of the International AI Safety Report will likely be published and presented in France as planned. The Summit could continue to promote rigorous, science-driven discussions on AI safety and risks, steering conversations toward evidence-based assessments that inform top policymakers on the current "state of the science."
- Target: Renewed commitment among participating countries to sustain the Report in its current format, ensuring regular, independent, and inclusive scientific reporting on frontier AI capabilities and risks.
- Network of AISIs:
- Progress: It would be valuable for representatives from the AISIs to present their latest research findings and evaluation results, including from joint testing initiatives among Network members.
- Target: Announce new collaborative Network research projects and plans for the next Network convening, and establish a commitment to more regular public reporting on its findings.
Conclusion
The French AI Action Summit comes at a pivotal moment for global AI governance. In retrospect, its success will likely be judged by three key outcomes:
1) Ensuring the AI Summit Series' continued future,
2) Securing senior US and Chinese engagement while navigating the transition to a new White House, and
3) Demonstrating concrete progress on prior commitments while laying groundwork for future (and more ambitious) work.
Achieving all three is an ambitious, but achievable, vision for the meeting in Paris. The Bletchley and Seoul Summits made concrete progress on international AI governance while involving diverse stakeholders. The French Summit will be expected to continue this promising track record, and the decisions made there will play a decisive role in determining whether the AI Summit Series can become a durable platform for global cooperation on shared risks and opportunities from AI.
Footnotes
1 - The countries represented at Bletchley were: Australia, Brazil, Canada, Chile, China, European Union, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Kingdom of Saudi Arabia, Netherlands, Nigeria, The Philippines, Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Türkiye, Ukraine, United Arab Emirates, United Kingdom of Great Britain and Northern Ireland, and United States of America.
2 - Founding members include the United States, United Kingdom, European Union, Japan, Singapore, South Korea, Canada, France, Kenya, and Australia.
3 - Zhipu AI was the only Chinese company to sign on in Seoul.