The AI Summit Series: What Should Its Niche Be?

To succeed in a crowded international governance landscape, the AI Summit Series needs a clearly defined niche. Its defining traits should be that it: (a) invites companies and civil society to participate, (b) focuses mainly on advanced AI, and (c) closely aligns its agenda with the work of AI safety institutes.

Lucia Velasco

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Introduction

The Paris AI Action Summit, to be held in February 2025, will be the third iteration of a new international summit series, following 2023’s AI Safety Summit in Bletchley and 2024’s AI Seoul Summit. The first two summits produced several striking outcomes, including: the first international declaration on the importance of ensuring the safety of advanced AI, voluntary safety commitments from leading AI companies, the commissioning of the first International Scientific Report on the Safety of Advanced AI, and the launch of several national AI safety institutes.

The path forward after the February 2025 summit, however, is unclear. There are not yet any public plans for the series after Paris. Many questions about its future remain unanswered.

One crucial question is: what should the summit series' distinct focus, or niche, be? The international AI governance landscape is already crowded, with many forums competing for engagement. The long-term success of the series will therefore depend on its carving out a niche: establishing a distinct identity and offering key stakeholders unique value for their engagement. Clarity on the specific need the series will fulfil will also make it easier for participants to focus their attention, and easier for each summit's organisers to craft agendas that progress coherently toward long-term goals. A more clearly defined niche would also enable the series to feed more neatly into broader processes, potentially including those within the United Nations.

Across the first three summits, however, the niche of the series has become less clear. The scope has expanded from a specific focus on the safety of advanced AI systems to a broader set of topics. While this expansion has facilitated more holistic conversations, it has also created more overlap with other forums and limited continuity across summits.

Ultimately, I will argue, the most compelling niche for the summit series is a multistakeholder forum that focuses mainly on "advanced AI" 1, with an agenda closely aligned with the work of AI safety institutes. This niche can be institutionalised by developing a strategic framework to guide future summits, ensuring long-term coherence and clearly distinguishing the series from other international forums.

The Need to Define a Niche

The scope of the summit series’ content has expanded significantly across the three summits. The initial summit (the “AI Safety Summit”) focused purely on the safety of advanced AI systems. The second summit (the “AI Seoul Summit”) expanded the initial summit’s focus on advanced AI safety by also introducing innovation and inclusivity as core themes. The third summit (the “AI Action Summit”) will include five expansive workstreams — covering “public interest AI”, “future of work”, “innovation and ecosystems”, “AI in trust” (including safety topics as a subset), and “global governance of AI” — and will cover both advanced AI and other kinds of AI systems. 

This trend towards expanding the breadth of topics offers some obvious benefits. It will bring additional consideration and discussion to several important issues, beyond what they receive in other forums. It may also foster more “holistic” discussions, which acknowledge the intersections between different topics in AI governance.

Crucially, however, this trend towards breadth has begun to blur the series' distinct identity among other international AI governance forums. A more distinct identity would encourage and enable more sustainable and meaningful participation from key players in AI. These players are already expected to participate in numerous high-level events annually, including the G7 summit, the G20 summit, the Global Partnership on AI-OECD summit, the UNESCO Ethics of AI summit, the annual meeting of the World Economic Forum, and the ITU AI for Good summit. The highest-level representatives of participating countries and companies cannot attend every AI-related event and must inevitably prioritise. If there is no clear and compelling story about what the AI Summit Series offers that these existing summits do not, then high-level representatives simply will not prioritise it.

Furthermore, as the focus of summits grows broader, it will become more difficult to make progress within summits. The top representatives from different countries and companies will not, in practice, have the capacity to engage deeply with several different workstreams in a single summit; either their attention will be spread shallowly across several workstreams or some workstreams will be neglected.

Growing breadth will also make it harder for host countries, especially less well-resourced ones, to build successfully on each other's work. A clear and stable niche would make it easier for hosts to design agendas that build on past summits in pursuit of consistent long-term goals, and to dedicate sufficient resources to all parts of these agendas. A clear niche would also prevent declarations and commitments on disparate topics from accumulating across summits; such accumulation could make it challenging to keep track of agreed-upon goals and initiatives.

Ultimately, the summit series will face three key challenges as it moves forward:

  1. Offering a unique value proposition to participants by addressing specific gaps in the landscape of international AI governance initiatives
  2. Supporting productive conversations, by maintaining focus
  3. Enabling momentum and high execution quality, by maintaining consistency across events

Overcoming these challenges will require careful coordination with existing initiatives to ensure complementary contributions to international efforts. 

The organisers of the next summit should respond to these challenges by agreeing on a compelling, unified vision for the series, codified in a strategic framework. This framework would outline themes and objectives for the series, creating a natural continuation strategy and providing a clear mandate and structure. It would enable each summit to build on the progress of previous summits, while remaining flexible enough to adapt to new developments in AI.

Proposing a Distinct Niche for the Series

The most natural niche for the summit series is a multistakeholder forum that focuses mainly on advanced AI, with an agenda that is closely aligned with the work of safety institutes. This focus would clearly distinguish the series from existing initiatives and address the three key challenges listed above.

The following discussion considers each component of this niche individually.

Embracing a Multistakeholder Approach

Many other international AI governance forums involve industry and civil society participants only in limited ways. In contrast, the AI Summit Series has offered more significant roles to companies and to civil society organisations. For example, one of the key outcomes of the AI Seoul Summit was a set of voluntary safety commitments made by leading AI companies worldwide. These commitments are an example of a valuable outcome that simply could not have occurred in a more state-focused venue, such as the G7 or G20.

The AI Summit Series could also capitalise on its successful multistakeholder approach by, for example, giving equal representation to different categories of stakeholders on joint planning committees and working groups. This would introduce a collaborative approach to the summit process, from the initial concept stage to final recommendations. While governments would retain final decision-making authority, this model would provide a structured forum for industry and academic insights to directly inform policy development.

Focusing Primarily on Advanced AI

While many international forums address AI broadly, no major forum other than the AI Summit Series has focused primarily on "advanced AI" (defined as general-purpose AI systems that exceed or approximately match the capabilities of the most powerful systems available). The topic is also complex and policy-relevant enough to warrant focused attention within at least one forum. Advanced AI may present relatively distinct risks and opportunities, and may warrant distinct measures to manage those risks. For this reason, several high-profile regulatory efforts, including the US Executive Order on AI and the EU AI Act, identify advanced AI as a distinct regulatory category. There is clearly a growing demand for focused discussions on advanced AI, which other major forums are not yet providing.

Most of the successes of the first two summits pertain primarily to advanced AI. Even if future summits do have somewhat broader scopes, it should be a central priority to build upon the advanced-AI-focused successes of past summits.

Aligning with the Efforts of AI Safety Institutes

The creation of AI safety institutes is widely regarded as a product of the AI Summit Series: the first institutes were announced ahead of the initial AI Safety Summit, and plans for an international network of AI safety institutes were announced at the AI Seoul Summit.

This close connection to the emerging AI safety institute network is a distinct asset for the AI Summit Series, one that can inform strategies for future progress. First, the summit can draw on the expertise housed within AI safety institutes to support informed international discussions. Second, the summit can serve as a coordination hub for national AI safety institutes. For example, it could help these institutes establish a shared research agenda on pressing AI safety challenges, facilitate data-sharing agreements, or develop common evaluation methods and benchmarks for AI systems. The summit could also support more ambitious projects that the institutes may contribute to or lead in the future, such as efforts to create a shared framework for rapid response to emerging AI risks or to coordinate safety certification processes across participating countries more closely. By supporting coordination between safety institutes, while also drawing on and spreading their expertise internationally, the summit series can contribute to a more unified and effective global approach to AI safety.

Support Structures and Norms

For the summit series to maintain a clear niche while retaining the flexibility to evolve in response to a changing AI landscape, it will be useful to institute a number of structures and norms.

As discussed above, consistency across summits could be supported by the creation of a high-level strategic framework. This framework could outline key objectives and themes for the series on a multiyear timeframe and help organisers to prioritise when setting summit agendas. The strategic framework should be updated regularly to reflect evolving priorities and the evolving international governance landscape, but should not change substantially in most years.

The creation of joint working groups that carry over from one summit to the next could also help to ensure consistency. These working groups would support continuity in the series' themes, although there would need to be room for groups to be added, removed, or merged over time.

It will also be important to establish structures and norms that keep the series aligned with other international processes, even as those processes evolve. Because other processes are constantly evolving, the summit series' niche will need to evolve over time as well, and work must be done to ensure the series continues to complement these processes rather than compete with them. It will be important, for example, to align with the Global Digital Compact, a comprehensive framework adopted by 193 countries at the United Nations and the first global framework for AI governance.

There are three main ways to ensure alignment:

  1. Invite representatives from other initiatives to participate (as has been done in the past)
  2. Establish formal partnerships with other initiatives and create structured channels to contribute effectively
  3. Design summit agendas that explicitly build on and complement the work of other initiatives, in consultation with them

Together, these structures and norms could help to ensure that the summit series continues to have a clear niche, while also continuing to evolve to meet new challenges and opportunities.

Conclusion

As the Paris AI Action Summit concludes the three-summit cycle initiated at Bletchley, the series' future remains uncertain, yet full of potential. The international AI governance landscape is crowded, with numerous forums competing for attention and engagement. It is likely to become more crowded over time, as reflected, for example, by the recently adopted UN Global Digital Compact's comprehensive plans for a global dialogue on AI governance. Stakeholders have limited capacity to engage with AI-related events and need to clearly understand why this series is worth their time. The series will struggle, therefore, if it does not make its niche clear.

We can learn from the successes of the first two summits, which yielded several concrete outcomes that influenced national agendas and company priorities. These successes suggest a particular niche: the summit series should be a multistakeholder forum dedicated to advanced AI, closely aligned with the efforts of safety institutes.

The challenge beyond Paris is to formalise this niche by developing a strategic framework with a long-term vision and concrete goals for the next three to five years. This approach would position the summit series as a cornerstone of international AI governance, capable of driving meaningful progress in a rapidly changing field.

Lucia Velasco

The views expressed in this article are those of the author and do not represent the views of their employer. The author would like to thank Ben Garfinkel for his valuable feedback.

Footnotes

1 - When using “advanced AI” or “frontier AI”, I refer to general-purpose AI systems that exceed or approximately match the capabilities present in the most capable general-purpose AI systems. This definition is in line with categories established by (e.g.) the US “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; the EU AI Act; and the Frontier AI Safety Commitments from the AI Seoul Summit.

Further reading