Global AI Regulation at a Time of Transformation: The Council of Europe’s Framework Convention on Artificial Intelligence

By
Wade Hoxtell
Abstract
The Framework Convention on AI is the first binding international AI treaty. What do the negotiations and early design of the convention tell us about its likely trajectory, strengths, and weaknesses as a global AI governance mechanism?

Artificial Intelligence (AI) cannot effectively be regulated by national approaches alone (Roberts, Hine, Taddeo, and Floridi 2024). International coordination is necessary to manage the cross-border nature of AI markets and risks, to prevent regulatory fragmentation, and to safeguard common democratic and ethical principles. Yet, the global landscape of AI governance reveals regulatory gaps in both preventing and mitigating the potential harms of AI systems as well as in promoting safe innovation and the development of positive applications. The OECD AI Policy Observatory has documented more than 1,300 national and international policies worldwide, but the vast majority comprise non-binding frameworks rather than enforceable obligations (OECD 2025).

Further, global AI governance has become a site of contestation, reflecting wider geopolitical, economic, and normative divides. Competing approaches emphasise different values, with innovation and security on the one hand and regulation and rights on the other, while multilateral efforts struggle to bridge these divides. As a result, global AI governance has largely remained a fragmented and politically charged regime complex (Roberts et al. 2024), with no binding international agreement.

Against this backdrop, the Council of Europe (CoE) launched a process for negotiating the first binding international treaty on AI in the spring of 2022. Although the CoE is a regional organisation, its conventions are open to accession by non-member states, allowing it to serve as a platform for developing legal standards with global reach. The negotiations brought together CoE member governments, observer states outside of Europe, the European Union, civil society, international organisations, and the private sector. Thus, this process offers insights into different actor positions on AI regulation and highlights some of the key challenges for international collaboration in this area.

After a roughly two-year negotiation period, the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (“Framework Convention”) opened for signature in September 2024, signalling a landmark in global AI regulation. Unlike the soft-law instruments that preceded it, the Framework Convention imposes legal obligations on states that ratify it to ensure AI is developed and used in ways that respect international commitments to human rights and take democratic values into account (Council of Europe 2024a). The treaty reflects both ambition and compromise, with binding commitments for AI systems across their lifecycle, as well as trade-offs made to achieve consensus.


This report focuses on the negotiations that crafted the Framework Convention on AI and, in doing so, provides a unique window into attempts to collaboratively govern a transformative technology. The analysis is guided by the ENSURED project’s conceptual framework, which evaluates global governance institutions through three dimensions: robustness (institutional resilience and adaptability), effectiveness (capacity to deliver on goals), and democracy (inclusiveness, transparency, and accountability) (Choi et al. 2024). Applying this lens to the Framework Convention captures a snapshot of the birth of a new governance mechanism and offers a conceptual basis for assessing its potential trajectory.

This report also analyses the role of different state actors within the negotiations (focusing primarily on the European Commission and the United States, which dominated discussions), as well as civil society and private sector actors. Understanding how the EU and its member states use instruments like the Framework Convention to project standards globally is useful for evaluating both this treaty’s potential impact and Europe’s role in shaping global AI governance more broadly.

This report situates the Framework Convention within the contested landscape of global AI governance, analysing its negotiation, content, and prospects through the ENSURED framework. It finds:

  • While this first binding international AI treaty represents a milestone in multilateral AI governance, key compromises made in the negotiation process – including exemptions for private sector regulation and national security uses of AI – will likely weaken the Framework Convention’s effectiveness.
  • Compromises allowed for a more robust treaty by prioritising its global accessibility and adaptiveness to new technological developments.
  • The limited nature of non-state actor participation, the absence of other major global AI actors such as China, and the relatively narrow range of like-minded state actors raise questions about the democratic inclusiveness and legitimacy of the treaty.
  • The future value and impact of the treaty will hinge upon whether ratifications will extend beyond Europe, how states will implement its principles domestically, and whether follow-up mechanisms will succeed in promoting accountability and deepening participation.

Citation Recommendation:

Hoxtell, Wade. 2025. “Global AI Regulation at a Time of Transformation: The Council of Europe’s Framework Convention on Artificial Intelligence.” ENSURED Research Report, no. 20 (November): 1–37. https://www.ensuredeurope.eu.

Photo Credit: Alina Grubnyak/Unsplash
For more, read the full report on the Framework Convention on AI.