Anchoring Global AI Governance: How the EU Can Leverage the Council of Europe’s Framework Convention on Artificial Intelligence

By
Wade Hoxtell
Abstract
As AI evolves faster than regulation, the Council of Europe’s new treaty offers a rare chance to anchor technology in democratic values. How can the EU turn this promise into global leadership on trustworthy, rights-based AI?

Artificial Intelligence (AI) technologies are becoming increasingly integrated into our daily lives. AI tools shape how we work, communicate, find information, and make decisions in ways that often go unseen, but profoundly affect individuals and societies. Yet the speed and scale of AI’s development consistently outpace the ability of national governments to regulate them. Further, like other digital technologies, AI systems operate across borders: national approaches alone risk creating regulatory fragmentation and uneven protections against potential harm.

The Council of Europe’s (CoE) Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law – adopted in May 2024 and opened for signature a few months later in September 2024 – represents an important move from voluntary principles to enforceable global standards for AI. While imperfect, it offers a solid foundation for aligning AI innovation with human rights and democratic values. It is also a rare example of international cooperation delivering concrete results in a domain increasingly dominated by economic competition, national security, and private interests.

For the European Union (EU), the Framework Convention presents both an opportunity and a test. It offers a chance to demonstrate that values-based regulation can extend beyond the single market and influence global digital governance. The treaty closely aligns with the EU’s own regulatory philosophy as embodied in the AI Act, but its success will depend on securing buy-in from states beyond the CoE’s orbit as well as making strategic institutional design choices to ensure its effectiveness. Now, the European Commission – which represents EU member states as signatories of the Framework Convention – has a pivotal window to anchor this treaty as a credible and useful pillar of global AI governance.

The Framework Convention within Global AI Governance

As the world’s first binding treaty on AI, the Framework Convention requires state parties to ensure AI systems respect human rights, democracy, and the rule of law – both in terms of their design and development as well as in their function. It extends these obligations across the entire AI lifecycle, covering data collection, model training, deployment, and use. Framework Convention negotiators concluded that imposing prescriptive rules could quickly become outdated in such a fast-moving domain and might jeopardise efforts to broaden the number of signatories. Instead, the treaty adopted a principles-based approach, codifying values – including transparency, accountability, safety, equality, and access to remedies – that can evolve through legal interpretation and through adjustments made by the treaty’s follow-up mechanisms. The Framework Convention is also technology-neutral, meaning it does not focus on a particular technology, such as generative AI services, but rather on the effects or implications of those technologies. Taken together, these design choices should make the treaty more adaptable and durable amid rapid technological changes in the AI space.

As the treaty’s host institution, the CoE gives it credibility grounded in decades of norm-setting in digital rights and data governance. States often turn to established organisations with proven competence when addressing new problems. In this case, the CoE’s credibility and track record made it a particularly compelling forum for advancing global AI governance. Precedents such as Convention 108+ on data protection and the Budapest Convention on Cybercrime demonstrate how CoE treaties can achieve global reach beyond Europe.

To enter into force, the treaty needs ratification by five signatories, including at least three CoE member states – a threshold it is likely to reach. Yet whether the treaty proves both effective and durable after entering into force will depend on the extent of its uptake, the implementation of domestic legislation, and the design of its follow-up mechanisms. In this context, there are three key developments to watch. The first is the establishment of the treaty’s official follow-up mechanism, the Conference of the Parties (CoP): a body made up of state parties tasked with reviewing implementation, issuing recommendations, and adapting the treaty over time. The second is the future role of the Committee on Artificial Intelligence (CAI), the CoE body that led negotiations and is currently tasked with outreach to potential signatories until its mandate concludes at the end of 2025. The third is the roll-out and uptake of HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment), a voluntary tool developed alongside the treaty to help public and private actors assess the potential effects of AI systems throughout their lifecycle. By translating broad principles into measurable practice, HUDERIA supports effective implementation and fosters greater consistency across national approaches.

The success of the Framework Convention will hinge on how effectively these tools and mechanisms provide the structure required to translate principled commitments into a living, dynamic global AI governance regime.

Shortcomings and Risks in the Framework Convention

The negotiations that produced the Framework Convention involved several key decisions regarding its breadth, enforcement, and legitimacy. First, the breadth of the treaty was ultimately constrained, as illustrated by two critical compromises on private-sector coverage and national security. For the former, the treaty adopted an opt-in mechanism under which each state party may decide whether, and how, to apply the treaty’s obligations to private companies developing or deploying AI systems. For the latter, national-security uses of AI are exempted from the treaty’s scope. While these compromises were politically necessary to secure broad participation, they leave notable gaps – precisely in areas involving the most rights-sensitive applications of AI.

Second, although the Framework Convention sets out obligations that national legislation and regulatory bodies must meet, it currently lacks a clearly defined compliance mechanism: for example, processes for independent monitoring or review of states’ regulatory (in)actions. The design of compliance and accountability procedures will fall to the CoP, but only once the treaty comes into force. Implementation will, in any case, depend heavily on domestic law and political will. However, without a well-designed CoP that includes both implementation reporting and credible mechanisms for verifying compliance with the treaty’s provisions, disparities between states’ commitments and practice are likely to emerge over time, potentially jeopardising the effectiveness of the treaty.

Third, the treaty faces questions of democratic legitimacy. Although it enshrines principles of participation and transparency, its own negotiation process restricted meaningful participation. While non-state actors were able to comment on treaty drafts, they were excluded from the final drafting group by state negotiators. In addition, major AI powers such as China, India, and Russia – not to mention most of the Global South – are not members or observers in the CoE and could not participate in the negotiations. These absences risk undermining (perceptions of) the treaty’s legitimacy and universality, particularly outside Europe.

Finally, the Framework Convention faces an external political challenge. It is highly unlikely that the United States will ratify the treaty under the Trump Administration, even though it played a significant role in the negotiations and signed the treaty in September 2024 under the Biden Administration. Other states may take their cue from Washington and avoid the treaty altogether, further jeopardising its potential for wider global uptake. This highlights a broader trend: the global debate around AI is increasingly dominated by narratives of competitiveness and security. If left unchecked, these narratives could marginalise the Framework Convention’s rights-based framing, making it seem peripheral – or even outdated – in the emerging AI governance landscape.

EU Strategies to Anchor the Framework Convention

The successful negotiation of the Framework Convention represents an instance of effective multilateralism at a time when such agreements seem rare. For the EU, it offers not only guiding principles for its own AI regulatory efforts, but also a vehicle through which to project its digital regulatory philosophy on the global stage. Active EU engagement in promoting the Framework Convention would serve several strategic purposes. It would reinforce the EU’s normative leadership in AI governance, translating the “Brussels effect” into a cooperative model of global standard-setting rather than a unilateral export. It could also help align internal and external policies, ensuring that the EU’s regulatory power supports multilateral frameworks rather than competing with or duplicating them. Most importantly, sustained EU involvement could help prevent further fragmentation of AI governance into competing regional or geopolitical blocs.

By contrast, limited EU engagement would risk allowing the Framework Convention to stagnate and diminish into a largely symbolic declaration of rights rather than an instrument capable of shaping real-world practices. In such a scenario, the EU would forfeit a rare opportunity to help define global rules for one of the century’s most transformative technologies.

If the Framework Convention is to evolve into a durable global standard for AI regulation, EU engagement must go beyond symbolic endorsement. The following are some ways in which the EU could both solidify its leadership in AI governance and contribute to maximising the treaty’s global relevance.

  1. Lead by example through ratification and alignment. The EU signed the Framework Convention in September 2024. The next step is to ratify it (on behalf of all EU member states), ensure alignment with EU law, and encourage domestic institutions – such as the Fundamental Rights Agency and the AI Office – to incorporate the treaty’s principles into their work. This process is already underway. Swift ratification by both the EU and its member states would send a powerful signal of political commitment. More broadly, it would complete the EU’s construction of a practical, three-tiered model of AI regulation that others could emulate: (1) the Framework Convention’s principles as an overarching umbrella; (2) a risk-management framework like HUDERIA to guide legislation; and (3) implementation and compliance instruments such as the AI Act. This would demonstrate how European regulation can translate treaty principles into operational standards.
  2. Shape the Conference of the Parties. The CoP will determine whether the Framework Convention becomes a living mechanism or a symbolic gesture. Once a critical mass of ratifications is reached and the treaty enters into force, the first state parties will have the power to design how the CoP functions. In this respect, the EU’s leverage in shaping the Framework Convention’s future will be greatest if it is among the early ratifiers. These initial parties will need to navigate the tension between stringency and flexibility. High standards for implementation and reporting could improve compliance, but may scare away prospective signatories wary of burdens and scrutiny, thus limiting global uptake. Striking the right balance will be key. At a minimum, the EU should advocate for a CoP that is inclusive, transparent, accountable, and adaptive. This means creating structured channels for civil-society participation, publishing state compliance reports and CoP assessments, and introducing a review process to evaluate national implementation. Over time, the CoP could also be empowered to adopt additional protocols or interpretative guidance to strengthen the treaty’s provisions or address gaps arising from new AI technologies or uses.
  3. Invest in capacity-building and outreach. To avoid the perception that the Framework Convention is a purely European project, the EU must support its expansion beyond the continent. A diplomatic initiative aimed at promoting the treaty could position it as a cooperative alternative to power-driven technological competition. This could include articulating a strategic narrative that trustworthy AI is also competitive AI, and highlighting features such as regulatory sandboxes to show that rights-based governance and sustainable innovation can reinforce one another. In addition, the EU should maintain an active role in the CAI and in the successor body that will replace it when its mandate concludes at the end of 2025. This may involve targeted capacity-building through technical assistance, regional workshops, and model legislation to help non-European partners integrate the treaty’s principles into domestic law. Continued support for HUDERIA trainings would aid countries in assessing risks, while pilot projects could offer practical pathways for implementation. Leveraging existing EU external instruments, such as the Global Gateway and the Neighbourhood, Development and International Cooperation Instrument (NDICI), would reinforce these efforts. Further, the EU could support the CoE in securing new signatories, particularly from the Global South, thereby broadening the treaty’s legitimacy and uptake as well as enhancing its effectiveness over time.
  4. Align with EU external digital policy. The Framework Convention complements both the EU’s domestic AI regulatory project – the AI Act – as well as its broader efforts to promote trusted, rights-based digital governance. Integrating the treaty’s principles into global efforts such as the UN’s Global Digital Compact as well as into bilateral digital partnerships could help the EU advance international convergence while avoiding fragmentation across global digital-governance initiatives.

Through these measures, the EU can ensure that the Framework Convention functions not just as a European undertaking, but as an emerging global standard for aligning AI governance with human-rights principles and democratic values.

The CoE’s Framework Convention marks a decisive step forward in the global regulation of AI. For the EU, the Framework Convention aligns naturally with its rights-based and risk-oriented regulatory model. The treaty offers a multilateral extension of the EU’s internal governance vision, allowing the EU to project its standards globally through cooperation rather than unilateralism. The EU should seize the current window of opportunity to ratify the Framework Convention, invest in its mechanisms, and support the CoE in championing its global expansion. In doing so, the EU can help ensure that the first binding framework for AI governance becomes a lasting foundation for international cooperation – one that enables innovation while safeguarding rights and democratic values.

Citation recommendation: Hoxtell, Wade. 2025. “Anchoring Global AI Governance: How the EU Can Leverage the Council of Europe’s Framework Convention on Artificial Intelligence.” ENSURED Policy Brief, no. 7 (December): 1–9. https://www.ensuredeurope.eu

Photo: Adam Szabo / Unsplash