IAPP Global AI Governance Law and Policy: EU
By Vincenzo Tiani – IAPP Brussels KNet Co-Chair

The EU has been regulating the digital sphere since the early 2000s through legislation on fundamental and other rights such as data protection and intellectual property; infrastructure through security, public procurement and resilience; technology and software such as RFID, cloud computing and cybersecurity; and data-focused legislation, including data access, data sharing and data governance. The European Commission “is determined to make this Europe’s ‘Digital Decade’,” with regulation a core component of that ambition.

In 2018, the European Commission set out its vision for AI around three pillars: investment, socioeconomic changes and an appropriate ethical and legal framework to strengthen European values. The Commission established a High-Level Expert Group on AI of 51 members from civil society, industry and academia to provide advice on its AI strategy.

In April 2019, the HLEG published its ethics guidelines for trustworthy AI, which put forward a human-centric approach to AI and identified seven key requirements that AI systems should meet to be considered trustworthy.

When European Commission President Ursula von der Leyen took office in December 2019, she pledged to “put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence” in her first 100 days. In press remarks from February 2020, she mentioned AI’s potential to improve Europeans’ daily lives and its role in reaching Europe’s climate neutrality goals by 2050. She also set a clear objective of attracting more than 20 billion euros per year for the next decade to defend Europe’s position on AI.

That announcement coincided with a Commission white paper that set out the policy options for achieving an approach that promotes the uptake of AI while also addressing the risks associated with certain uses of AI.

The AI Act, first proposed by the European Commission in April 2021, was then intensely negotiated and amended by the Commission, Parliament and Council. The agreed text will soon enter into force, combining a human-centric philosophy with a product safety approach. The AI Act will be a keystone regulation for the development and deployment of AI in the EU and around the world. The requirements set forth in the act, combined with those that will follow from further guidance and implementation, plus the complex intersections of the act itself with the EU’s broader digital governance regulatory framework, make for a deep, dynamic and exacting regulatory ecosystem for AI governance in the EU.

Regulatory approach

The AI Act is a regulation, meaning it is directly applicable in all EU member states, and it seeks to guarantee and harmonize rules on AI. Whereas the EU General Data Protection Regulation was created to protect individuals’ privacy and data protection rights, the initial proposal for an AI Act was born in the context of product safety, focusing on ensuring AI products and services on the EU market are safe. This manifested in proposed principles and requirements that are well established in the product safety context, such as technical specifications, market monitoring and conformity assessments. Many of the AI Act’s now-final requirements that also protect individual rights originate from the European Parliament’s positions and proposals during the trilogue negotiations with the European Commission and Council.

The AI Act is framed around four risk categories of AI systems. Each category prescribes various risk-based measures that relevant actors in the AI life cycle should implement. During the trilogue negotiations on the draft AI Act, requirements were added for general-purpose AI, effectively making it an additional fifth category that, importantly, does not preclude the application of requirements attaching to other risk-based categories. For example, a general-purpose AI system might also fall within the category of high risk.
