Some personal reflections on the EU AI Act: a bittersweet ending
In total, 1004 days after the European Commission presented the #AIAct on Wednesday, 21 April 2021, the last technical meeting concluded the legislative negotiations on Friday, 19 January 2024. While the European Parliament’s report took 43 Technical and 12 Shadows meetings to finalize, the inter-institutional discussions were concluded after 35 Technical Trilogue meetings, 7 Shadows meetings, and 6 Political Trilogues. A surprising set of numbers for contemporary Brussels, which has become more and more efficient over the past years at adopting new laws at high speed.
Was the lengthy process the result of a complex and dynamic topic? Of the diligence and engagement of the policy-makers? Of the number of ‘special’ characters and controversies? Or of systemic and structural problems along the EU policy cycle? In the end, it was probably a bit of everything. The disregard for #betterregulation principles was for me the most shocking factor, but I will say more about that topic in an upcoming essay to be published later this year. Despite the bumpy ride, the leading IMCO/LIBE committees approved the results of the Trilogue negotiations on Tuesday morning with an overwhelming majority of 71 votes in favour, 8 against, and 7 abstentions. A happy ending? Well, Axel Voss and I are not so sure.
The victories
Yes – there are lots of positives. The #AIAct is much more future-proof than other digital rulebooks of the EU, and it is trying out promising new ways of law-making. Its final version strikes a good balance between those who are concerned about the possibilities that AI offers and those who want to make better use of it. Some of our personal highlights and wins ... we have:
Ensured international alignment and coordination along the legislative process, in particular with our friends from the OECD.AI secretariat in #Paris.
Added exemptions in Article 2 for scientific research with AI, for the entire development process, and for the open-source sector.
Transformed the rigid high-risk obligations via Article 8 into flexible principles that take the context of an AI system’s deployment into account and only require what is technically feasible. The concrete obligations in Articles 9-15 have also been heavily improved.
Included, with Article 28, an obligation that will accelerate information sharing between different market actors along the AI value chain, enabling downstream providers and deployers to become compliant with the AI Act.
Worked on new ways of disclosing and tracing artificially generated content as well as informing end-users that they are facing an AI chatbot (Article 52).
Pushed through an innovation-friendly two-tiered approach that focuses on the specific risks posed by systemic foundation models (Article 52a ff.) and makes sure that downstream providers that integrate or use foundation models better understand them and receive all the information they need.
Promoted private-public partnerships via regulatory sandboxes (Article 53 ff.) and institutionalized regular exchanges with stakeholders throughout the text to facilitate a more inclusive and evidence-based way of implementing the law.
Secured harmonized standards (Article 40) and guidelines (Article 82a) in the text as a means to make compliance with the AI Act easier and cheaper, while also allowing the law to be specified outside the ponderous legislative process.
Proposed already in our JURI committee INL to combine Delegated Acts with specific Articles as well as Annexes, allowing the European Commission to swiftly adjust the text and enabling the law to keep pace with technological advances.
Reduced red tape throughout the text by removing, in particular, those parts that would have created parallel obligations alongside already existing procedures (e.g. sectoral legislation in the medical or financial sector).
The defeats
It’s an impressive list, and I could go on for some time. It is my strong belief that the European Parliament has heavily improved the #AIAct by discussing each and every detail of the law and by adding many great ideas. However, the EU AI Act and the way it was negotiated also have many shortcomings. In fact, the law is already being used by the Parliament's administration as a prime example to push internally for reforms and to call for substantial changes in the way the EU produces laws. I could write many pages and bring up many examples that underline how much of a risk this poses for our European project, but let us focus in this article on the content of the EU AI Act. In Axel's and my internal assessment, we found 32 pros and 32 cons. Our negative findings can be categorised into three groups:
Firstly, the AI Act is in our opinion conceptually not fit to regulate AI. We pointed out from the beginning that mixing product safety and fundamental rights, as well as using NLF concepts such as ‘substantial modification’, does not work for evolving AI systems. This kind of technology is not comparable with a vacuum cleaner or one of my son Leo's toys. Moreover, the protection of fundamental rights is something the NLF system has no experience with. There was, however, no willingness to make conceptual changes, even though the legal problems resulting from this conceptual choice became more than evident during the negotiations.
Secondly, the final text does not fulfil the law's key objective of providing legal certainty, or an ‘ecosystem of trust’ as the European Commission calls it. On the contrary, most definitions in Article 3 are vague, the procedures are incomplete and legally questionable (e.g. the designation of systemic GPAI models), several parts lack empirical evidence (e.g. why was the CEPS (Centre for European Policy Studies) study annexed to the Impact Assessment never updated?), there are legal overlaps with other laws (e.g. GDPR, DSA, PWD, MDR), and, according to the Legal Services of all three EU Institutions, the quality of the legal drafting is not up to EU standards. The Court of Justice of the European Union will get a lot of work ...
Thirdly, the AI Act creates an overcomplicated governance system. Its mix of NLF and non-NLF authorities at EU and national level will lead to numerous power struggles. Everyone was afraid to interfere even slightly with the national freedom to organise the AI Act's governance system. As a result, Member States will designate very different national competent authorities, which will, despite the Union Safeguard Procedure in Article 66 (remember: the AI Act is not a simple NLF law), lead to very different interpretations and enforcement activities. For instance, the cybersecurity authority that Member State A puts in charge will naturally have a very different perspective on AI than the data protection authority of Member State B. And do not forget that there are many more authorities (e.g. the ECB or ENISA) that want a say in how to deal with AI and may not be very willing to compromise.
Combined, those three issues could significantly raise compliance costs for providers and deployers of AI. SMEs and start-ups from the EU especially might find it too risky in the end to develop or deploy AI … or they will be forced to draw on expensive third-party auditing and certification schemes in order to avoid heavy fines. If this scenario becomes reality, the EU will certainly not turn into a global leader in AI. On the contrary, the AI Act would backfire strongly and further increase the EU’s digital backwardness.
An optimistic outlook
The good news is that this scenario does not need to happen. There is still time, and there are several ways to prevent escalating legal uncertainty as the AI Act gradually becomes applicable between 2025 and 2027. In our current interviews, keynotes, panels and podcasts, Axel Voss and I are calling on civil society, academia and industry to take action. There is no time for celebration, especially since the AI Act is far from perfect. What the EU needs right now is a collective effort of all involved public and private actors! We need your help to improve the law and, together, swiftly build up a workable EU AI governance framework. Four actions by stakeholders seem to be key:
Engage with CEN and CENELEC and with national bodies such as DIN Deutsches Institut für Normung e. V. Our friend Sebastian Hallensleben needs your contributions to make sure that the new set of harmonised technical standards is available in time and is adequate for the many different sectors and use cases of AI systems.
Make use of the many new forms of private-public partnership in the EU AI Act and, in particular, apply for Regulatory Sandboxes. There you can enter into a close dialogue with national competent authorities and improve, and even test under real-world conditions, your AI systems. I strongly believe that if those places are adequately financed by Member States and heavily used by companies, the public and private sectors will both significantly benefit from the joint regulatory learning. The first feedback from #Spain's AI Sandbox sounds very promising!
Share your experience and expertise with #Brussels! Send your AI use cases to the European Commission as it starts to draft the guidelines we have tasked it to provide in Articles 6 and 82a. Show them which technologies should be considered an 'AI system' or 'high-risk' and which should not. Your contributions are also key for meaningful Implementing and Delegated Acts. Do not let this opportunity pass!
Identify and motivate technical experts to work for the AI Office, national competent authorities, market surveillance authorities, and notifying authorities as well as notified bodies. The effectiveness of the Scientific Panel and the Advisory Forum also depends on the membership of many leading GPAI model and AI experts. Although they could earn much more at large tech companies, they might be attracted by the chance to make a difference. Maybe you could do some convincing?
🚀 Let’s make it work! 🚀
#ClosingCredits: Last but not least, since the legislative AI Act journey is now over, I want to thank all those people who have supported me over the last years in our quest to create a balanced and future-proof law:
My wife Maria Cruz Zenner and our son Leo for their support, patience and acceptance!
My boss Axel Voss for his trust, and my colleague Greta Koch for covering for me and accepting my time away!
Our EPP colleagues Eva Maydell (Paunova), Deirdre Clune MEP, Torlach Grant, Slavina Ancheva, Giedre Svetikaite and David Nosák for the best EPP Group cooperation that I have experienced in my past 7 years in the European Parliament.
Our friends from RE IMCO, Greens and ECR - in particular David Kordon, Simona de Heer, Sebastian Raible, Arnika Zinke, Rafał Kamiński, and Filip Swiderski. It was a pleasure to work with you and it is nice to see that we can still have such a high level of trust and loyalty among political groups in these polarised times.
The involved persons from the EP and CSL administration for their technical support. And all those like-minded persons in the Commission and Council that were pushing back when we were facing huge time pressure and truly cared about creating the best law possible. Special shout-out to Miguel Valle del Olmo and Carlos Romero Duplá.
And the hundreds of stakeholders from civil society, academia, industry, and third countries who have met and supported me over the last years! Without your excellent input, we would not have achieved 10% of what we got in the end. THANK YOU!