Tea Mustać’s Post

Tea Mustać

CIPP/E • Artificial Intelligence • Data Protection • Privacy • IP Law

🧚♀️ PSA for the day: I don’t know who needs to hear this, but I have to get this out of my system. We’re now at the point where my eye twitches every time I see someone mentioning the so-called “limited risk” category in the AI Act.

Let me say this loud and clear (mostly for the sake of my sanity): There. Is. No. Such. Thing.

The term refers to AI systems that have additional transparency requirements outlined in Article 50. That’s it. These systems can be:

👉 High risk (like emotion recognition AI and biometric categorization systems), or
👉 Low risk (like interactive AI or generative AI, which will usually, though not always, be low risk).

Point being:

💡 If the system is high risk, those transparency requirements stack on top of all the other high-risk obligations in the AI Act.
💡 If the system is low risk, the transparency requirements still apply, but (lucky you!) most of the rest of the AI Act doesn’t.

✨ That’s it. That’s the whole deal. Thank you for coming to my rant. And if someone tries to convince you otherwise, don’t even think about sending them my way. My patience with this has hit its breaking point. 🙃

[Image: The AI Act Risk Pyramid - Tea Mustać]
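To make the stacking logic in the post concrete, here is a minimal Python sketch of the idea. It is purely illustrative and not a classification tool: the class, field names, and obligation labels are my own paraphrases for this example, not text from the Act.

```python
# Illustrative sketch only: Article 50 transparency duties are a flag that
# can attach to a system at ANY risk tier, not a risk tier of their own.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    prohibited: bool = False          # Article 5 practice (banned outright)
    high_risk: bool = False           # Article 6 / Annex III classification
    art50_transparency: bool = False  # e.g. chatbots, emotion recognition, deepfakes

def applicable_obligations(system: AISystem) -> list[str]:
    if system.prohibited:
        return ["prohibited (Article 5): may not be placed on the market"]
    obligations = []
    if system.high_risk:
        obligations.append("full high-risk regime (Chapter III)")
    if system.art50_transparency:
        # Stacks on top of the high-risk regime; it never replaces it.
        obligations.append("transparency duties (Article 50)")
    if not obligations:
        obligations.append("largely out of scope (voluntary codes, AI literacy)")
    return obligations

# Emotion recognition: high risk AND Article 50 covered -> obligations stack.
print(applicable_obligations(
    AISystem("emotion recognition", high_risk=True, art50_transparency=True)))
# A plain customer-service chatbot: typically only Article 50 applies.
print(applicable_obligations(AISystem("support chatbot", art50_transparency=True)))
```

The post's point maps directly onto the two independent `if` branches: neither excludes the other, so "limited risk" never exists as a tier of its own.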
Shoshana Rosenberg

Chief AI Governance and Privacy Officer. Full Stack General Counsel. 💫Co-Founder of Women in AI Governance (WiAIG). MBA, JD, AIGA, AIGP, PLS, CIPP/E/A/US/C, CDPSE, FIP, IGP.

1w

Yes AND … the static risk matrix approach, and the actual AIA static risk matrix, suffer from multiple logical fallacies. It is understandable that it was put in place as a stopgap to stop the bleeding, but it is not going to hold, and organizations and firms have to stop the madness of replicating the same mistake. The shift from static to dynamic is everything here. It’s lovely to check the deployer box and the risk-level box and move on, but it isn’t realistic or appropriate to the technology and the ways in which it and its uses will evolve, compound, and have cumulative effects…

Aleksandr Tiulkanov

Upskilling people in the EU AI Act - link in profile | LL.M., CIPP/E, AI Governance Advisor, implementing ISO 42001, promoting AI Literacy

1w

Welcome to the club, I've been preaching this for ages now. My latest meltdown was a few months ago: https://www.linkedin.com/pulse/eu-ai-act-getting-basics-straight-aleksandr-tiulkanov-obe0e/

Kevin Schawinski

Co-Founder/CEO at Modulos | AI governance | Trustworthy & responsible AI | UniverseTBD | Oxford, Yale, ETH Zurich

1w

Most orgs are still in the denial phase of the five stages of grief. They assure themselves that all their AI is “limited risk”.

Tom Braegelmann

Rechtsanwalt / Attorney and Counsellor at Law (New York)

1w

What about no-risk AI?

Tom Braegelmann

Rechtsanwalt / Attorney and Counsellor at Law (New York)

1w

It’s upside down

Jens Meijen

AI Governance, Risk & Compliance • PhD • Founder @ Umaniq & Ulysses AI

1w

Great visualization. Unfortunately, so many people have been using the term "limited risk" that it'll be really hard to unlearn… I usually call it "transparency risk" and show a Venn diagram with the overlapping obligations, but your combination of the by-now almost mythical risk pyramid with a (sort of) Venn diagram works really well!

Christos (LL.M.) Makris

Techno-Privacy Legal Expert🔹Law & Tech @Tilburg🔹Contextual Doctrine Researcher🔹AI Governance (Should) Meet Data Protection🔹#Privacy_Matters🔹Advocate for Societal Innovation

1w

My eye twitches 😄 😄 😄 Limited liability people for limited risk AI systems. Tea, preserve your energy and fasten your seat belt: up until the 2nd of August 2026, all the AI governance 'gurus' will thrive. When the clock strikes midnight, they will transform straight into... pumpkins 😉

Thomas Ehmer

Breaking the ice of ignorance - Innovation Incubator Lead at Merck Group, 🎹 @ Ion Maiden

1w

Yep, quite easy: the spirit of the Act is to avoid exposure to danger. The higher the danger, the more restrictive. "No AI risk" is when you just don't use AI, whatever "AI" means. This, by the way, does not mean that "no AI" cannot also be dangerous. 👋

Tijana Nikolić

AI Lead | Trusted AI | LLM Governance

1w

I hear you loud and clear, Tea, thanks! I wonder how long it's going to take to actually put this into practice properly; it will take a bunch of trial and error for companies to get it. I find posts like this really useful because they help with the democratization of knowledge and adoption. 👏

What bothers me as well is that this taxonomy doesn't include a "medium". Risk management professionals quite often apply a taxonomy of high, medium, and low. So on this 'temperature scale', high is red and prohibited goes beyond the scale; it's more red than red.
