🧚‍♀️ PSA for the day: I don’t know who needs to hear this, but I have to get this out of my system. We’re now at the point where my eye twitches every time I see someone mentioning the so-called “limited risk” category in the AI Act.

Let me say this loud and clear (mostly for the sake of my sanity): There. Is. No. Such. Thing.

The term refers to AI systems that have additional transparency requirements outlined in Article 50. That’s it. These systems can be:

👉 High risk (like emotion recognition AI and biometric categorization systems), or
👉 Low risk (like interactive AI or generative AI, which will usually, though not always, be low risk).

Point being:

💡 If the system is high risk, those transparency requirements stack on top of all the other high-risk obligations in the AI Act.
💡 If the system is low risk, the transparency requirements still apply, but (lucky you!) most of the rest of the AI Act doesn’t.

✨ That’s it. That’s the whole deal. Thank you for coming to my rant. And if someone tries to convince you otherwise, don’t even think about sending them my way. My patience with this has hit its breaking point. 🙃
Welcome to the club, I've been preaching this for ages now. My latest meltdown was a few months ago: https://www.linkedin.com/pulse/eu-ai-act-getting-basics-straight-aleksandr-tiulkanov-obe0e/
Most orgs are still in the denial phase of the five stages of grief. They assure themselves that all their AI is “limited risk”.
What about no-risk AI?
It’s upside down
Great visualization. Unfortunately, so many people have been using the term "limited risk" that it'll be really hard to unlearn... I usually call it "transparency risk" and show a Venn diagram with the overlapping obligations, but your combination of the by now almost mythical risk pyramid with a (sort of) Venn diagram works really well!
My eye twitches 😄 😄 😄 Limited liability people for limited risk AI systems. Tea, preserve your energy and fasten your seat belt: up until the 2nd of August 2026, all the AI governance 'gurus' will thrive. When the clock strikes midnight, they will transform straight into... pumpkins 😉
Yep, quite easy: the spirit of the act is to avoid exposure to danger. The higher the danger, the more restrictive the rules. "No AI risk" applies if you just don't use AI - whatever "AI" means. That does not, by the way, mean that "no AI" cannot also be dangerous. 👋
I hear you loud and clear, Tea, thanks! I wonder how long it's going to take to actually put this into practice properly; it will take a bunch of trial and error for companies to get it. I find posts like this really useful because they help with the democratization of knowledge and adoption. 👏
What bothers me as well is that this taxonomy doesn't include a "medium". Quite often, risk management professionals apply a taxonomy of high, medium, low. On this 'temperature scale', high is red and prohibited goes beyond the scale. It's more red than red.
Yes, AND… the static risk matrix approach, and the AIA's static risk matrix in particular, suffers from multiple logical fallacies. It is understandable that it was put in place as a stopgap to stop the bleeding, but it is not going to hold, and organizations and firms have to stop the madness of replicating the same mistake. The shift from static to dynamic is everything here. It's lovely to check the deployer box and the risk-level box and move on, but it isn't realistic or appropriate to the technology and the ways in which it and its uses will evolve, compound, and have cumulative effects…