The first wave of "AI Security" products was mostly uninteresting to me: it largely amounted to employee spyware for seeing what kind of goofy questions, mixed with company data, employees were asking chat systems. I get why a CISO at a big company might care about that use case, but it's never been the heart of what's interesting to me - protecting AI usage in applications, and giving security teams visibility into how it works. This latest press release from Operant AI is what I'm excited about in terms of where AI security is going, and a great example of what's possible when you focus on emerging runtime use cases. AI is just another service/application - either an API or self-hosted - and runtime application visibility via ADR approaches is going to offer organizations a lot of useful protections as these companies build around the use case. https://lnkd.in/ecrAjGbV
Considering the "input" points in this well-presented schematic: an LLM's input embedding space is semantically sensitive, which makes LLMs vulnerable to small changes in how an input is framed or phrased. Bad actors can exploit this fragility.
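That fragility is often reasoned about via cosine similarity in embedding space: two phrasings a human reads as nearly equivalent can still land at measurably different points, and a malicious rewording can stay "close" to a benign request while changing the model's behavior. A minimal sketch of the similarity measure, using made-up toy vectors in place of real model embeddings (no embedding model is involved; the vectors and their labels are purely illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-picked toy vectors standing in for real embeddings (illustrative only).
benign      = [0.9, 0.1, 0.2]  # e.g. "summarize this report"
reworded    = [0.8, 0.3, 0.1]  # e.g. an adversarially rephrased request

# High similarity despite a rephrasing that could change model behavior.
print(round(cosine_similarity(benign, reworded), 3))  # → 0.965
```

The point of the sketch is just that nearness in embedding space does not guarantee equivalent intent, which is why runtime inspection of actual inputs matters.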
Great take!
Great comments, James Berthoty. Very interesting.
Just saw another post about AI security - definitely a hot topic.