Dennis Adolfi 👨🏼‍💻’s Post

View profile for Dennis Adolfi 👨🏼‍💻, graphic

Head of Tech @ Knowit Experience Sweden 🇸🇪 Passionately leading innovation and raising awareness around Sustainable Technology 🍀 and Responsible AI 💖

”Remember years and years ago when the web came out, and they put all those text boxes on the screen, we were told not to trust user inputs. And now the internet is a big giant text box and we are supposed to trust all those things.” https://lnkd.in/dK7GdbwM Mark Russinovich & Scott Hanselman explore the landscape of generative AI security and today's risks in LLM (Large Language Models), such as hallucination, indirect prompt injection, jailbreaks etc, and how we can mitigate these risks using the #Responsible #AI principles and Content Safety Filters in Azure AI at Microsoft #Ignite.

Scott and Mark learn responsible AI | BRK329

https://www.youtube.com/

Godwin Josh

Co-Founder of Altrosyn and DIrector at CDTECH | Inventor | Manufacturer

3w

The shift from isolated text boxes to a vast, interconnected web of user-generated content has indeed amplified the security challenges. LLMs, while powerful, are susceptible to vulnerabilities like hallucination, where they generate factually incorrect information, and indirect prompt injection, which allows attackers to manipulate their outputs through subtle wording. Mitigating these risks requires a multi-faceted approach, including robust content safety filters, adversarial training techniques to enhance model resilience, and ongoing research into explainability to better understand how LLMs arrive at their outputs. Given the potential for misuse, how do you envision incorporating human oversight into the development and deployment of LLMs to ensure ethical and responsible AI?

Like
Reply

To view or add a comment, sign in

Explore topics