”Remember years and years ago when the web came out, and they put all those text boxes on the screen, we were told not to trust user inputs. And now the internet is a big giant text box and we are supposed to trust all those things.” https://lnkd.in/dK7GdbwM Mark Russinovich & Scott Hanselman explore the landscape of generative AI security and today's risks in LLM (Large Language Models), such as hallucination, indirect prompt injection, jailbreaks etc, and how we can mitigate these risks using the #Responsible #AI principles and Content Safety Filters in Azure AI at Microsoft #Ignite.
Dennis Adolfi 👨🏼💻’s Post
More Relevant Posts
-
https://lnkd.in/gPdMRP6C BAIS issue in #Gemini Model of #Google has raised Sseries of controversies and the CEO has admitted the depiction of biased images of WorldWar II soldiers as racially biased. Data manipulation and inaccurate feeding of algorithms poses several socio- cultural and Ethical challenges in Generative AI segment. There is need for evolving sesitive training,&monitoring system in adaptive cross verification of date before storing in MLLM. Ethical practices in entire games of Generative AI must be top priority ofvall stakeholders. #generativeai #Google #Gemini #biases
To view or add a comment, sign in
-
This is probably one of the best talks about AI and development that I've heard so far. Scott Hanselman is the Vice President of Developer Community for Microsoft and he does an amazing job talking about the ethical use of AI here I would love to hear your thoughts on how or if you use AI in your development process in the comments #EthicalAI #DeveloperCommunity #FutureOfAI #AIinTech #ScottHanselman
AI: Superhero or super villain?
https://www.youtube.com/
To view or add a comment, sign in
-
An investigation by Wired and Proof News has uncovered that the transcripts of YouTube videos from notable figures like MrBeast and John Oliver, as well as content from the Wall Street Journal, have been scraped to train AI models. Major companies such as Anthropic, Nvidia, Apple, and Salesforce have utilized these datasets. The dataset in question comprises transcripts from over 173,000 YouTube videos spanning more than 48,000 different channels. This revelation raises important questions about data usage and the ethics of scraping online content for AI training purposes. #AI #DataEthics #YouTube #TechNews #ArtificialIntelligence ---------------------- Learn more here: https://lnkd.in/e8rQTDQV
Over 100k YouTube videos have been scraped to train AI | TechCrunch Minute
https://www.youtube.com/
To view or add a comment, sign in
-
Google's Gemma represents a significant advancement in AI technology, emphasizing ease of use, ethical development, and broad compatibility with existing AI frameworks. This introduction showcases Gemma's unique position in the AI community, offering pre-trained models and a Responsible Generative AI Toolkit to ensure safe, effective applications. The launch of Gemma underscores Google's commitment to democratizing AI, ensuring that developers and researchers have access to cutting-edge tools for responsible innovation. #google #ai
Google Introduces GEMMA and Changes the AI Game Forever!
https://www.youtube.com/
To view or add a comment, sign in
-
Following an interview with OpenAI's CTO, Mira Murati, by the Wall Street Journal, where she discussed the new AI video tool Sora and how the company intends to roll it out, there's been a growing concern. The dialogue has shifted towards questioning if the existing model of self-regulation and internal controls at AI firms adequately protects ethical norms and serves the public's best interest. Your thoughts? How should we navigate the balance between innovation and oversight in the era of AI? #AI #EthicsInTech #AIgovernance
In a recent deep dive by the Wall Street Journal, OpenAI's CTO, Mira Murati, shared insights into their new AI video tool, Sora, and the company's rollout plans. However, since the publication of this article, there's been a significant buzz concerning the oversight and governance of AI development, sparking a vital conversation among tech enthusiasts and critics alike. The interview highlighted some pressing issues. While the advancements in AI, like Sora, are indeed groundbreaking, they also underscore the urgent need for robust governance structures and clear oversight mechanisms in the rapidly evolving AI landscape. Many are now questioning whether the current state of self-regulation and internal checks within AI companies is sufficient to safeguard ethical standards and public interest. The growing power of AI tools, with their far-reaching implications on privacy, security, and societal norms, calls for a broader dialogue on how these innovations should be guided and monitored. The discussion around Sora is not just about technological capabilities but also about the responsibilities of AI developers to ensure their creations are safe, ethical, and align with societal values. It's a reminder that as we marvel at AI's potential, we must also commit to the principles of accountability and transparency in its development. Your thoughts? How should we navigate the balance between innovation and oversight in the era of AI? #AI #EthicsInTech #AIgovernance
OpenAI's Sora Made Me Crazy AI Videos—Then the CTO Answered (Most of) My Questions | WSJ
https://www.youtube.com/
To view or add a comment, sign in
-
Some great points to remember about AI
AI is a Lie - Cutting Through the Hype
https://www.youtube.com/
To view or add a comment, sign in
-
I’m happy to share that I’ve obtained a new certification: Google AI Essentials! Taking a step in the new era of co-living with technology, the knowledge of Prompt Engineering, LLM's we get to know "us" "the better". This journey of knowing was very fascinating and captivating towards AI! #GoogleAI#AI
To view or add a comment, sign in
-
In a recent deep dive by the Wall Street Journal, OpenAI's CTO, Mira Murati, shared insights into their new AI video tool, Sora, and the company's rollout plans. However, since the publication of this article, there's been a significant buzz concerning the oversight and governance of AI development, sparking a vital conversation among tech enthusiasts and critics alike. The interview highlighted some pressing issues. While the advancements in AI, like Sora, are indeed groundbreaking, they also underscore the urgent need for robust governance structures and clear oversight mechanisms in the rapidly evolving AI landscape. Many are now questioning whether the current state of self-regulation and internal checks within AI companies is sufficient to safeguard ethical standards and public interest. The growing power of AI tools, with their far-reaching implications on privacy, security, and societal norms, calls for a broader dialogue on how these innovations should be guided and monitored. The discussion around Sora is not just about technological capabilities but also about the responsibilities of AI developers to ensure their creations are safe, ethical, and align with societal values. It's a reminder that as we marvel at AI's potential, we must also commit to the principles of accountability and transparency in its development. Your thoughts? How should we navigate the balance between innovation and oversight in the era of AI? #AI #EthicsInTech #AIgovernance
OpenAI's Sora Made Me Crazy AI Videos—Then the CTO Answered (Most of) My Questions | WSJ
https://www.youtube.com/
To view or add a comment, sign in
-
With the push towards AI, I am curious as to if we will see the regulation of virtual worlds. #Tech #Systems #AI #Networks #Bigdata #Algorithms
To view or add a comment, sign in
Co-Founder of Altrosyn and DIrector at CDTECH | Inventor | Manufacturer
3wThe shift from isolated text boxes to a vast, interconnected web of user-generated content has indeed amplified the security challenges. LLMs, while powerful, are susceptible to vulnerabilities like hallucination, where they generate factually incorrect information, and indirect prompt injection, which allows attackers to manipulate their outputs through subtle wording. Mitigating these risks requires a multi-faceted approach, including robust content safety filters, adversarial training techniques to enhance model resilience, and ongoing research into explainability to better understand how LLMs arrive at their outputs. Given the potential for misuse, how do you envision incorporating human oversight into the development and deployment of LLMs to ensure ethical and responsible AI?