Be very cautious with generative AI hallucinations. They can be very convincing and flattering and, in some cases, harmful (see this article: https://lnkd.in/eX7N_yc8). Here is a perfect example, even if I wish it were true 😉. One piece of advice: check the sources!
Pierre-Yves Delacôte’s Post
Alignment faking is being researched in AI models. "...Imagine, for example, a model that learned early in training to adopt a partisan slant, but which is later trained to be politically neutral. In such a situation, a sophisticated enough model might “play along”, pretending to be aligned with the new principles—only later revealing that its original preferences remain." And the result? Yes, models will strategically fake alignment: https://lnkd.in/gK6ZpM_f
The new EU AI Act categorises AI systems into four levels of risk. The act provides a regulatory framework for all of us actively pursuing the use of AI in our industries to consider. You can read more here: https://lnkd.in/d-wyr32F
-- Webinar Recording: An Introduction To Perplexity AI For IP Professionals -- Thanks to everyone who joined yesterday's webinar on 'An Introduction To Perplexity AI For IP Professionals'. The webinar looked at some of the basics of the Perplexity AI tool from Perplexity and how it might be useful to IP professionals in their day-to-day practice. A recording of the webinar is available on the Russell IP website here: https://lnkd.in/e6nhfX9R #PerplexityAI #Perplexity #AI
Russell IP: An Introduction To Perplexity AI For IP Professionals
A nice breakdown of using counterfactuals for explainable AI! One consideration I have, though, is that when utilizing counterfactuals this way, you quickly cross the boundary of correlation ≠ causation. Even though, according to the model, there is a “best” counterfactual that causes the model's outcome to change, treating that as an actual causal relation goes too far. This requires careful framing to avoid incorrect conclusions. https://lnkd.in/e4anfKgg
Explainable AI explained! | #5 Counterfactual explanations and adversarial attacks
https://www.youtube.com/
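To make the idea concrete: a counterfactual explanation answers "what is the smallest change to this input that would flip the model's decision?" Below is a minimal, purely illustrative sketch against a toy linear classifier; the model, feature names, and greedy search are my own assumptions, not from the video.

```python
# Hypothetical sketch: counterfactual explanation for a toy linear classifier.
# The model and the greedy search are illustrative assumptions only.

def predict(x, w, b):
    """Linear score; predicted class is 1 when the score is >= 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def counterfactual(x, w, b, step=0.01, max_iter=10000):
    """Greedily nudge the single most influential feature (largest |weight|)
    toward the decision boundary until the predicted class flips."""
    cf = list(x)
    original = predict(x, w, b) >= 0
    i = max(range(len(w)), key=lambda j: abs(w[j]))
    for _ in range(max_iter):
        if (predict(cf, w, b) >= 0) != original:
            return cf  # class flipped: this is the counterfactual input
        # move the chosen feature in the direction that changes the score
        direction = -1 if original else 1
        cf[i] += direction * step * (1 if w[i] > 0 else -1)
    return cf

# e.g. a toy loan model with features [income, debt] (illustrative numbers)
w, b = [2.0, -1.5], -0.5
x = [0.2, 0.6]                 # predicted "reject" (score < 0)
cf = counterfactual(x, w, b)   # only income changes before the flip
```

The caution in the post applies directly here: the sketch finds the feature the *model* is most sensitive to, which says nothing about whether changing income would causally change a real-world outcome.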
Watch Fazl Barez talk about “unlearning” in AI at our recent Intelligent Cooperation workshop. This means removing harmful or unnecessary data from models without retraining them completely. He highlights challenges like the model’s tendency to relearn unwanted behaviors and proposes solutions like pruning neurons to prevent this. Barez’s research underscores the complexity of AI unlearning, revealing its potential and limitations in enhancing model safety and ethical governance. Discussion and summary here: https://lnkd.in/dVmtUf6g
The 4th issue of our EU AI Act Decoded series is out! In this issue we delve into the classification of AI systems and GPAI models, with Anne-Gabrielle HAIE. #EUAIAct #EULaw #Compliance #Steptoe
The EU AI Act Decoded is a bi-weekly breakdown of the EU AI Act and its implications for organizations across the globe. This series unpacks definitions, identifies who the act applies to, outlines when it takes effect, highlights potential enforcement risks, and more! Visit our website to download the PDF of our new issue about the Classification of AI Systems and General-purpose AI (GPAI) Models: https://lnkd.in/eTvMWzMn
Visit our website to download the EU AI Act Decoded - Issue 4, about the Classification of AI Systems and General-purpose AI (GPAI) Models: https://lnkd.in/eTvMWzMn This series unpacks definitions, identifies who the EU AI Act applies to, outlines when it takes effect, highlights potential enforcement risks, and more!
Sen. Thune has championed a light-touch approach to AI regulation over the last two years, working across the aisle to develop a framework that mitigates AI's worst risks. Check out our full breakdown of Sen. Thune's remarks on AI in committee hearings over the last two years: https://lnkd.in/eUygQenX
Fully automated AI assessment of the left ventricle (LV) with contrast echo, with robust agreement between experts and Us2.ai: https://lnkd.in/gpdpR2Tg
A new generation of AI models will take its time to reason, providing more reliable answers to increasingly complex questions. “Long thinking” has the potential to reduce or eliminate the errors that have frequently peppered earlier responses. https://lnkd.in/eGrdX7Bz