AI is everywhere. We read and hear about major breakthroughs in AI almost every day, but how often do we hear about the false or exaggerated claims made about AI, or wonder whether some AI genuinely works? With AI's growing prevalence, it's easy to feel overwhelmed by its capabilities and claims. That's why I decided to explore what AI really is, what it can do, and most importantly, what it can't do by reading the book "AI Snake Oil" by Arvind Narayanan and Sayash Kapoor. The book also dives into the role of tech media in exposing false claims about AI and asks how we can differentiate genuinely useful AI tools from those that are just "snake oil."

But why the term "snake oil"? In the late 19th and early 20th centuries, "snake oil" salesmen exploited people's unscientific beliefs, selling oil from snakes as a remedy with supposed health benefits. In the same way, "AI snake oil" refers to technologies that make bold claims but fail to deliver or to work in an unbiased, effective manner.

This book is a must-read for anyone who:
⚫️ Is curious about whether AI tools actually live up to their claims,
⚫️ Needs to make informed decisions on using AI in the workplace, or
⚫️ Wants to understand and address the potential harms of AI.

The authors also run a website with posts about AI; you can check it out here: https://lnkd.in/g4Eh-NTM

This book is an eye-opener and a great guide for navigating the complex world of AI. Have you come across any examples of AI that seemed like "snake oil"? I'd love to hear your thoughts in the comments.
Navya Govil’s Post
More Relevant Posts
-
Correct me if I'm wrong, but we don't need AI. I mean it - correct me.

As a designer, product developer, writer, musician and content creator, I have a lot to do with AI. Most of the time it is just for inspiration and curiosity about what is possible. Basically, these are all harmless games. But every day I am amazed to read news about what AI is now being used for:
- In the perfume industry, to create new fragrances.
- In beekeeping, to monitor the health and productivity of hives.
- In medical diagnostics, for the early detection of diseases.

Given the discussion about the dangers of artificial intelligence (keyword: AGI), I ask myself: Is this necessary? And by that, I mean the absolute necessity of AI for the evolution of our species. I am not questioning the usefulness of more efficient and faster processes. When I think of the fact that my children could be helped more quickly in hospital with the help of AI, I find myself on the side of the AI advocates. Nevertheless, I wonder whether this is necessary, or whether everything that AI is currently being used and prepared for could also be achieved by reforming existing systems.

What if time travellers prevented the advent of AI? Would we really miss it? I don't think we need AI. But maybe I am wrong.
-
AI is weird, and for a (shrinking) number of tasks, it's worse than unhelpful. The boundaries of today's models are unclear and unintuitive (write a poem 👍, do basic math 👎), and they keep changing rapidly (new model updates every other week) anyway. Perhaps that partially explains the disconnect between the fast pace of progress in our consumer experience and the much slower adoption at work, which often has a lower tolerance for uncertainty.

HBS and BCG ran a great piece of research in which they observed the impact of AI on the performance (quality, speed) of hundreds of consultants (n = 758, about 7% of BCG), for tasks inside the frontier of AI's capabilities and for ones outside it. They call it the "Jagged Frontier". For the vast majority of tasks, AI shifts the performance distribution positively (40% improvement in quality, at 25% faster time), but for tasks outside the AI frontier, consultants using AI were 19 percentage points *less likely* to produce correct solutions than those without AI. https://lnkd.in/e6t57XsS
-
Artificial intelligence is shaping our future at an unprecedented pace, and staying informed is vital for success. Recently, I've been delving into top AI news publications that provide comprehensive insights into AI trends, innovations, and applications across various industries. Publications like Analytics Insight and KDnuggets offer valuable resources ranging from daily news updates to deep analytical insights. Their expert opinions enrich our understanding of AI's transformative impact. In my exploration, AI Magazine and AI Business stood out for their strategic analyses that empower leaders in making informed decisions on AI adoption. Meanwhile, the developer-centric approach of DevX.com bridges the gap between complex AI concepts and practical implementation. Following such resources not only keeps us updated but also enables us to leverage AI effectively in our professional endeavors. What are some AI sources you trust? Let's exchange our insights and discuss how AI is influencing your industry landscape. For those interested in diving deeper, you can explore the original article here: [DMNews Article on AI Publications](https://lnkd.in/gPsZ6zn2)
-
Be careful how you frame things...

Both LLMs and people can be significantly influenced by the way information is presented or questions are framed. This "framing effect" can shape responses, decisions, and interpretations in subtle but powerful ways.

For AI: The phrasing of prompts can dramatically alter an AI's output, sometimes leading to inconsistent or biased results. Always consider how your queries might be steering the AI's response.

For humans: Our cognitive biases make us susceptible to framing effects in daily life, from marketing tactics to political messaging. Being aware of this can help us make more objective decisions.

Key takeaways:
- Recognize the power of framing in all communications.
- When using AI tools, experiment with different phrasings to get a fuller picture.
- In human interactions, try reframing issues to gain new perspectives.
- Practice critical thinking to look beyond the frame and examine core facts.

By understanding framing effects, we can communicate more effectively, make better decisions, and use AI tools more wisely. How do you account for framing in your work and interactions?
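One concrete way to see the framing effect in an LLM is to send the same decision under two logically equivalent framings and compare the answers. Here is a minimal sketch, assuming the OpenAI Python client; the model name and the example prompts are my illustration, not from the post.

```python
# Minimal framing-effect probe (illustrative, not from the post):
# the two prompts describe the same statistic, framed as success vs. failure.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

framings = [
    "A treatment succeeds in 90% of cases. Would you recommend it?",
    "A treatment fails in 10% of cases. Would you recommend it?",
]

for prompt in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so differences mostly reflect framing
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```

With the temperature pinned to 0, any remaining difference between the two answers is largely attributable to the framing of the prompt rather than random sampling.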
-
🎓🪄 I have new research for you about who's the real MVP in human-AI collaboration.

So, I've been deep into building an AI tool to help my team handle the ridiculous amount of info we get every day through comms tools. And while digging into the potential risks and caveats of using AI, I stumbled on a really interesting paper, "When combinations of humans and AI are useful: A systematic review and meta-analysis."

Long story short, the article confirmed a lot of what we already know. AI is great for repetitive, data-heavy tasks - think fraud detection or large-scale analysis. In one example, AI hit 73% accuracy versus 55% for humans. 😱 But here's where it gets interesting: when you combine humans and AI for these types of tasks, performance actually dropped to 69%. So, sometimes, we just need to let AI do its thing without human interference.

On the flip side, humans outperform AI when tasks need creativity, intuition, or more complex decision-making. For example, in a classification task, humans alone scored 81%, but when AI was added to the mix, that jumped to 90%. This shows that humans are pretty good at knowing when to trust their own judgment, and AI can step in to enhance that. 🤘

The key takeaway: If humans are already good at a task, adding AI can boost performance. But if AI is better on its own, bringing humans into the mix can actually lower effectiveness.

Now, the downside of this article is that it only covers research up to June 2023. We all know AI has evolved massively since then, and it's probably capable of more now than what's described in the article.

📄 The research is a systematic review and meta-analysis of 106 experimental studies published between January 2020 and June 2023, analyzing the performance of humans, AI, and human-AI combinations across 370 effect sizes to understand when collaboration outperforms individual efforts. Feel free to access it here: https://lnkd.in/dgF73Hxs
-
🔍 𝐀𝐈 𝐂𝐚𝐧 𝐄𝐥𝐞𝐯𝐚𝐭𝐞 𝐎𝐮𝐫 𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐨𝐟 𝐭𝐡𝐞 𝐖𝐨𝐫𝐥𝐝—𝐎𝐫 𝐑𝐞𝐝𝐮𝐜𝐞 𝐈𝐭 𝐭𝐨 𝐃𝐚𝐭𝐚 𝐏𝐨𝐢𝐧𝐭𝐬 𝐚𝐧𝐝 𝐀𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦𝐬

AI is transforming how we perceive and interact with the world, offering new insights and deeper understanding. But as we increasingly rely on AI, we must ask ourselves:

🧠 𝐀𝐫𝐞 𝐰𝐞 𝐠𝐚𝐢𝐧𝐢𝐧𝐠 𝐰𝐢𝐬𝐝𝐨𝐦 𝐨𝐫 𝐣𝐮𝐬𝐭 𝐦𝐨𝐫𝐞 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧? While AI can process vast amounts of data, how do we ensure that it helps us see the bigger picture, rather than just breaking the world down into data points?

⚖️ 𝐂𝐚𝐧 𝐀𝐈 𝐜𝐚𝐩𝐭𝐮𝐫𝐞 𝐭𝐡𝐞 𝐜𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐢𝐞𝐬 𝐨𝐟 𝐡𝐮𝐦𝐚𝐧 𝐞𝐱𝐩𝐞𝐫𝐢𝐞𝐧𝐜𝐞? As AI models attempt to understand and predict human behavior, are we at risk of oversimplifying the rich, nuanced nature of life into algorithms?

🌍 𝐖𝐡𝐚𝐭 𝐡𝐚𝐩𝐩𝐞𝐧𝐬 𝐭𝐨 𝐭𝐡𝐞 𝐡𝐮𝐦𝐚𝐧 𝐭𝐨𝐮𝐜𝐡? In a world increasingly driven by AI, how do we preserve the empathy, intuition, and creativity that make our understanding of the world uniquely human?

AI has the power to deepen our understanding, but it also risks flattening the complexities of life into mere numbers. The challenge lies in using AI to enhance our wisdom, not replace it with cold calculations.

💬 𝐋𝐞𝐭'𝐬 𝐝𝐢𝐬𝐜𝐮𝐬𝐬: How do you see AI shaping our understanding of the world? Can it truly elevate our perception, or are we at risk of reducing everything to data points?

#ArtificialIntelligence #Understanding #HumanExperience #TechEthics #Innovation
-
𝐖𝐡𝐚𝐭 𝐢𝐬 𝐄𝐱𝐩𝐥𝐚𝐢𝐧𝐚𝐛𝐥𝐞 𝐀𝐈 (𝐗𝐀𝐈)?
- It's like a guide that helps people understand why AI algorithms make certain decisions.
- It makes AI less like a mystery and more like something we can trust and rely on.

🤔 𝐖𝐡𝐲 𝐃𝐨 𝐖𝐞 𝐍𝐞𝐞𝐝 𝐗𝐀𝐈?
- Without XAI, AI can be risky and hard to trust.
- XAI helps reduce these risks and makes AI decisions clearer.
- It also helps us make better decisions based on AI results.

✔️ 𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬 of XAI:
- Reduces risks and liabilities: By showing how AI works, it helps avoid mistakes and legal issues.
- Increases trust and acceptance: When people understand AI, they're more likely to trust it and use it.
- Improves decision-making: Knowing why AI makes certain decisions helps us make smarter choices.

❌ 𝐂𝐮𝐫𝐫𝐞𝐧𝐭 𝐋𝐢𝐦𝐢𝐭𝐚𝐭𝐢𝐨𝐧𝐬:
- Computational complexity: Sometimes XAI can be slow or complex, making it hard to explain AI decisions in real time.
- Limited scope: XAI may only work well in certain areas and needs more research to be useful everywhere.
- Lack of standardization: There's no set way for XAI to work across different systems, making it harder to use in different places.

𝐂𝐨𝐦𝐩𝐚𝐧𝐢𝐞𝐬 𝐔𝐬𝐢𝐧𝐠 𝐗𝐀𝐈:
- Healthcare providers: They use XAI to understand medical data better, like reading scans.
- Financial institutions: XAI helps them catch fraud by explaining why certain transactions might be suspicious.
- Many other industries are also starting to use XAI to make better decisions and build trust in AI systems.

♻ 𝐑𝐞𝐩𝐨𝐬𝐭 𝐢𝐟 𝐲𝐨𝐮 𝐟𝐢𝐧𝐝 𝐢𝐭 𝐯𝐚𝐥𝐮𝐚𝐛𝐥𝐞!
👉 Follow Isha Rani ( https://lnkd.in/gq4ph2kP ) for more insights and Hit 🔔 on my profile to get notifications for all my new posts.

#machinelearning #ai #artificialintelligence #data #datascience
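To make the idea concrete, here is a minimal sketch of one widely used explanation technique, permutation importance, using scikit-learn on a bundled dataset. The choice of method and library is my illustration; the post itself does not name a specific XAI technique.

```python
# Permutation importance: shuffle each feature in turn and measure how much
# test accuracy drops; a large drop means the model leans heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The output is a ranked answer to "why does this model decide the way it does?" at the feature level, which is the kind of transparency the post describes.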
-
AI hallucinations are driven by AI bias, and there are certain clear ways that bias happens. Here are perhaps the most obvious ones that I, at least, have seen a number of times.

- In a lengthy text, contextual information and internal logic often go missing, leading to absurd results.
- Quantity of information, and describing things in more words than necessary, is sometimes treated as an indicator of quality. This also applies to beautifully written, evocative text - and may be one reason why AIs often rate AI-produced text as more credible than human-produced text.
- Overgeneralisation is a serious issue. Just because a piece of advice fits a business's reality 80% of the time doesn't mean it makes sense every time.
- AI might evaluate logic without evaluating factual evidence, thereby arriving at absurd results.
- Expert opinions often go unscrutinised, even though they might be wrong. What's occasionally worse, non-existent experts still pop up in AI evaluations.

And last but definitely not least, AIs are too often swayed by the user's insistence that they're wrong, or by the user simply questioning the validity of their results - especially when the issue is not something that is empirically true or false. AIs are getting better, slowly (by today's standards anyway), but they'll never be perfect in this sense.

What does this mean for us in practice? Here's my top four:
1. Ask specific, clear, factual questions in simple language.
2. Break big questions down into sub-questions and ask them in order.
3. Ask for evidence and sources in the form of website links.
4. Avoid hypotheticals (unless it's spitballing you want - AI is great for that, but you need to realise that's what you're doing).

Once again, it's a great tool. But make sure you don't let it do the thinking for you, and don't let anybody else do so either if you can help it.
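As an illustration of tips 2 and 3, here is a minimal sketch that asks sub-questions in order while carrying the conversation history forward, assuming the OpenAI Python client; the model name and the sub-questions themselves are hypothetical placeholders.

```python
# Tip 2 in practice (illustrative sketch): decompose one big question into
# ordered sub-questions, feeding each answer back in as context for the next.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

sub_questions = [  # hypothetical decomposition of one bigger question
    "Which industries report the largest productivity gains from AI in published studies?",
    "For the top industry you named, what evidence supports that estimate?",
    "List website links to the sources behind that evidence.",  # tip 3
]

history = []
for question in sub_questions:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # carry context forward
    print(f"Q: {question}\nA: {answer}\n")
```

Links returned this way still need manual verification - models can hallucinate sources too, which is exactly why tip 3 asks for them in checkable form.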
-
The role of AI in operational efficiency: Beyond the silver bullet

AI has transformational power, but for most enterprises, focusing on operational efficiency rather than miracles is far more valuable.

Artificial Intelligence (AI) has earned a reputation as a silver-bullet solution to a myriad of modern business challenges across industries. From improving diagnostic care to revolutionizing the customer experience, many industries and organizations have experienced the true transformational power of AI. However, that's not the case for the masses. Organizations that view AI as a fix-all are missing a huge opportunity - and are also likely to encounter significant challenges. When AI is applied in a way that overemphasizes its strengths and downplays its weaknesses, that's when we run into problems.

Utilizing our data engineers and scientists, we have been guiding our clients on their desired paths to ensure they meet their organizational goals. Please email me at [email protected] to further this conversation. Full article below: https://lnkd.in/gTd3KkUw
-
🚨 Warning: The Future of AI Control? 🚨

Did you know that Ilya Sutskever, Greg Brockman, Sam Altman, and Elon Musk have all sounded the alarm about the kind of work Google DeepMind's Demis Hassabis is pursuing? 🤖💥 As pioneers in AI research, they're warning us about the potential risks of creating Artificial General Intelligence (AGI) that could become uncontrollable or even lead to an "AGI dictatorship" - the possibility of an AGI system surpassing human control and decision-making capabilities, potentially leading to catastrophic consequences.

Here are 3 actionable insights to consider:

• Value Alignment is Key: Demis Hassabis' work at Google DeepMind has already achieved significant breakthroughs in complex tasks like Go and protein folding. However, concerns about the safety and control of these systems are growing. To mitigate this risk, we need to develop frameworks that align AGI goals with human values.

• Control Protocols Must be Robust: The latest trends in AI research and development focus on creating more sophisticated and efficient AGI systems. But without robust control protocols, these systems can become unmanageable. We need to implement safety measures that prioritize human oversight and intervention.

• Explainability is Crucial: As AGI systems become increasingly powerful, it's essential to understand how they make decisions. Techniques like model interpretability are necessary to ensure transparency and trust in AGI decision-making processes.

Actionable Tip: Start by implementing explainability techniques in your current AI projects. This will help you better understand the inner workings of your models and identify potential risks before they become major issues.

What do you think? Can we prevent an "AGI dictatorship" from occurring, or are we already too far down this path? Share your thoughts in the comments below! 💬

#AI #ArtificialIntelligence
I help Academia & Corporates through AI-powered Learning & Growth | Facilitator - Active Learning | Development & Performance Coach | Impactful eLearning
This post is so insightful, shedding light on AI hype. It's essential to distinguish genuine AI tools from 'snake oil.' The book sounds like a valuable resource for navigating AI. I appreciate the analogy to historical "snake oil" salesmen. It's crucial to stay informed and make wise decisions. I invite you to our community to grow together using AI: https://nas.io/ai-growthhackers/. LinkedIn group: https://www.linkedin.com/groups/14532352/