When working with LLMs, one of the major issues is false or incomplete answers, known as hallucination. Resolving these issues is difficult because of the non-deterministic nature of LLMs. In this article, researchers propose a way to reduce them. You can read the original paper at https://lnkd.in/eSyNyFPk or read the summary from DeepLearning.AI at https://lnkd.in/ekNEkgcJ.
-
Good morning all. As part of the Level 5 HND in Computing & Systems Development (Year 2), I have been conducting research into the topic of Big Data. My research question is: an investigation into the reliance on AI (artificial intelligence), specifically deep-learning technologies, focusing on the legal and ethical issues of storing and manipulating Big Data. I have therefore created a questionnaire to conduct primary research on the topic, and I would greatly appreciate it if you could fill it out, as it would aid my research project 🙏
An Investigation into the Reliance on AI (Artificial intelligence), specifically Deep-Learning Technologies, focusing on aspects of legal and ethical issues for the storage and manipulation of Big Data.
docs.google.com
-
The Sequence is a reliable source of technology information on innovations in Generative AI. In this post, they provide a summary of their 13 recent issues on LLM reasoning. A really nice reference resource. https://lnkd.in/gRHriPQx
Edge 379: A Summary Of Our Series About LLM Reasoning
thesequence.substack.com
-
"The authors of Feeding the Machine, James Muldoon, Mark Graham, and Callum Cant [...] describe A.I. as 'an extraction machine that feeds off humanity’s collective effort and intelligence, churning through ever-larger datasets to power its algorithms.' The purpose of their investigation was, 'to give voice to the people whom A.I. exploits, revealing how their dangerous, low-paid labor is connected to longer histories of gendered, racialized & colonial exploitation.'" I registered to the event and am looking forward to attending! See details below; it will take place on Thursday, December 19, 2024, 5:00pm - 6:00pm (Cambridge time, late in the night in Europe but a must): https://lnkd.in/d2E4vQEr One question by registering was the following one: "Do you feel more people should know the true human costs of A.I. before we proceed further?" If you have followed my posts on this platform and latest publications, you already know my answer. It's why I continue criticizing educational institutions and educators (just to name the education field but it applies to *all other fields and areas* possible) rushing to buy licenses and using such technologies *without* a second thought about the harms of those technologies *BEFORE* buying or using them. And the new colonial exploitation and low-paid labor are just two of the many issues!
FEEDING THE MACHINE - The Hidden Human Labor Powering AI
wgbh.org
-
H/T to Cory Doctorow on this one. A good piece on the flaws of relying on humans in the loop to oversee AI:

"Humans in the loop experience “a diminished sense of control, responsibility, and moral agency.” That means that they feel less able to override an algorithm — and they feel less morally culpable when they sit by and let the algorithm do its thing. All of these effects are persistent even when people know about them, are trained to avoid them, and are given explicit instructions to do so. Remember, the whole reason to introduce AI is because of human imperfection. Designing an AI to correct human imperfection that only works when its human overseer is perfect produces predictably bad outcomes. As Green writes, putting an AI in charge of a high-stakes decision, and using humans in the loop to prevent its harms, produces a “perverse effect”: “alleviating scrutiny of government algorithms without actually addressing the underlying concerns.” The human in the loop creates “a false sense of security” that sees algorithms deployed for high-stakes domains, and it shifts the responsibility for algorithmic failures to the human, creating what Dan Davies calls an “accountability sink”: https://lnkd.in/e5yTnYvZ The human in the loop is a false promise, a “salve that enables governments to obtain the benefits of algorithms without incurring the associated harms.”"

The book on accountability sinks that is really worth reading is "The Unaccountability Machine" by Dan Davies.
The Flaws of Policies Requiring Human Oversight of Government Algorithms
papers.ssrn.com
-
🚀 Google's Alternative to RAG: Retrieval Interleaved Generation (RIG)

Imagine if LLMs could answer real-time questions like:
1. What's the current population of NYC?
2. How many COVID-19 cases were reported last week?

Google’s new approach, Retrieval Interleaved Generation (RIG), integrates LLMs with Data Commons, an open-source public data repository, grounding LLMs in accurate, up-to-date information.

🔍 Researchers explored two methods:
• RIG: The LLM generates structured queries mid-generation to fetch data.
• RAG: Retrieved data enhances the LLM prompt before generation.

Both methods significantly improved accuracy!

Learn more on LLM Watch: https://lnkd.in/gNhBf6nr

#AI #RAG #RIG #Google #DataCommons #TechInnovation
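To make the contrast concrete, here is a minimal sketch in Python of the two flows. It is not Google's implementation: llm_generate and fetch_statistic are hypothetical stand-ins for the model and a Data Commons-style lookup, and the [DC(...)] marker format is only an illustration of how RIG interleaves structured queries into the generated text.

```python
# Conceptual sketch of RIG vs RAG. All functions are hypothetical stand-ins,
# not real APIs; the [DC(...)] marker syntax is illustrative only.
import re

def fetch_statistic(query: str) -> str:
    """Stand-in for a Data Commons-style lookup of a public statistic."""
    fake_store = {"population of New York City": "about 8.3 million (2022 estimate)"}
    return fake_store.get(query, "[no data found]")

def llm_generate(prompt: str) -> str:
    """Stand-in for the LLM.
    - RIG setting: the model is assumed to be fine-tuned to emit [DC(...)] markers
      instead of guessing numeric facts.
    - RAG setting: the model simply uses the retrieved context it was given."""
    if "[DC(" in prompt:
        return "New York City currently has [DC(population of New York City)] residents."
    if "Context:" in prompt:
        context = prompt.split("Context:")[1].split("\n")[0].strip()
        return f"New York City currently has {context} residents."
    return "New York City currently has roughly 8 million residents."  # ungrounded guess

def answer_with_rig(question: str) -> str:
    # RIG: generation is interleaved with structured data queries.
    draft = llm_generate(f"Use [DC(<query>)] markers for statistics. {question}")
    # Each marker is resolved against the data source and spliced back into the text.
    return re.sub(r"\[DC\((.+?)\)\]", lambda m: fetch_statistic(m.group(1)), draft)

def answer_with_rag(question: str) -> str:
    # RAG: retrieve evidence first, then prepend it to the prompt before generating.
    evidence = fetch_statistic("population of New York City")
    return llm_generate(f"Context: {evidence}\nQuestion: {question}")

if __name__ == "__main__":
    print(answer_with_rig("What's the current population of NYC?"))
    print(answer_with_rag("What's the current population of NYC?"))
```

The key design difference the sketch tries to show: in RIG the model itself decides mid-generation which statistics to look up, while in RAG the retrieval happens up front and the model only ever sees the evidence in its prompt.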
-
A new #plugin uses #AI to help us stay safe against disinformation produced by #GenerativeAI: The SkepticReader. It's free.

💬 Consider this plugin a ‘beta test’ in the wild— an open invitation to join us in the lab. Let’s poke at the problem with a stick to see what moves, and find out if we can invert the use of GEN-AI to help us ask better questions, instead of churning out tonnes of bullshit answers. 💬

https://lnkd.in/eDaX-uUh

Chris Moran Chris K.

---

☝️ I share insights and perspectives on the evolution & impact of generative AI and the developing challenges of distinguishing reality from artificiality. Follow me on LinkedIn to discover where we're at, where we're heading, and the measures we can take while observing the evolution of our technological landscape.
SkepticReader - Chrome plugin with real-time bias detection
https://www.skepticreader.domesticstreamers.com
-
One of the more interesting things about Elon Musk's new lawsuit against OpenAI and Sam Altman is that many of the allegations depend upon the question of whether the newest AI models have reached #AGI - artificial general intelligence. As is the American way, we will ask a jury of ordinary citizens to answer this question. If you are interested in this topic, consider joining us at the University of Missouri-Columbia School of Law on March 7-8 for an in-depth discussion of AI and Society, focusing on the role of government. https://lnkd.in/gbfTyCiQ
Truman School of Government and Public Affairs
truman.missouri.edu
-
Partnership on AI has joined the Center for Democracy & Technology and Mozilla, as well as civil society organizations and academics, in signing an open letter urging US Commerce Secretary Raimondo to protect openness and transparency in AI. https://lnkd.in/gyYT_DtW

Open foundation models can bring great benefits, including sharing the benefits of foundation models more widely, increasing transparency, and fostering research and innovation. Like closed foundation models, open models can present risks and need appropriate risk management practices tailored to those risks.

PAI’s Guidance for Safe Foundation Model Deployment provides recommendations on how to responsibly develop and deploy foundation models, with insights on risk management for both open and closed models. Learn more about this work: https://lnkd.in/gxVaEN6h
CDT and Mozilla Join Civil Society Orgs and Leading Academics to Urge the Secretary of Commerce to Protect AI Openness
https://cdt.org
-
#Claude: Everything you need to know about @AnthropicAI. Claude models make mistakes when summarizing or answering questions, and they are trained on public web data, some of which may be copyrighted or under a restrictive license. https://buff.ly/4eGjQdP #hallucinations #GenerativeAI
Claude: Everything you need to know about Anthropic's AI | TechCrunch
https://techcrunch.com
-
The Internet has been abuzz with one question: is Gemini too woke for its own good? Well—not trying to be wishy-washy—the answer is not as clear as those on the red or blue side of this debate would have you believe.

I’ve said it before: sociologists should be using LLMs to investigate the very biases that are embedded in the culture and in its byproducts. Gemini’s most recent gaffe both proves my point and serves as a cautionary reminder of the complexity of these systems and of the law of unintended consequences (well, I like it better as Murphy’s Law, but ok).

I suppose it’s in principle admirable that Google was attempting to ensure more representation in image generation when it decided to mess with prompt injection in response to users requesting images of people. Sure, it helps everyone when a request for images of doctors returns photos of people from many different backgrounds, so as not to perpetuate harmful stereotypes of different racial groups in different socio-economically ranked professions. Except the effects rippled far and wide. That goal, of course, is itself a perspective, and, à la the Winter Break Hypothesis, all our perspectives get encoded in LLMs—consciously or not. In effect, it attempted to fix bias with more bias, and that—I’m afraid—could never be the solution.

The resulting images of non-white Vikings or America’s founding fathers were but an obvious example of such a perspective sneaking in. Some saw it as a humorous blunder, others as the signal of a pernicious agenda. In reality, it’s just bias of a different nature, one reflected in the culture of 2024, and it, too, got jammed into the model. What’s worse, when dealing with matters of objective or historical truth, seemingly innocuous biases can turn into epistemological nightmares.

I think what’s most important to take away from this situation, however, is not just that we need to be aware of our biases when preparing training data, red teaming LLMs, training models, or setting policy guidelines. It’s that when we discover our biases reflected in the output of AI models—and we will, because biases are all around us—it’s an opportunity for us to surface them, notice them, call attention to them, and exhort ourselves to self-reflect, examine our behavior and assumptions, and—hopefully—make a change.

In other words, I want to see it all, the good, the bad, and the ugly, and be reminded that more than one way to look at the world exists. I certainly don’t want to see reality through a filter. The next time we spot our biases looking back at us from the output field of an LLM, it might not be so humorous or indicative of good intentions. We might not even know or notice. That’s when we will have relinquished the power to really *know*. How will we use this information to grow, instead?

#Gaingels #AI #ArtificialIntelligence #Gemini