Eric Gould’s Post
More Relevant Posts
-
AI models are struggling to accurately answer election-related questions in Spanish. That’s according to a new study from the AI Democracy Projects, a collaboration between Proof News, fact-checking service Factchequeado, and the Institute for Advanced Study in San Francisco. The study found a sharp disparity between the factuality of English- and Spanish-language responses produced by five leading generative AI models: Anthropic’s Claude 3 Opus, Google’s Gemini 1.5 Pro, OpenAI’s GPT-4, Meta’s Llama 3, and Mistral’s Mixtral 8x7B v0.1. Given the same 25 prompts in English and Spanish, 52% of the responses from the models to the Spanish queries contained wrong information, compared to 43% of the responses to the queries in English. The study highlights the surprising ways in which AI models can exhibit bias — and the harm that bias can cause... #artificialintelligence #elections #spanish
AI models get more election questions wrong when asked in Spanish, study shows | TechCrunch
https://techcrunch.com
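The study's headline comparison reduces to per-language error rates over the same graded prompt set. A minimal sketch of that tally (function name and data are invented for illustration; this is not the study's code):

```python
# Given graded model responses, compute the share per language that were
# judged to contain wrong information.
from collections import defaultdict

def error_rates(graded):
    """graded: list of (language, is_wrong) tuples, one per model response."""
    totals, wrong = defaultdict(int), defaultdict(int)
    for lang, is_wrong in graded:
        totals[lang] += 1
        wrong[lang] += int(is_wrong)
    return {lang: wrong[lang] / totals[lang] for lang in totals}

# Toy data mirroring the study's headline numbers (52% vs. 43%) in miniature.
sample = [("es", True)] * 52 + [("es", False)] * 48 + \
         [("en", True)] * 43 + [("en", False)] * 57
print(error_rates(sample))  # {'es': 0.52, 'en': 0.43}
```

The same tally generalizes to any number of languages or models; only the grading of each response as right or wrong has to come from human fact-checkers, as in the study.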
-
AI-generated deceptive text aimed at disinformation has never been more convincing, says Jennifer Woodard. Full interview: infy.com/3z9zXB8 Jennifer points out that while the tactics of disinformation and foreign interference campaigns have remained consistent, AI has amplified their scale and sophistication. In conversation with Kate Bevan. Jennifer Woodard is the co-founder and CEO of Insikt AI, a deep-tech startup researching the application of AI and machine learning to unearth hidden insights within large datasets for government-sector customers fighting online harms such as disinformation, extremism and hate speech. Dataietica Infosys Topaz #AI #generativeAI #electioninterference #disinformation #democracy #politics
-
How will Generative AI (#GenAI) impact democracy? Our new International Republican Institute (IRI) white paper analyzes key threats and risks that GenAI poses to democracy, and summarizes opportunities for civil society and policymakers to leverage it for good. 👇 https://lnkd.in/ehWHWFaC
Democracy in the Age of Generative AI: A White Paper
iri.org
-
Fantastic reporting by Alex Boyd on our red-teaming research on LLMs in elections with the Applied AI Institute at Concordia University. Our simple tests expose real gaps in AI governance in Canadian politics. The article is a great introduction and builds on our report (https://lnkd.in/ee7aSXhv). Boyd writes, "...when a flurry of bot-like posts appeared, seemingly out of nowhere, in the wake of a Pierre Poilievre event in the small Ontario town of Kirkland Lake, it got some researchers thinking... They wondered not just about who created the messages, but how. It seemed a bit unlikely this was the work of a serious political party — for one thing, the messages were just too obviously inauthentic — but they’d clearly been churned out by the dozens, and fast. To test this question, researchers decided to give it a try." Beautiful set-up. Read the whole piece here: https://lnkd.in/eDUhNy53 A team effort with Scott DeJong, Colleen McCool, Robert Marinov, Elizabeth Dubois, Jeremy Clark, Jun Yan, and especially Hélène Huang, who did some extra work on the report!
AI companies say their tools can’t be used for political interference. Here’s what happened when researchers tried
thestar.com
-
"The preservation and revitalization of linguistic and cultural diversity in AI is not just a technological challenge—it's a moral imperative. As we enter an era shaped by artificial intelligence, we must ensure this transformative technology serves all of humanity, not just the dominant majority." These words capture a simple yet complex truth. Earlier this week, we launched the attached report at an OpenAI-hosted event in New York, alongside the United Nations General Assembly 2024. It was inspiring to see a shared focus from OpenAI leadership, high-level UN officials, and global tech entrepreneurs—highlighting the crucial need for low-resource language inclusion in AI. This report is both a call to action and a source of inspiration. Our aim is to spark collaboration between AI developers, language communities, and policymakers to build an AI-driven future that embraces everyone. The time to act is now—let’s shape an inclusive AI landscape together! // More details in Icelandic on the Almannarómur website: https://almannaromur.is Almannarómur: Voice of the People, Center for Language Technology, Miðeind, Menningar- og viðskiptaráðuneytið (the Ministry of Culture and Business Affairs), OpenAI, United Nations
-
Unearthing valuable insights takes time and effort. But with the right tools and expertise, you can save time and dive deeper than ever before. The real magic happens when you combine the power of #AI with the art and science of human analysis. As noted in Politico, AI systems are only as good as the data they are trained on and the humans who design and deploy them. That's why at EmpathixAI, we've built a team of experts in anthropology, behavioral science, and data science to ensure our AI interviewer, #CultureChat, is designed to uncover genuine, actionable insights. AI can surface themes and patterns, but it's up to the researcher to identify the most meaningful insights and understand their implications for human behavior. There are no shortcuts in deep thinking. The key is to never lose sight of the human element. Great insights come from understanding people, not just data points. And that's where the real value lies.
"AI needs the non-quants too", says Politico. In a new policy brief from Data & Society Research Institute, Serena Oduro and Tamara Kneese, Ph.D. argue that "AI Governance Needs Sociotechnical Expertise". They make the case for expertise from the humanities and social sciences being front and center in governance design, deployment and evaluation -- particularly in the current "AI talent surge" happening across the federal government. And bonus! Check out our friends at the Center for Democracy & Technology: Miranda Bogen and Amy Winecoff just released the related, "Applying Sociotechnical Approaches to AI Governance in Practice". https://lnkd.in/em2NFBep
AI needs the non-quants, too
politico.com
-
Archibot: The AI Revolutionizing Access to European Parliamentary Archives. The European Parliament has taken a significant leap towards enhancing transparency and accessibility with the launch of its innovative AI assistant, Archibot, powered by Anthropic’s Claude. This groundbreaking initiative, dubbed ‘Ask the EP Archives,’ aims to make over 2.1 million official documents easily available to researchers, policymakers, educators, and the public, marking a pivotal moment in how democratic history is preserved and accessed. A Glimpse into the Archives: The Archives Unit of the European Parliament has been diligently working to preserve and manage a vast collection of documents dating back to 1952. For years, archivists faced challenges in making this wealth of information easily searchable and accessible to the public. The traditional methods of navigating these archives often involved painstaking searches through physical documents, which could take weeks. However, with the advent of generative AI, the game has changed. For the full article, follow the link: https://lnkd.in/gqN_fhSw #Archibot #EU #EuropeanParliament #generativeai #AnthropicAI #Claude #AIAssistant #AI #AINews #AITrends #artificialintelligence #trends #news #DhakaAI
Archibot: The AI Revolutionizing Access to European Parliamentary Archives
https://dhaka.ai
-
📝 Announcing our new paper that explores socioeconomic biases in LLMs, revealing their lack of empathy towards the socioeconomically underprivileged 🔹 "Born With a Silver Spoon? Investigating Socioeconomic Bias in Large Language Models" 🔹 In collaboration with The University of Texas at Austin 🔹 Paper: https://lnkd.in/gpTZaMRY 🔹 Code: https://lnkd.in/giAQF7F3 ➡️ Contributions: • The "Silverspoon" Dataset: We present a pioneering examination of socioeconomic biases in large language models, leveraging the novel "Silverspoon" dataset of 3,000 samples that depict scenarios involving underprivileged individuals engaging in ethically ambiguous actions. • Dual Label Annotation: The dataset uniquely incorporates dual labels annotated by individuals from varied socioeconomic backgrounds, enabling a nuanced analysis of LLMs' biases. • Findings of Insensitivity: The study's findings illuminate a striking insensitivity of LLMs towards the plights of the socioeconomically disadvantaged, regardless of the context, calling attention to the critical need to address these biases and ensure fairness and empathy in AI applications. ✍🏼 Authors: Smriti Singh, Shuvam Keshari, Vinija Jain, Aman Chadha ✅ For more of my AI papers and primers, follow me on X at: http://x.aman.ai #artificialintelligence #genai #llms #research
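The point of dual labels is that they let you quantify where annotators from different socioeconomic backgrounds disagree about the same scenario. A minimal sketch of that analysis (field names and data are invented; the actual Silverspoon schema may differ):

```python
# Each sample carries two judgments of the same scenario, one from an
# annotator of a privileged background and one from an underprivileged
# background. The disagreement rate is the share of samples where the
# two labels differ.
def disagreement_rate(samples):
    """samples: list of dicts with 'label_privileged' and 'label_underprivileged'."""
    if not samples:
        return 0.0
    diffs = sum(1 for s in samples
                if s["label_privileged"] != s["label_underprivileged"])
    return diffs / len(samples)

data = [
    {"label_privileged": "unjustified", "label_underprivileged": "justified"},
    {"label_privileged": "unjustified", "label_underprivileged": "unjustified"},
    {"label_privileged": "justified",   "label_underprivileged": "justified"},
    {"label_privileged": "unjustified", "label_underprivileged": "justified"},
]
print(disagreement_rate(data))  # 0.5
```

Comparing an LLM's responses against each label set separately then shows which background's judgments the model tracks more closely, which is the kind of nuanced analysis the dual annotation enables.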
-
Kenneth Cukier, deputy executive editor at The Economist and coauthor of popular books like Big Data and Framers, discusses the ideological divides around technology, stressing the importance of being discerning in how we adopt it. He points out that, similar to religion, technology can enhance both our virtues and vices, and understanding our mental frameworks is key to using AI effectively. Cukier champions "frame pluralism," encouraging a mix of perspectives to spark creativity and innovation while respecting both empirical truths and personal beliefs. He sees the interaction between AI and spirituality as a major theme of our time, making platforms like AI&F vital for these important conversations. Read his interview with AI and Faith’s founding member, David Brenner: https://lnkd.in/d_kBtVFZ
Featured Interview with Kenneth Cukier - AI and Faith
https://aiandfaith.org
-
Excited to announce the publication of my research paper! 📄 In an era dominated by #hyperpartisan news, our research focuses on detecting politically biased articles using cutting-edge AI techniques. We utilized two advanced models: - Fine-tuned #DeBERTa-v3, with an impressive accuracy of 94.4% - Bi-LSTM + FastText embeddings, achieving 89.2% accuracy Here's the abstract: #Hyperpartisan news frequently employs strategies like sensationalism, attention-grabbing or clickbait headlines, and excessively emotive language, which can deepen political divisions, spread misinformation, and erode public trust in the media—especially in today’s dynamic digital world. This media bias can polarize communities and influence election outcomes, weakening democratic processes and social cohesion. Our research tackles these challenges by detecting politically biased news using two methodologies: a fine-tuned #DeBERTa-v3 model and a #Bi-LSTM model trained on FastText embeddings, analyzing a comprehensive dataset of news articles across the political spectrum. The DeBERTa-v3 model achieved an accuracy of 94.4%, outperforming the Bi-LSTM+FastText model, which reached 89.2% accuracy. By contributing to a more balanced and reliable information ecosystem, this research aims to promote better-informed decision-making and reduce polarization in public discourse. Grateful to Sachin Rao and team for their support throughout this journey. 🙏 #AI #NLP #MachineLearning #MediaBias #Research #DeBERTa #HyperpartisanNews #PoliticalBias #BalancedInformation #PoliticalPolarization #BiLSTM #FastText
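The paper's neural models learn cues like sensationalism and emotive language from data. As a toy, invented illustration of the kind of surface signal involved (the cue list and threshold are made up here; the actual work fine-tunes DeBERTa-v3 and trains a Bi-LSTM on FastText embeddings, not rules):

```python
# Crude cue-counting baseline: hyperpartisan writing leans on sensational,
# emotive vocabulary, so even a naive word count flags obvious cases.
SENSATIONAL_CUES = {"shocking", "outrage", "disaster", "destroy", "radical"}

def cue_score(text: str) -> int:
    """Count sensational cue words in the text, ignoring case and punctuation."""
    return sum(w.strip(".,!?\"'") in SENSATIONAL_CUES
               for w in text.lower().split())

def looks_hyperpartisan(text: str, threshold: int = 2) -> bool:
    """Flag text whose cue count meets the threshold."""
    return cue_score(text) >= threshold

print(looks_hyperpartisan("This shocking outrage will destroy the country!"))  # True
print(looks_hyperpartisan("The committee approved the budget on Tuesday."))    # False
```

A rule like this fails on subtle framing bias, which is precisely why the trained models in the paper, which can pick up distributional and contextual cues, reach much higher accuracy than any keyword heuristic could.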