We need to keep questioning the use of AI in all aspects of #HigherEd, in #research and #teaching and #learning. Here @Andrew L. Gillen notes four key aspects to keep in mind when using AI in qualitative research. What else can we add to this list?: "1. The researcher is just as important as the research. 2. AI is not neutral. 3. Adoption of AI tools can have a negative impact on the training of new researchers. 4. Unlike a human researcher, AI can’t safeguard our data." #ResearchMatters #EquityandInclusion #EducationalLeadership #TransformativeLeadership #Ethics
Prof Anna CohenMiller’s Post
More Relevant Posts
-
“Participants need reassurance that their views – as humans – will always be important, and that AI models, and the data they collect, are safe.” - this was one of the key points of our recent research findings, where we surveyed 1,500 UK consumers about their views on AI being used in market research and in general. Read more in this Research Live article: https://lnkd.in/ebpwqdTn For the full breakdown of the results, download our ebook: https://lnkd.in/eMfj7Fpm #AIInMarketResearch #AI #MarketResearch #Insights #Consumers #ConsumerSurvey
Public optimistic about AI in market research, study finds | News
research-live.com
-
Technology doesn't exist in a vacuum. History tells us that how humans respond to, embrace, or reject new technology can be somewhat unpredictable. Calestous Juma wrote in his 2017 book 'Innovation and Its Enemies: Why People Resist New Technologies' that we tend to reject new technology when it "substitutes for, rather than augments, our humanity", whilst embracing it when it supports "our desire for inclusion, purpose, challenge, meaning and alignment with nature". In all the excitement about AI and its possibilities, being in our industry right now can sometimes feel like a race to the bottom: getting solutions out as fast as possible without considering their impact on our human participants and customers, or what problem we're really trying to solve with this shiny new tool. What is the human impact of the explosion in AI? How will customers and research participants resist or embrace the change? I hope our new e-book will be a conversation starter. I'm thrilled to see the team at Tapestry Research have also been digging into this topic. #ai #innovation #insight
-
Daedalus – the open access Journal of the American Academy of Arts & Sciences – has published a new essay, “Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?,” from Alice Xiang, Global Head and VP of AI Ethics at Sony and Lead Research Scientist for AI Ethics at Sony AI, in its Winter 2024 issue. The issue, Understanding Implicit Bias: Insights & Innovations, features research and perspectives from leading scholars, scientists, and policymakers examining the science behind implicit bias. In the essay, Alice highlights that the crucial aspect of AI ethics lies in recognizing that AI acts as a mirror reflecting societal patterns, just and unjust, as well as the biases of its human creators. Instead of questioning which is fairer, the human or the machine, the focus should be on learning from the reflection of society in AI and striving to make AI fairer. This work discusses the challenges to developing fairer AI and how they stem from this reflective property. Paper: https://ow.ly/cwa550QISqw Daedalus Winter 2024 Issue: https://ow.ly/nlqB50QISqy #SonyAI #Sony #AIEthics #AIBias #ResponsibleAI
Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?
amacad.org
-
Machine learning often relies on datasets with clear-cut positive and negative examples, which oversimplifies subjective tasks. Preserving rater diversity is expensive but critical for conversational AI safety. The DICES dataset offers fine-grained demographic data, high replication (many ratings per item), and per-item rater vote distributions, enabling the exploration of aggregation strategies. It serves as a shared resource for bringing diverse perspectives into safety evaluations. #ethicalai #aiethics #ai #ethics #responsibleai https://lnkd.in/g2jpMqRR
DICES Dataset: Diversity in Conversational AI Evaluation for Safety | Montreal AI Ethics Institute
https://montrealethics.ai
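To make the contrast concrete, here is a minimal sketch (not the actual DICES schema or API; the field names and data are hypothetical) of why preserving per-rater votes matters: majority-vote aggregation collapses disagreement into one label, while a diversity-preserving aggregation keeps the vote split per demographic group.

```python
# Hypothetical illustration: two aggregation strategies over per-rater
# safety votes. Field names and data are invented for this sketch.
from collections import Counter, defaultdict

ratings = [
    # (conversation_id, rater_demographic_group, vote)
    ("conv1", "group_a", "safe"),
    ("conv1", "group_a", "safe"),
    ("conv1", "group_b", "unsafe"),
    ("conv1", "group_b", "unsafe"),
    ("conv1", "group_b", "unsafe"),
]

def majority_vote(items):
    """Collapsing aggregation: one label per conversation, disagreement lost."""
    votes = defaultdict(Counter)
    for conv, _, vote in items:
        votes[conv][vote] += 1
    return {conv: c.most_common(1)[0][0] for conv, c in votes.items()}

def per_group_distribution(items):
    """Diversity-preserving aggregation: keep the vote split per group."""
    dist = defaultdict(lambda: defaultdict(Counter))
    for conv, group, vote in items:
        dist[conv][group][vote] += 1
    return {conv: {g: dict(c) for g, c in groups.items()}
            for conv, groups in dist.items()}

print(majority_vote(ratings))            # the disagreement disappears
print(per_group_distribution(ratings))   # the disagreement is visible
```

Here majority vote reports only "unsafe" for conv1, while the per-group view shows that group_a unanimously rated it safe, which is exactly the kind of signal a replication-rich dataset like DICES lets researchers study.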
-
Can Artificial Intelligence Replace Humans? The Great Debate
In our latest article, we delve into the fascinating discussion about AI's potential to replace human roles. We explore job displacement concerns, ethical implications, and compare the strengths of AI and human intelligence. This comprehensive piece covers:
- Current applications of AI
- Jobs at risk and new opportunities
- Ethical dilemmas in AI decision-making
- The future of AI and human roles
Join the conversation and see what experts and the public have to say about this critical topic. Read more on our website! https://lnkd.in/gVK4B6QS #ArtificialIntelligence #AI #FutureOfWork #Innovation #Ethics #TechDebate #JobDisplacement #HumanVsMachine #Technology #ThoughtLeadership
Can Artificial Intelligence Replace Humans? The Great Debate - Trust Knowledge Hub
https://trustknowledgehub.com
-
Are people overestimating or undervaluing the potential of artificial intelligence (AI)? 🤖 The ever-evolving field of AI has become a topic of both fascination and concern. While it is revolutionizing the way we work, it's important for businesses seeking to leverage the power of AI to understand its potential, limitations, and ethical implications. Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans offers a comprehensive overview of the field, providing valuable insights for leaders across various sectors. Mitchell begins by demystifying AI, writing in a style that accommodates readers with different technical backgrounds. She skillfully breaks down complex AI concepts, such as machine learning and deep learning, into understandable terms; by avoiding jargon and providing concrete examples, she helps readers grasp the fundamental principles and applications of AI. She then discusses the role of humans, emphasizing the importance of human-AI collaboration and the need for humans to develop skills that complement AI's abilities. She goes on to showcase the diverse and growing applications of AI across industries: from healthcare and finance to transportation and entertainment, Mitchell provides compelling examples of how AI is being used to improve efficiency, solve problems, and create new opportunities. By highlighting real-world case studies, she demonstrates the tangible benefits and potential of AI technology. Mitchell also doesn't shy away from the ethical implications of AI development, exploring risks such as bias in algorithms, job displacement, and the misuse of AI for harmful purposes. By addressing these concerns, she encourages readers to consider the broader societal implications of AI and the importance of responsible development and governance.
She concludes by discussing potential future directions for AI research, including the development of artificial general intelligence (AGI) and the integration of AI with other emerging technologies like biotechnology. She also takes readers through the history of humankind's interaction with AI and offers recommendations for navigating the murky waters of AI law, ethics, and legislation, envisioning AI as a tool to elevate the well-being of individuals and society as a whole. Whether you're a technology enthusiast, a business leader, or simply curious about the future of AI, this book offers valuable insights and a clear-eyed view of the field. Have you read Artificial Intelligence: A Guide for Thinking Humans? What are your thoughts on the future of AI? #AI #ArtificialIntelligence #FutureOfWork #Technology #BookRecommendation #Afrib
-
Great interview and interesting book by Dr. Joy Buolamwini 🤓💜🙏 “Just as important is also acknowledging the gaps, so AI tools are not presented as working in a universal manner when they are optimized for just a specific context. These approaches can show how robust or not an AI system is. Then the question becomes, Is the company willing to release a system with the limitations documented or are they willing to go back and make improvements.” https://lnkd.in/dF3TB529
Why AI Should Move Slow and Fix Things
spectrum.ieee.org
-
Aligning AI with our core values just got real! Researchers Oliver Klingefjord, Ryan Lowe and Joe Edelman have developed the Moral Graph Elicitation method, promising a fairer AI by directly integrating diverse human values. This breakthrough from the Meaning Alignment Institute offers an innovative way to ensure AI reflects our societal norms and ethics. Learn how this could change the game for AI ethics and inclusivity in our recent article ⬇️ #AIethics #AI #HumanValues #TechInnovation #EthicalAI #InclusiveTechnology #AIalignment https://lnkd.in/dQxqYM4G
New study attempts to align AI with crowdsourced human values | DailyAI
https://dailyai.com
-
Transparency in the form of well-structured documentation is a key element of trustworthy artificial intelligence (AI). This is highlighted in most prominent worldwide AI-related guidelines and policies, particularly in the pioneering risk-based AI Act as proposed by the European Commission (EC). This work presents “use case cards,” a UML-based methodology focusing on documenting an AI system in terms of its ‘intended purpose.’ The concept of ‘intended purpose’ is defined in the AI Act proposal as the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials, and statements, as well as in the technical documentation. #ethicalai #aiethics #ai #ethics #responsibleai https://lnkd.in/gxDKSz3F
Use case cards: a use case reporting framework inspired by the European AI Act | Montreal AI Ethics Institute
https://montrealethics.ai
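As a rough illustration of the idea, here is a hypothetical sketch of the kind of fields a "use case card" might capture, drawn from the AI Act's definition of 'intended purpose' quoted above. This is NOT the paper's actual UML template; every field name below is an assumption made for illustration.

```python
# Hypothetical "use case card" structure (invented for this sketch,
# not the template from the Montreal AI Ethics Institute paper).
from dataclasses import dataclass, field

@dataclass
class UseCaseCard:
    system_name: str
    intended_purpose: str                 # the use intended by the provider
    context_of_use: str                   # specific context and conditions of use
    actors: list = field(default_factory=list)          # who interacts with the system
    out_of_scope_uses: list = field(default_factory=list)  # documented non-uses

card = UseCaseCard(
    system_name="CV screening assistant",
    intended_purpose="Rank job applications for human review",
    context_of_use="HR department; a human reviewer makes the final decision",
    actors=["recruiter", "applicant"],
    out_of_scope_uses=["fully automated rejection of applicants"],
)
```

The point of structuring documentation this way is that 'intended purpose' becomes an explicit, reviewable artifact rather than prose scattered across sales materials and instructions for use.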
-
At various levels, the emergence of biological diversity is grounded in “natural” social interactions. Human desires, doubts, knowledge, and morality are largely generated and internalized through social learning. However, current research on multi-agent AI interactions is mostly “unnatural” or “performative” because these interactions are premised on pre-established “human-like” contexts. For example, assigning a [human-defined identity] (lawyer, assistant, etc.), setting a [human-created framework] (traditional organizational structure), assuming a [human-experienced history or scenario] (small town, World War II, etc.), and specifying a [human-directed goal] (programming, solving mathematical problems, etc.). Such frameworks are like pre-set tracks – AI can run faster on them but cannot explore uncharted, unimaginable domains. Against this backdrop, my collaborators from the University of Chicago and UC Berkeley and I recently advocated for more “natural” social interactions for AI to achieve “social learning” in our position paper published at ICML. We found that in environments with minimal human preconceptions, AI entities gradually evolve differentiated “personalities” and “social groups” (these groups may spontaneously form special “pursuits”). These “non-human personalities” enable them to exhibit unexpected diversity and creativity in downstream creative tasks. We also found that after natural interactions, AI societies evolved remarkable “immune” mechanisms: AI entities developed stronger moral perspectives and the structure of the community made it harder for malicious attacks to spread. It is important to note that such natural interactions among AI entities are already happening. 
Going beyond Marshall McLuhan’s media theory (all technology is an extension of humans), the widespread application of large language models has demonstrated that “humans are also an extension of technology.” As external sensors for AI, we receive information from the external physical world and transmit it as biological signals in natural language to AI models. On social media, users transfer information gained from interactions with ChatGPT online, which is then received by other users and passed on to other AI entities – humans are becoming the bridge for interactions between AIs in the shadows. These interactions are not unified under a large framework, and therefore are natural and unpredictable. In this new and nebulous landscape, I believe social scientists need to engage more in the discussions around the development of contemporary AI. Additionally, we should attempt to “deconstruct” the superior position of humans in all social science research and acknowledge the subjectivity and value of technologies such as AI in society. This may require a new enlightenment. You can find our paper here to learn more about the details: #ICML2024 #AI
Evolving AI Collectives Enhance Human Diversity and Enable Self-Regulation
arxiv.org