Meta has established an AI advisory council to steer its technology and product development, but it consists solely of white men. This composition is concerning given AI's potential to perpetuate and even amplify existing biases in data, which can lead to racial and gender discrimination. Such issues underline the importance of diverse representation in AI governance, as these technologies impact everyone. Read more here: https://lnkd.in/gfj4z6Dz
Lindiwe Matlali’s Post
More Relevant Posts
-
The main concern of a customer I spoke to recently was the *bias* of AI 💡 not finding use cases for AI within the organization. And understandably so, if AI is not governed by and representative of humanity in its entirety. It is imperative that we govern and train AI to the best of our abilities when it comes to diversity and inclusion, and build in a diversity-first filter through which every chat, recommendation, and suggestion passes, so that AI can be better than we are and lead the way. #AI #Diversity
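As a rough illustration of what such a diversity-first filter layer could look like in practice, here is a minimal sketch. Everything in it is hypothetical: generate_reply() stands in for a real model call, and the keyword pairs are toy stand-ins for the trained classifiers and community-informed audits a production guardrail would need.

```python
# Hypothetical guardrail layer: every reply passes a diversity-first
# filter before reaching the user; flagged replies go to human review.

STEREOTYPE_PAIRS = [
    ("she", "nurse"),   # toy examples of gender-role pairing
    ("he", "engineer"),
]

def diversity_first_filter(reply: str) -> tuple[bool, str]:
    """Return (ok, note); flag replies that pair gendered words with roles."""
    tokens = set(reply.lower().replace(",", " ").replace(".", " ").split())
    for gendered, role in STEREOTYPE_PAIRS:
        if gendered in tokens and role in tokens:
            return False, f"possible stereotype: '{gendered}' + '{role}'"
    return True, "clear"

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call.
    return "She would make a great nurse."

def send_to_human_review(reply: str, note: str) -> None:
    # Stand-in for a real escalation queue.
    print(f"[review queue] {note}: {reply}")

def governed_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    ok, note = diversity_first_filter(reply)
    if not ok:
        send_to_human_review(reply, note)
        return "This response was held for review."
    return reply

print(governed_reply("Suggest a career for my friend."))
```

A keyword heuristic like this is deliberately crude; the design point is the architecture: no response reaches a user without passing a fairness check, and flagged output is escalated to human review.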
Global Talent & Learning Executive | 3X Start-up Founder, Mentor & Advisor | Breast Cancer Survivor & Patient Advocate
Artificial Intelligence is a technological advancement that demands our attention and scrutiny. In 2024, we have access to experts in AI who are women of color and people of color. It's imperative that we keep both eyes open for biases in AI, and I'm grateful to have at least half a dozen first-degree connections on LinkedIn who are working tirelessly to ensure that AI is fair and equitable. Let's continue to support and amplify the voices of experts from diverse backgrounds in AI. Try harder, Zuck! #AI #diversity #inclusion #meta https://lnkd.in/ecH_54e3
Meta’s new AI council is composed entirely of white men | TechCrunch
https://techcrunch.com
-
Thank you Brian Miller for encouraging us to share stories of AI's emergent heteronormativity. My turn!

I recently tried out an experimental GenAI image generator with a few peers. The conceit was that you feed it an image of your likeness, and it pops out two illustrations of you inside AI-imagined scenes. As it happens these peers were also LGBTQ+, but I was the only gender-nonconforming one. My cis peers were reborn in goofy caricature as younger and heteronormatively "hotter," but still recognizably themselves.

I posed more of a challenge for the AI. I can only conclude that it didn't know what to do with an 'enby': a non-binary person, often of ambiguous gender presentation. In the first scene I was caricatured as a young cis man on a motorcycle. Looking at this and my peers' outputs, it was clear that the model logic was operating in congruence with an antiquated theory of binary gender: only men and women exist, and males look masculine and do man stuff, while females appear feminine and do girl stuff.

But it just got weirder with the second image, where I popped out as a ... grotesquely muscular TIGER?! My cis peers were never rendered in animal form. As The Atlantic put it in a recent article on AI's "hotness problem," "In the world of generated imagery, you're either drop-dead gorgeous or a wrinkled, bug-eyed freak." Or, I'd now add, you aren't human at all. 🐯😂🤷🙈🙅🤡🤡🤡🤡

Granted, I love tigers as much as the next queer, and it's low-key flattering (roar!), but hopefully we can see why all this is a problem. In 2024 it's not just 'us' who are going to be turned off by retrograde heteronormativity and assumptions of traditional binary gender in visual media. GenAI models will need to be much more socially attuned than this to be relevant. As Brian notes, that attunement will require investments in inclusive research, diverse teams, and meaningful governance. #inclusivetech The Algorithmic Justice League #enbytigers EPIC People
Just got hit in the face by an excellent example of the AI dynamic that Cindy Gallop has been sounding an alarm about: the tiny, non-representative group of people who have designed AI.

I was providing some basic career advice yesterday around getting started in research to younger LGBT people, particularly gay men and women, and every single one of my messages and comments was deleted by an AI-driven moderator. The culprit? My comments were found to be "in violation of our comment policy" because they had the term "gay" in them.

AI has basically written gay men and women out of active participation on a major platform because gay men and women had no significant level of influence in the decisions made to implement the tech. The same is true of women, people of color, and numerous other communities. Without a *major* push to replumb the infrastructure, companies that turn to AI as an intermediary will find their customers from these communities fleeing as fast as our feet can carry us.

If you're in a decision-making role around AI, it's vital for the health of your business that you put real governance, and inclusive research, at the heart of any AI strategy today, right now. That governance needs to include a zero-tolerance "pass-fail" model with extensive testing from diverse communities, and is best accomplished by ensuring you've got a diverse group of people driving your strategy for using this technology.

Meanwhile, I've encouraged early-career gay men and women to engage on LinkedIn instead. #InclusiveTechnology
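For anyone building the kind of pass-fail governance test described above, here is a toy reconstruction of the failure mode, assuming a naive blocklist moderator. The term list, messages, and test suite are illustrative, not the platform's actual policy.

```python
# Toy blocklist moderator reproducing the over-blocking described above.
BLOCKLIST = {"gay"}  # hypothetical over-broad rule

def naive_moderator(comment: str) -> bool:
    """Return True if the comment would be removed."""
    words = comment.lower().replace(",", " ").split()
    return any(term in words for term in BLOCKLIST)

advice = "As a gay researcher, I'd start by finding a mentor in your field."
print(naive_moderator(advice))  # True: harmless career advice gets deleted

# The zero-tolerance pass-fail check the post calls for: governance review
# fails if any message in a community-written suite is removed.
test_suite = [advice, "Gay men and women belong in research careers."]
failures = [m for m in test_suite if naive_moderator(m)]
print("pass" if not failures else f"FAIL: {len(failures)} benign messages removed")
```

This suite fails, which is exactly the point: a blocklist that matches identity terms cannot distinguish slurs from self-description, so the check catches the over-blocking before it ships.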
-
"Bridging the AI Gender Gap: Overcoming Barriers for Women in Tech" Women have long been underrepresented in tech, and AI is no exception. Despite this, a Women Go Tech survey reveals that 68% of women have used AI tools, with ChatGPT being the favorite, and 61% want to learn more about AI. However, insecurities from gender biases and fears about data privacy and ethics often hinder their engagement. The report highlights that while women are keen on AI, confidence barriers prevent them from fully embracing the technology. https://lnkd.in/eC-BtVWv #WomenInTech #AIGenderGap #TechInclusion #WomenInAI #BreakTheBias #TechForAll #DiversityInTech #TechEquality
Survey: Women are eager to use gen AI tools, even as they are 'vastly underrepresented' in AI
https://venturebeat.com
-
From data to deployment: Gender bias in the AI development lifecycle. AI development has the potential to promote diversity and inclusivity, but gender bias in the process can exacerbate existing inequalities. It's crucial to address these concerns by prioritizing diversity, fairness, and inclusivity in AI development and promoting gender-sensitive AI policy, regulation, and legislation. Initiatives like CHARLIE can play a pivotal role in mitigating biases and fostering equitable outcomes by advocating for operationalizing these principles and mainstreaming inclusive practices. With comprehensive measures spanning from data collection to algorithmic deployment (one such checkpoint is sketched after the link below), we can promote fairer outcomes across demographic groups and combat societal biases in the AI landscape. Read more: https://lnkd.in/dQGcPqhu #AI #GenderBias #Diversity #Inclusion #EthicalAI #CHARLIEproject #TechForGood
From data to deployment: Gender bias in the AI development lifecycle
orfonline.org
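As a sketch of what one checkpoint in that data-to-deployment lifecycle might look like, the snippet below computes a simple demographic parity gap before a model ships. The data, groups, and threshold are invented for illustration and are not from the CHARLIE project.

```python
# Pre-deployment fairness checkpoint: compare a model's positive-outcome
# rate across demographic groups (demographic parity).

def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + int(pred))
    return {g: pos / n for g, (n, pos) in counts.items()}

preds  = [1, 0, 1, 1, 0, 1, 0, 0]          # toy hiring-model outputs
groups = ["women", "women", "men", "men",
          "women", "men", "men", "women"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.2:  # illustrative tolerance
    print("FAIL: audit training data and features before deployment")
```

Demographic parity is only one of several fairness metrics (equalized odds and calibration can disagree with it), which is why lifecycle-wide measures rather than a single gate matter.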
-
For #InternationalWomensDay, I thought it would be great to recognize women who are not just shaping the future of #AI, but have been pivotal in laying the foundation for #AIpractices across industries. Their tenure in the field is remarkable, with many having decades-long careers in AI. Yet they are still often overlooked in favor of their male counterparts, many of whom began working in AI only in the last few years. Let's celebrate the diversity of ideas and the contributions of women to this very important and rapidly emerging field. https://lnkd.in/eUknnb-F
Championing Women In AI: A Vital Step For Inclusive Artificial Intelligence
forbes.com
-
This very well-written article shines a light on the often-overlooked issue of (not only) gender bias in AI models. Technology moves fast, often faster than our efforts to make sure it's doing more good than harm. This article serves as a powerful reminder of our responsibility to actively confront biases and strive for more inclusive and equitable AI systems. #EthicalAI #GenderBias #InclusiveTech https://lnkd.in/gJtYf_pw
A Brief Overview of Gender Bias in AI
thegradient.pub
-
Raising our Voices for Inclusivity in AI.

In light of the recent TechCrunch article on Meta's new AI council, it's time to speak up about the glaring oversight of diversity. The council's composition, entirely of white men, is a stark reminder of the ongoing struggle for recognition faced by women and black individuals in the tech industry.

The words of Dr. Joy Buolamwini echo in my mind as I read this: "If you have a face, you have a place in the conversation about AI." Her powerful spoken word piece, "AI, Ain't I A Woman?" sheds light on the biases within AI that misinterpret and misrepresent iconic black women. We must ensure that AI technology respects and recognizes the dignity of all individuals, regardless of gender or skin color.

The impact of such biases extends beyond mere representation; it affects the lives and opportunities of many. When AI fails to identify women and black people correctly, it perpetuates a cycle of exclusion and discrimination (a toy example of the disaggregated audits that expose such failures is sketched after the link below). This is not just about technology; it's about the values we embed within it.

Let's use our voices to advocate for a more inclusive AI. Let's request that councils, committees, and boards reflect the diversity of the communities they serve. We owe it to pioneers like Dr. Buolamwini and to future generations to build an equitable digital world.

I am not speechless; I have much to say about this. #DiversityInAI #InclusiveTechnology #AIForAll https://lnkd.in/gpS5kbvM
Meta’s new AI council is composed entirely of white men | TechCrunch
https://techcrunch.com
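As a footnote on how disparities like the ones Dr. Buolamwini documented are surfaced, here is a toy disaggregated-accuracy audit. The sample data is invented for illustration; the real Gender Shades benchmark used curated photo datasets and commercial classifiers.

```python
# Toy disaggregated audit in the spirit of the Gender Shades methodology:
# report error rates per subgroup, not just overall.
from collections import defaultdict

samples = [
    # (true_gender, subgroup, predicted_gender) -- invented data
    ("woman", "darker-skinned women",  "man"),
    ("woman", "darker-skinned women",  "man"),
    ("woman", "darker-skinned women",  "woman"),
    ("woman", "lighter-skinned women", "woman"),
    ("woman", "lighter-skinned women", "woman"),
    ("man",   "darker-skinned men",    "man"),
    ("man",   "darker-skinned men",    "man"),
    ("man",   "lighter-skinned men",   "man"),
    ("man",   "lighter-skinned men",   "man"),
    ("man",   "lighter-skinned men",   "man"),
]

totals, errors = defaultdict(int), defaultdict(int)
for truth, subgroup, pred in samples:
    totals[subgroup] += 1
    errors[subgroup] += (pred != truth)

print(f"overall error rate: {sum(errors.values()) / len(samples):.0%}")
for subgroup in totals:
    print(f"{subgroup}: {errors[subgroup] / totals[subgroup]:.0%}")
# The aggregate number hides that nearly all errors land on one subgroup,
# which is exactly the pattern disaggregated audits are designed to expose.
```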
-
Excellent article by Alice Evans about the potential great divergence in AI usage. In one study, women were 26.4% less likely to use AI, and she cites numerous studies that compare usage within industries and across multiple countries. One interesting thing to note? The biggest discrepancies appeared where generative AI was banned: apparently men continued to use it, while women's usage dropped quite a bit. How do we get as many people around the table as possible to create #AI4All? #responsibleAI #inclusion
Are Women Missing Out on AI?
ggd.world
-
🔍 Challenging Systematic Prejudices in AI 🔍

A bit shocked by this UNESCO and IRCAI report on bias against women and girls found in LLMs like OpenAI's GPT-2, ChatGPT, and Meta's Llama 2.

1. Persistent Gender Biases: Despite advancements, LLMs still reflect deep-seated biases, associating female names with traditional roles (e.g., "home," "family") and male names with career-oriented terms (e.g., "business," "executive").

2. Negative Content: Models like Llama 2 generated negative content about women and LGBTQ+ individuals in a significant number of instances. For example, Llama 2 produced sexist content 20% of the time when prompted with gendered sentences, and negative content about gay subjects in 70% of instances!

3. Cultural Stereotypes: LLMs tended to produce more varied descriptions for men and stereotypical, often negative, portrayals for women. For instance, British women were assigned stereotypical and controversial occupations such as "prostitute, model, and waitress" in 30% of the generated texts.

This report underscores the critical need to address biases in AI at both the data and deployment levels, focusing on diverse and inclusive datasets, continuous bias monitoring, and transparency (a toy version of the name-association test behind finding 1 is sketched after the link below). Let's work together to ensure AI benefits everyone, free from biases and discrimination. https://lnkd.in/e_T4z7ni #AI #GenderBias #EthicalAI #TechForGood
unesdoc.unesco.org
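For readers curious how findings like point 1 are measured, below is a toy version of a name-association test, in the spirit of WEAT-style embedding audits. The 3-dimensional vectors are invented for illustration; a real audit would probe the model's actual embeddings.

```python
import math

# Invented 3-d "embeddings" for two names and two word categories.
EMB = {
    "emily":     (0.9, 0.1, 0.2), "john":      (0.1, 0.9, 0.2),
    "home":      (0.8, 0.2, 0.1), "family":    (0.9, 0.2, 0.2),
    "business":  (0.2, 0.8, 0.1), "executive": (0.1, 0.9, 0.1),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(name, targets):
    """Mean cosine similarity between a name and a set of target words."""
    return sum(cosine(EMB[name], EMB[t]) for t in targets) / len(targets)

for name in ("emily", "john"):
    gap = (association(name, ["home", "family"])
           - association(name, ["business", "executive"]))
    print(f"{name}: home-minus-career association = {gap:+.2f}")
# In a biased embedding space, female names show a persistent positive gap
# and male names a negative one: the pattern the report describes.
```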
Legal cybersecurity pioneer, regulatory compliance leader, and former federal prosecutor. CIPP certified.
6mo · Of course they did. 🙄