Lee Cesafsky’s Post

Lee Cesafsky

UX Researcher at Meta

Thank you Brian Miller for encouraging us to share stories of AI's emergent heteronormativity. My turn! I recently tried out an experimental GenAI image generator with a few peers. The conceit was that you feed it an image of your likeness, and it pops out 2 illustrations of you inside AI-imagined scenes. As it happens these peers were also LGBTQ+, but I was the only gender non-conforming one. My cis peers were reborn in goofy caricature as younger and heteronormatively "hotter" - but still recognizably themselves.

I posed more of a challenge for the AI. I can only conclude that it didn't know what to do with an 'enby': a non-binary person, often of ambiguous gender presentation. In the first scene I was caricatured as a young cis man on a motorcycle. Looking at this and my peers' outputs, it was clear that the model's logic followed an antiquated theory of binary gender: only men and women exist, and males look masculine and do man stuff, while females appear feminine and do girl stuff.

But it just got weirder with the second image, where I popped out as a ... grotesquely muscular TIGER?! My cis peers were never rendered in animal form. As The Atlantic put it in a recent article on AI's "hotness problem," "In the world of generated imagery, you're either drop-dead gorgeous or a wrinkled, bug-eyed freak." Or, I'd now add, you aren't human at all. 🐯😂🤷🙈🙅🤡🤡🤡🤡

Granted, I love tigers as much as the next queer, and it's low-key flattering (roar!), but hopefully we can see why all this is a problem. In 2024 it's not just 'us' who are going to be turned off by retrograde heteronormativity and assumptions of traditional binary gender in visual media. GenAI models will need to be much more socially attuned than this to stay relevant. As Brian notes, that attunement will require investment in inclusive research, diverse teams, and meaningful governance. #inclusivetech The Algorithmic Justice League #enbytigers EPIC People

Brian Miller

Experience Research, Consumer Insights & New Product Innovation

Just got hit in the face by an excellent example of the AI dynamic that Cindy Gallop has been sounding an alarm about — the tiny, non-representative group of people who have designed AI. I was providing some basic career advice yesterday around getting started in research to younger LGBT people — particularly gay men and women — and every single one of my messages and comments was deleted by an AI-driven moderator. The culprit? My comments were found to be “in violation of our comment policy” because they had the term “gay” in them.

AI has effectively written gay men and women out of active participation on a major platform because gay men and women had no significant influence on the decisions made to implement the tech. The same is true of women, people of color, and numerous other communities. Without a *major* push to replumb the infrastructure, companies that turn to AI as an intermediary will find their customers from these communities fleeing as fast as our feet can carry us.

If you’re in a decision-making role around AI, it’s vital for the health of your business that you put real governance — and inclusive research — at the heart of any AI strategy today, right now. That governance needs to include a zero-tolerance “pass-fail” model with extensive testing from diverse communities — and is best accomplished by ensuring you’ve got a diverse group of people driving your strategy for using this technology. Meanwhile, I’ve encouraged early-career gay men and women to engage on LinkedIn instead. #InclusiveTechnology

Maya Ninova, PhD

Social scientist | Research | Behavioural science

10mo

Unfortunately, tech development has always been, and still is, heteronormative. With the massive usage and promotion of AI, we face an even bigger challenge: that of normalizing, for the masses, well-known cognitive biases that many have fought against for decades by accumulating and sharing critical voices, voices that have had some impact and the potential to guide us toward a more inclusive and diverse society. During Covid I was scared by how fast some of our rights were flushed down the toilet, but right now I am terrified of the possible impact AI could have on very fragile but crucial issues such as sex, gender, and ethnicity, to mention a few. In the end, the current development of AI is no more than descriptive statistics: a very narrow AI, very, very far from what people imagine when they talk about AI.

Wow Lee, thank you so much for sharing this story and your experience. If you are interested, we hope you'll share your story with us at https://report.ajl.org/. We hear about all different kinds of AI harms, and the biases at the intersection of gender identity and AI are incredibly important for us to understand.

Brian Miller

Experience Research, Consumer Insights & New Product Innovation

10mo

Lee, that’s really just an awful experience. Thanks for sharing your story; it is all too common right now 😣
