Digital Trust and Safety Partnership’s Post

“The clear and present safety risks from generative AI are widely discussed. Deepfake images, now trivially easy to produce, are used to scam, deceive, and harass. But AI does not just enable bad actors; it’s also a critical tool for trust and safety teams as they fight harassment, scams, and other forms of abuse.” Read the latest op-ed by David Sullivan and Farzaneh Badiei, PhD, in Tech Policy Press on DTSP’s new report promoting best practices for incorporating AI and automation into trust and safety: https://lnkd.in/eSbE6SjE

Between Hype and Hesitancy: How AI Can Make Us Safer Online | TechPolicy.Press

Toby Shulruff

Researcher, Writer, and Facilitator at the Nexus of People, Technology, and Planetary Futures. PhD candidate in Human and Social Dimensions of Science and Technology at ASU.

AI as a term is so vague as to mean many things and nothing. Even if we take AI to mean massive amounts of data + algorithms + massive computing power, and even if we narrow it to Trust & Safety-relevant AI, the list is still long: filtering/detection, initial triage of user reports, language translation, data analysis for trends and reporting, plus generative AI for text (including chatbots), images/videos, audio, etc. That mix can't possibly be boiled down to all good or all bad. It's complicated.
