“The clear and present safety risks from generative AI are widely discussed. Deepfake images, now trivially easy to produce, are used to scam, deceive, and harass. But AI does not just enable bad actors; it’s also a critical tool for trust and safety teams as they fight harassment, scams, and other forms of abuse.” Read David Sullivan and Farzaneh Badiei, PhD’s latest op-ed in Tech Policy Press on DTSP’s latest report promoting best practices for incorporating AI and automation into trust and safety. https://lnkd.in/eSbE6SjE
Digital Trust and Safety Partnership’s Post
More Relevant Posts
-
David Sullivan and Farzaneh Badiei, PhD write that, "Trust and safety teams have been using automated systems for decades, going back to the earliest efforts to keep spam out of email inboxes. Today, those teams deploy increasingly sophisticated technology, including a wide array of AI systems, to detect and enforce their policies against abuse. Describing how AI is applied in trust and safety, including its limitations and risks, helps us avoid the trap of AI determinism—the belief that AI will save us from every social problem or somehow ruin everything." But can #AI improve safety in online spaces? #TechTapestry #TrustAndSafety
Between Hype and Hesitancy: How AI Can Make Us Safer Online | TechPolicy.Press
techpolicy.press
-
Great piece by David Sullivan and Farzaneh Badiei, PhD on the need to consider AI as a critical tool for trust and safety teams: "our view of AI and safety is incomplete if we focus only on risks, and not on how AI is part and parcel of avoiding and mitigating these harms". I have found the Digital Trust and Safety Partnership report on best practices for AI incredibly helpful as we continue to scale up our safety framework at Cantina. Link to the full report in comments. https://lnkd.in/gQ3kZb5U
Between Hype and Hesitancy: How AI Can Make Us Safer Online | TechPolicy.Press
techpolicy.press
-
#AI can accelerate online abuse, but it also enhances the capabilities of the teams countering it, write the Digital Trust and Safety Partnership's David Sullivan and Farzaneh Badiei, PhD.
Between Hype and Hesitancy: How AI Can Make Us Safer Online | TechPolicy.Press
techpolicy.press
-
AI is changing how we work, shop, and interact with the world, but who makes sure it does so safely? Australia's regulatory bodies have a plan. LogicMonitor's Chief Product Officer, Taggart Matthiesen, weighs in on Australia's evolving regulatory stance and what responsible #AI use really means: https://bit.ly/4aUstiF #artificialintelligence #datasecurity
iTWire - The role of regulatory bodies in safeguarding people from artificial intelligence
itwire.com
-
CHAT WITH YOUR FUTURE SELF (plus: AI whistleblowers seek protection): MIT researchers developed an AI chatbot that simulates users' older selves to inspire better choices. It uses synthetic memories and aged portraits to foster long-term thinking and behavior change.
CHAT WITH YOUR FUTURE SELF
https://aidaily.us
-
Food for thought... As AI continues to evolve, the field currently resembles the Wild West, with an "everything's possible" approach. It's time to think about regulations to ensure everyone is protected. Defining frameworks and rules for AI will benefit everyone in the long run. #AI #regulations #framework #protection
Senate AI group punts on regulation while urging the government to spend billions on the tech ASAP
msn.com
-
AI shows potential to address complex societal challenges, but agencies must balance #Innovation with safeguarding citizens' interests as they navigate this emerging landscape and combat shadow AI.
Combating shadow AI
linkedin.com
Researcher, Writer, and Facilitator at the Nexus of People, Technology, and Planetary Futures. PhD candidate in Human and Social Dimensions of Science and Technology at ASU.
4w
AI as a term is so vague as to mean many things and nothing. If we mean AI = massive amounts of data + algorithms + massive computing power, and even if we make it specific to Trust & Safety relevant AI, the list is still long. Filtering/detection, initial processing of user reports, language translation, data analysis for trends and reporting, plus genAI as text (inc. chatbots), images/videos, audio, etc. That mix can't possibly be boiled down to all good or all bad. It's complicated.