I've been made aware of an article that is doing the rounds at the moment, thanks to Frith Tweedie, aimee whitcroft, and others. Maybe I've been doing this too long and have ended up cloistering myself among like-minded individuals. I like to think I have some patience: that 'AI is new', that 'AI governance is evolving', and that implementers are getting the message. I get excited when I see new capabilities with potential, and I try to call it out when people do dumb stuff.

However, when I read an article like the one below (if true), where a flawed GenAI implementation appears to have the potential to harm mental health and wellbeing, I despair and, to be honest, get angry. If true, this is an indictment of the vendor, the government agency, the NZ Government itself, and every company involved in getting this product into the public realm: the lack of guard rails, the extent to which it could be manipulated, the insufficient testing, the nature of the outputs. Mental health is NOT an area you can afford to get wrong; while there MAY be a place for AI in psychology and therapy, failing to even disclose that it is an AI chatbot is just disgraceful. Some of the article includes NSFW content. If the builders believe that users (including teenagers) wouldn't work out how to manipulate some of this capability for themselves, they're naive.

Sorry for the rant, but the shock that this (if true) went live at all is just unconscionable... I look forward to any validation of the findings, or otherwise... https://lnkd.in/g3a4QNXZ
Well said, Peter
Oof. Concerning read. I found the more structured and guard-railed approach described in this article "Do We Dare Use Generative AI for Mental Health?" rather reassuring in comparison https://spectrum.ieee.org/woebot
Arggghhhghhg
So easily tested, and made more robust; it's pretty bad it made it to prod in that state.