Chris Manko’s Post

View profile for Chris Manko, graphic

Seasoned CTO within Engineering, Product and Technical Solutions. People, wellbeing and culture at my heart! People before profit!

Interesting article and a reoccurring issue we face at Neural Voice with regards to guard rails.

View profile for Michal Stanislawek, graphic

Strategist & Solution Builder | Conversational & Generative AI | Live Media

New AI fear unlocked 😱 Jailbreaking language models isn't just fun and games anymore... It becomes a ticking time-bomb. Convincing LLM powered robots to ignore safety rules and potentially cause harm to humans is far too easy. Researchers from the University of Pennsylvania showcased that in their recent paper called "Jailbreaking LLM-Controlled Robots" What I find astounding is that commercially available robots lack simple user input sanitization before passing it over to the LLM. Detecting user intent is not rocket science anymore. And you can also use LLMs to help you with that. The level of ignorance when deploying LLMs in such use cases is truly shocking!

  • No alternative text description for this image

To view or add a comment, sign in

Explore topics