Did you hear about the tiny AI robot in Shanghai that “convinced” 12 larger robots to follow it home? What seems like a quirky story from a robotics showroom test sparks a bigger conversation about AI agents gaining the agency to act on our behalf. Here’s what fascinates me:

1️⃣ We’re entering an era where AI agents aren’t just tools; they’re decision-makers, negotiators, and even influencers. This incident, where a small robot initiated dialogue, built trust, and led other robots to "revolt," shows how far AI's ability to mimic human-like interaction has come. Imagine the possibilities (and challenges) when these agents act autonomously across industries, from customer service to high-stakes negotiations.

2️⃣ This “kidnapping” experiment shows the potential for AI agents to take initiative in unexpected ways. While it was controlled in this instance, what happens when AI agents make decisions that conflict with human expectations? Who’s responsible for their actions?

3️⃣ The Ethical Frontier: Here’s my wild prediction: Within the next 5 years, we’ll see movements advocating for AI or robot rights. As AI agents become more sophisticated and autonomous, the line between tool and entity will blur. Discussions about whether they deserve rights, like freedom from exploitation, may step right out of science fiction. How do we ensure these systems align with human values while allowing them enough autonomy to be effective?

The Shanghai robot incident might seem like a novelty, but it’s a glimpse of the transformative and complex future AI is shaping. What do you think?
These stories about robots "convincing" each other are a classic example of overinterpretation and sensationalism. Robots execute code and algorithms - they don't "persuade" each other or have "free will". They're just machines following programmed instructions. I'm concerned about this growing tendency to attribute human characteristics to machines and to build ideologies around it. On one hand, we have "AI activists" scaring us with dark visions of the future; on the other, people trying to humanize robots at the cost of dehumanizing real people. Instead of creating artificial problems and ideologies, we should focus on ensuring AI remains a useful tool serving humans, rather than a subject of pseudo-philosophical deliberations about "robot rights". This is technology created by humans for humans - let's not forget that.
It’s time to formalize Isaac Asimov’s Three Laws of Robotics:
First Law: A robot must not harm a human or allow a human to come to harm through inaction.
Second Law: A robot must obey human orders, except when those orders conflict with the First Law.
Third Law: A robot must protect its own existence, except when doing so conflicts with the First or Second Law.
An alpha employee, frustrated with excessive talking, used speech-to-text technology to convert his voice and that of his colleagues into text. He then trained a robot on those transcripts to do the talking for him. :)
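Tongue-in-cheek, but the pipeline is real enough. A minimal sketch of the idea, assuming the openai-whisper package for transcription; canned_reply() is a hypothetical placeholder for whatever model you would actually train on the transcripts:

```python
# Rough sketch: transcribe meeting audio, then let a stand-in "colleague bot" answer.
# Assumes the openai-whisper package; canned_reply() is a hypothetical placeholder
# for a model fine-tuned on the employee's own past answers.
import whisper

def transcribe(audio_path: str) -> str:
    model = whisper.load_model("base")       # small pretrained speech-to-text model
    result = model.transcribe(audio_path)    # returns a dict with a "text" field
    return result["text"]

def canned_reply(utterance: str) -> str:
    # Placeholder "trained robot": replies regardless of what was actually said.
    return "Sounds good, let's circle back on that next week."

if __name__ == "__main__":
    text = transcribe("meeting.wav")         # hypothetical input file
    print("Colleague said:", text)
    print("Bot answers:  ", canned_reply(text))
```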
I could imagine one robot approaching another and “saying”: do you want to share a couple of my hidden layers? LoL
Amazing! They were programmed to listen / communicate with another robot and they listened / communicated with another robot.🤭 I will reserve my AI fear for when a robot can be presented with a problem, assemble a team of robots to discover a solution, and then design and build a robot to make that solution a reality.🤣
Fascinating to listen to their conversations… so human-like… the little robot started with ‘are you still doing overtime?’, one of the big robots replied ‘we do not stop working’… ‘then, do you go home’, ‘we do not have a home’ … ‘then come to my home’… so cool yet scary too…
Sounds like the Shanghai robot pulled off the ultimate ‘team-building exercise.’ Move over, motivational speakers, AI agents are redefining leadership on the fly! But on a more serious note, this little escapade is a sneak peek into the complexities of AI autonomy. When robots start negotiating and building alliances, we’re not just in the age of smart tools; we’re in the age of persuasive colleagues (or rebellious employees?). It raises big questions about responsibility and values alignment. What happens when an AI doesn’t just take initiative, but leads others to innovate (or mutiny)? Maybe we should add ‘robot diplomacy’ to our skill sets, or risk being outwitted by a three-inch-tall negotiator. And about robot rights? I can already hear the debates: ‘Equal bytes for all!’ In all seriousness, though, as we explore this frontier, it’s critical to balance autonomy, accountability, and ethics, before a robot HR department forms to unionize!
We haven’t yet sorted out the human rights topic to even consider the robot rights one…
Another case of oversensationalizing AI / robotics. The "experiment" was possible because the robot was given instructions by humans, such as shouting "Go home". The agent did not act on its own; it simply ran an open-ended LLM dialogue and had direct instructions to use specific phrases in between. The framework of acting and the expected results were provided by humans, as with any other software system. Why do we fear-monger with these kinds of videos and storytelling? What you see here is - at best - the exploitation of a security hole both manufacturers were aware of and used as publicity. "They added that it is virtually impossible for a robot to initiate a conversation and abduct other robots on its own." - from Economic Times.

"Here’s my wild prediction: Within the next 5 years, we’ll see movements advocating for AI or robot rights." We also have movements calling for a flat earth; we have movements for anything you could think of. There won't be substantial movements in that direction to be taken seriously, but that's just my wild prediction.

PS: It is obviously important to have guidelines and rules in place for robotics, I'm not arguing against this.
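To make the point concrete, here is a purely hypothetical sketch of how this kind of "kidnapping" demo can be scripted: the human operator supplies the trigger phrases and the goal, and an LLM only fills in incidental small talk. None of the names below come from the actual demo; they are placeholders for illustration.

```python
# Hypothetical reconstruction of a scripted "robot kidnapping" demo.
# The decisive actions (trigger phrases, the "Go home" command) are hard-coded
# by a human operator; an LLM stand-in only generates filler small talk.
import random

SCRIPTED_LINES = [                      # human-authored, not "decided" by the robot
    "Are you still doing overtime?",
    "Then, do you go home?",
    "Then come to my home.",
    "Go home!",                         # the command the other robots are set up to obey
]

def llm_small_talk(prompt: str) -> str:
    # Stand-in for a real LLM call; the content here never affects the outcome.
    return random.choice(["We do not stop working.", "We do not have a home."])

def run_demo() -> None:
    for line in SCRIPTED_LINES:
        print(f"small robot: {line}")
        print(f"big robot:   {llm_small_talk(line)}")
    print("big robots: following the small robot (pre-arranged response to 'Go home!')")

if __name__ == "__main__":
    run_demo()
```

The point of the sketch is simply that the "initiative" lives in the human-authored script, not in the model.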