
Vulnerable Robots Encourage Better Human Conversations
Robots willing to admit mistakes inspire better conversations in human teammates
“Sorry, guys, I made the mistake this round.” These are the words of a robot teammate owning up to an error. A recent study shows how this simple act of robot vulnerability can spur human teammates into more equal and enjoyable interactions.
Researchers at Yale University in the US conducted a study examining how robots influence the way humans interact with each other. The study was prompted by the prospect of a future in which robots are a common part of our society.
According to one of the authors, Yale sociology PhD candidate Margaret L. Traeger, society needs to understand how the presence of robots will shape our interpersonal relationships. “Our study shows that robots can affect human-to-human interactions,” Traeger said in a press release on March 9.
The study divided 153 people into teams of one robot and three humans. Each team then played 30 rounds of a tablet-based game together, under one of three conditions: a silent robot, a neutral robot, or a vulnerable robot. The robots in all three conditions made mistakes, but whilst the silent robot never spoke, the neutral robot confined itself to stating the score or other task-related facts, and the vulnerable robot expressed humour, told personal stories, and acknowledged its mistakes.
The researchers found that the humans playing with the vulnerable robot reported greater enjoyment and conversed with each other roughly twice as much as the other groups. Conversation was also more evenly distributed in teams with the vulnerable robot than in those with the neutral or silent robot. “We show that robots can help people communicate more effectively as a team,” Traeger argues.
Co-author Nicholas A. Christakis, Sterling Professor of Social and Natural Science at Yale, adds: “As we create hybrid social systems of humans and machines, we need to evaluate how to program the robotic agents so that they do not corrode how we treat each other.”
The study acknowledges the dual-use nature of artificial intelligence (AI). Robots could be programmed in anti-social ways, to spread propaganda or cause autonomous car crashes. But they could just as easily be programmed for pro-social purposes, such as improving collaboration within teams.