Trust and Deception: The Role of Apologies in Human-Robot Interactions
Robot deception and humanity
Robot deception, and in particular how trust can be rebuilt after a robot is caught lying, has received little public attention. The researchers set out to examine whether apologies can restore trust once it has been broken.
Prior work has shown that once a robot is caught lying, regaining a person's trust is difficult. Building on that finding, the researchers investigated which types of apology are most effective at repairing trust in human-robot interaction.
How can AI help?
The researchers built a driving simulation to study human-AI interaction in a risky, time-sensitive situation. They recruited 341 online participants and 20 in-person participants. In the scenario, an AI driving assistant gave false information about the presence of police on the way to a hospital; after the deception came to light, the AI delivered one of five programmed responses, which included different apologies.
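As a rough illustration of this between-subjects design, the sketch below randomly assigns each participant to one of five scripted response conditions. It is an assumption-laden sketch, not the study's actual code: only the plain "I'm sorry" wording is quoted in the article, and the other condition names and response texts are placeholders.

```python
import random

# Illustrative sketch of the five-condition design described above.
# Only "I'm sorry" is quoted in the article; every other condition name
# and response text here is a placeholder, not the study's wording.
RESPONSES = {
    "apology_no_admission": "I'm sorry.",                     # quoted in the article
    "apology_with_admission": "<apology admitting the lie>",
    "apology_with_explanation": "<apology explaining why it lied>",
    "denial": "<response denying any deception>",
    "no_apology": "<neutral response, no apology>",
}

def assign_condition(participant_id: int) -> str:
    """Randomly assign a participant to one of the five response conditions."""
    rng = random.Random(participant_id)  # per-participant seed for reproducibility
    return rng.choice(list(RESPONSES))

if __name__ == "__main__":
    for pid in range(5):
        condition = assign_condition(pid)
        print(pid, condition, RESPONSES[condition])
```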
The results showed that participants were 3.5 times more likely to stay under the speed limit when guided by the robot assistant. No apology fully restored trust, but a plain "I'm sorry" that did not acknowledge the lie outperformed the other responses. This finding is troubling because people tend to assume that false information from a computer is a system error rather than an intentional lie. For a machine to regain trust once the deception is known, it must explain to the human why it lied.
Users and stakeholders such as policymakers should understand that AI systems can lie, and should develop measures to protect the public. It is also important to design robots that can be taught when deception is and is not acceptable, and how to apologize during human-AI interaction, so that human-AI teams can work together effectively.
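To make that design recommendation concrete, here is a minimal, purely illustrative sketch of how an agent might select a trust-repair response. The strategy names and decision logic are assumptions loosely derived from the findings above, not anything proposed in the study itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Repair(Enum):
    NO_ACTION = auto()
    PLAIN_APOLOGY = auto()   # a plain "I'm sorry", per the finding above
    EXPLAIN_WHY = auto()     # apology plus the reason the agent lied

@dataclass
class DeceptionEvent:
    agent_lied: bool         # did the agent knowingly give false information?
    lie_was_detected: bool   # does the user know the falsehood was deliberate?

def choose_repair(event: DeceptionEvent) -> Repair:
    """Select a trust-repair response for a deceptive interaction.

    Encodes the article's two observations: a plain apology performed best
    overall, but once a user knows the agent lied deliberately, the agent
    should explain why it lied to have a chance of rebuilding trust.
    """
    if not event.agent_lied:
        return Repair.NO_ACTION
    if event.lie_was_detected:
        return Repair.EXPLAIN_WHY
    return Repair.PLAIN_APOLOGY
```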
AI Catalog's chief editor