Automation Bias with a Conversational Interface: User confirmation of misparsed information

Abstract We investigate automation bias in the confirmation of erroneous information with a conversational interface. Participants in our studies used a conversational interface to report information in a simulated intelligence, surveillance, and reconnaissance (ISR) task. For flexibility and ease of use, participants reported information to the conversational agent in natural language. The agent then interpreted each report into a human- and machine-readable language, and participants could accept or reject the agent's interpretation. Misparses occur when the agent misinterprets a report and the user erroneously accepts that interpretation. We hypothesize that these misparses arose naturally in the experiment due to automation bias and complacency, because the agent's interpretations were generally correct (92%). These errors indicate that some users were unable to maintain situation awareness while using the conversational interface. Our results illustrate concerns for implementing flexible conversational interfaces in safety-critical environments (e.g., military and emergency operations).
Authors
  • Erin Zaroukian (ARL)
  • Jon Bakdash (ARL)
  • Alun Preece (Cardiff)
  • Will Webberley (Cardiff)
Date Mar-2017
Venue IEEE Conference on Cognitive and Computational Aspects of Situation Management, 2017