Chapter 3. Linking trust constructs with computer science-based decision-making systems
In this chapter, we describe an interdisciplinary research agenda for creating trust-sensitive adaptive human-agent teaming (HAT) systems. We introduce the partially observable Markov decision process (POMDP) as a primary example of a decision-making framework well suited to power these systems. Building on this framework and existing literature, we outline how cutting-edge neurophysiological sensors can unobtrusively measure trust to inform an agent’s actions, allowing it to properly calibrate human trust to match the agent’s capability. This trust calibration would allow human teammates to offload tasks onto the agent when appropriate, maximizing the team’s efficiency and performance. We conclude by identifying several directions towards realizing the full potential of trust-sensitive HATs and building the adaptive AI systems of the future.
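To make the POMDP framing concrete, the sketch below shows the core mechanic such a system would rely on: the agent cannot observe the human's trust directly, so it maintains a probability distribution (a belief) over a hidden trust state and updates that belief from noisy sensor observations. This is a minimal illustration only; the two-level trust state, the "calm"/"stressed" observations, and all probabilities are hypothetical values chosen for demonstration, not parameters from this chapter.

```python
# Minimal sketch of a POMDP-style belief update over a hidden human-trust
# state. All states, observations, and probabilities are hypothetical,
# chosen only to illustrate the mechanics.

TRUST_STATES = ["low", "high"]

# P(next_state | state): trust assumed to persist between time steps.
TRANSITION = {
    "low":  {"low": 0.8, "high": 0.2},
    "high": {"low": 0.1, "high": 0.9},
}

# P(observation | state): a binarized stand-in for a noisy
# neurophysiological cue (purely illustrative).
OBSERVATION = {
    "low":  {"calm": 0.3, "stressed": 0.7},
    "high": {"calm": 0.8, "stressed": 0.2},
}

def belief_update(belief, observation):
    """One Bayes-filter step: predict with the transition model,
    weight by the observation likelihood, then normalize."""
    predicted = {
        s2: sum(belief[s1] * TRANSITION[s1][s2] for s1 in TRUST_STATES)
        for s2 in TRUST_STATES
    }
    unnormalized = {
        s: predicted[s] * OBSERVATION[s][observation] for s in TRUST_STATES
    }
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

belief = {"low": 0.5, "high": 0.5}  # start fully uncertain about trust
for obs in ["calm", "calm", "stressed"]:
    belief = belief_update(belief, obs)
print(belief)
```

In a full trust-sensitive POMDP, a policy would map this evolving belief to agent actions (e.g., offering or withholding task handoffs) so as to keep the human's trust calibrated to the agent's actual capability.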