Bridging the human-automation fairness gap: how providing reasons enhances the perceived fairness of public decision-making
Arian Henning, Pascal Langenbach
Automated decision-making in legal contexts is often perceived as less fair than its human counterpart. This human-automation fairness gap poses practical challenges for implementing automated systems in the public sector. Drawing on experimental data from 4,250 participants across three public decision-making scenarios, this study examines how different models of reason-giving influence the perceived fairness of automated and human decision-making. The results show that providing reasons enhances the perceived fairness of decision-making, regardless of whether decisions are made by humans or machines. Moreover, the study demonstrates that sufficiently individualized reasoning largely closes the human-automation fairness gap. The study thus contributes to the understanding of how procedural elements, such as giving reasons for decisions, shape perceptions of automated government, and it suggests that well-designed reason-giving can improve the acceptability of automated decision systems.