Explainable recommendations and calibrated trust: two systematic users’ errors.

Naiseh, M., Cemiloglu, D., Jiang, N., Althani, D. and Ali, R., 2021. Explainable recommendations and calibrated trust: two systematic users’ errors. Computer, 54 (10), 28-37.

Full text available as:

Final version- Special Issue paper.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial.


DOI: 10.1109/MC.2021.3076131


The increased adoption of collaborative human-AI decision-making tools has triggered a need to explain recommendations for safe and effective collaboration. However, evidence from the recent literature shows that current implementations of AI explanations fail to achieve adequate trust calibration. This failure leads decision-makers either to over-trust, e.g., following incorrect recommendations, or to under-trust, rejecting correct ones. In this paper, we explore how users interact with explanations and why trust calibration errors occur. We take clinical decision-support systems as a case study. Our empirical investigation is based on a think-aloud protocol and observations, supported by scenarios and a decision-making exercise using a set of explainable recommendation interfaces. Our study involved 16 participants from the medical domain who use clinical decision-support systems frequently. Our findings show that participants made two systematic errors while interacting with the explanations: either skipping them or misapplying them in their task.

Item Type: Article
Uncontrolled Keywords: Systematics, Decision Making, Collaboration, Tools
Group: Faculty of Science & Technology
ID Code: 35465
Deposited By: Symplectic RT2
Deposited On: 10 May 2021 10:36
Last Modified: 14 Mar 2022 14:27
