Naiseh, M., 2021. C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration. Doctoral Thesis (Doctoral). Bournemouth University.
Full text available as:
PDF: NAISEH, Mohammad_Ph.D._2021.pdf (3MB), available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Abstract
Human-AI collaborative decision-making tools are rising rapidly in several critical application domains, such as healthcare and the military sector. Users of such systems often find it difficult to understand the AI's reasoning and output, particularly when the underlying algorithm and logic are hidden and treated as a black box, whether for commercial sensitivity or because of the inherent challenges of explaining them. Lack of explainability and the opacity of the underlying algorithms can perpetuate injustice and bias and decrease users' acceptance and satisfaction. Integrating eXplainable AI (XAI) into AI-based decision-making tools has therefore become a crucial requirement for a safe and effective human-AI collaborative environment.

Recently, the impact of explainability on trust calibration has become a main research question: how do explanations, and the way they are communicated, help users form a correct mental model of the AI-based tool, so that the human decision-maker is better informed about whether to trust or distrust the AI's recommendations? Although studies have shown that explanations can improve trust calibration, such studies often assumed that users would engage cognitively with explanations to calibrate their trust. More recent studies have shown that even when explanations are communicated to people, trust calibration does not improve. This failure of XAI systems to enhance trust calibration has been linked to factors such as cognitive biases, e.g., people can be selective about what they read and rely on. Other studies have attributed it to inconsistencies in the properties of XAI methods, which are rarely considered in XAI interface design. Overall, users of XAI systems fail, on average, to calibrate their trust; human decision-makers working collaboratively with an AI can still follow incorrect recommendations, or reject correct ones, at a notable rate.

This thesis provides C-XAI, a design method expressly tailored to support trust calibration in the XAI interface. The method identifies properties of XAI methods that may introduce trust calibration risks and helps produce designs that mitigate these risks. A trust calibration risk is defined in this thesis as a limitation in the interface design that may hinder users' ability to calibrate their trust. The thesis followed a qualitative research approach with experts, practitioners, and end-users who use AI-based decision-making tools in their work environment. The data collection methods included a literature review, semi-structured interviews, think-aloud sessions, and a co-design approach to develop C-XAI. These methods helped conceptualise various aspects of trust calibration and XAI, including XAI requirements during human-AI collaborative decision-making tasks, trust calibration risks, and design principles that support trust calibration. The results of these studies were used to devise C-XAI, which was then evaluated with domain experts and end-users. The evaluation investigated the method's effectiveness, completeness, clarity, engagement, and support for communication between different stakeholders. The results showed that the method helped stakeholders understand the design problem and develop XAI designs that support trust calibration.

This thesis makes four main contributions. First, it conceptualises the trust calibration design problem with respect to XAI interface design. Second, it elicits the main limitations of XAI interface design in supporting trust calibration. Third, it proposes key design principles that help XAI interface designers support trust calibration. Finally, it proposes and evaluates the C-XAI design method to systematically guide XAI interface design towards trust calibration.
Item Type: Thesis (Doctoral)
Additional Information: If you feel that this work infringes your copyright please contact the BURO Manager.
Uncontrolled Keywords: explainable AI; trust calibration; user-centred design
Group: Faculty of Science & Technology
ID Code: 36345
Deposited By: Symplectic RT2
Deposited On: 08 Dec 2021 10:21
Last Modified: 14 Mar 2022 14:31