Knapic, S., Malhi, A., Saluja, R. and Främling, K., 2021. Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain. Machine Learning and Knowledge Extraction (MAKE), 3 (3), 740-770.
Full text available as:
PDF (open access article)
make-03-00037-v2.pdf - Published Version. Available under a Creative Commons Attribution licence. 4MB
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.3390/make3030037
Abstract
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
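To make the post hoc explanation setting described in the abstract concrete, the following is a minimal sketch of explaining a single CNN prediction on an endoscopy frame with LIME. It is not the authors' implementation: `model` (a trained Keras/TensorFlow CNN returning class probabilities) and `image` (one VCE frame as an H×W×3 NumPy array) are placeholders, and only the public `lime` and `scikit-image` APIs are used.

```python
# Hedged sketch: post hoc LIME explanation of a CNN image prediction.
# `model` and `image` are placeholders, not the paper's actual model or data.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(batch):
    # LIME expects a function mapping a batch of images to class probabilities.
    return model.predict(batch)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"),   # single VCE frame, H x W x 3
    classifier_fn,
    top_labels=2,             # e.g. the two most probable classes
    hide_color=0,
    num_samples=1000,         # perturbed samples generated around the instance
)

# Highlight the superpixels that most support the top predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)  # visual explanation for the user study
```

A SHAP- or CIU-based explanation would replace the explainer step but keep the same overall pattern: perturb or attribute around one prediction, then render a visual overlay for human evaluation.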
| Item Type: | Article |
|---|---|
| ISSN: | 2504-4990 |
| Additional Information: | This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI) |
| Uncontrolled Keywords: | explainable artificial intelligence; human decision support; image recognition; medical image analyses |
| Group: | Faculty of Science & Technology |
| ID Code: | 36314 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 29 Nov 2021 16:48 |
| Last Modified: | 14 Mar 2022 14:30 |