Främling, K., Westberg, M., Jullum, M., Madhikermi, M. and Malhi, A., 2021. Comparison of Contextual Importance and Utility with LIME and Shapley Values. In: EXTRAAMAS 2021: Third International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, 3-7 May 2021, Virtual, 39-54.
Full text available as:
PDF: EXTRAAMAS_Ex_IJCAI_2021.pdf - Accepted Version, available under License Creative Commons Attribution Non-commercial (563kB)
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1007/978-3-030-82017-6_3
Abstract
Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation that has poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing the explanations appear to be to users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build any intermediate interpretable model like LIME, and it does not make any assumption regarding linearity or additivity of the feature importance. CIU also introduces the value utility notion and a definition of feature importance that is different from LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of validity of their ground truth assumption and fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when limiting explanations to ‘influence’. However, by separating ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
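To make the importance/utility distinction in the abstract more concrete, the sketch below estimates CIU for one feature of one instance by varying only that feature and observing the model output, without fitting any surrogate model. It is a minimal sketch assuming the commonly cited CIU definitions (contextual importance as the fraction of the output range spanned when the feature varies over its value range, contextual utility as the position of the current output within that span); the function name, signature, and uniform sampling strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ciu_single_feature(predict, x, j, feature_range, out_range=(0.0, 1.0), n_samples=200):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for feature j of instance x (hypothetical helper, not the paper's code).

    predict       : function mapping a 2-D array of inputs to a 1-D array of outputs
    x             : 1-D array, the instance being explained
    j             : index of the feature to vary
    feature_range : (low, high) range over which feature j may vary
    out_range     : (absmin, absmax), the possible range of the model output
    """
    low, high = feature_range
    samples = np.tile(x, (n_samples, 1))
    samples[:, j] = np.linspace(low, high, n_samples)  # vary only feature j, keep the rest fixed
    outputs = predict(samples)

    cmin, cmax = outputs.min(), outputs.max()           # output range observed in this context
    absmin, absmax = out_range
    out_x = predict(x.reshape(1, -1))[0]                 # model output for the actual instance

    ci = (cmax - cmin) / (absmax - absmin)               # contextual importance
    cu = (out_x - cmin) / (cmax - cmin) if cmax > cmin else 0.5  # contextual utility
    return ci, cu
```

In this reading, a LIME- or Shapley-style signed contribution corresponds roughly to combining CI and CU into a single 'influence' number, whereas reporting CI and CU separately keeps "how much this feature matters" apart from "how favourable its current value is".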
Item Type: Conference or Workshop Item (Paper)
ISSN: 0302-9743
Uncontrolled Keywords: Explainable AI; Contextual Importance and Utility; Outcome explanation; Post hoc explanation
Group: Faculty of Science & Technology
ID Code: 36356
Deposited By: Symplectic RT2
Deposited On: 13 Dec 2021 08:39
Last Modified: 14 Mar 2022 14:31