Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. In: RCIS 2020: Research Challenges in Information Science, 23-25 September 2020, Limassol, Cyprus, 212 - 228.
Full text available as:
PDF: Explainable_Recommendations_in_IntelligentSystems__Delivery_Methods__Modalities_andRisks29032020.pdf (Accepted Version, 182kB). Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1007/978-3-030-50316-1_13
Abstract
With the increase in data volume, velocity and types, intelligent human-agent systems have become popular and have been adopted in different application domains, including critical and sensitive areas such as health and security. Humans' trust, consent and receptiveness to recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from humans interacting with these systems, so that they can make informed decisions, and from owners and system managers, who seek to increase transparency and, consequently, trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goal of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user-experience facets of explanations: delivery methods and modalities. We then focus on the risks of explanations, both to user experience and to decision making. Our review reveals that the delivery of explanations to end-users is mostly designed to accompany the recommendation, in both push and pull styles, while archiving explanations for later accountability and traceability is still limited. We also found that the emphasis has mainly been on the benefits of recommendations, while risks and potential concerns, such as over-reliance on machines, remain a new area to explore.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| ISSN: | 1865-1348 |
| Uncontrolled Keywords: | Explainable Recommendations, Human Factors in Information Systems, User-Centred Design, Explainable Artificial Intelligence |
| Group: | Faculty of Science & Technology |
| ID Code: | 34312 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 21 Jul 2020 10:47 |
| Last Modified: | 14 Mar 2022 14:23 |