
Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks.

Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. In: RCIS 2020: Research Challenges in Information Science, 23-25 September 2020, Limassol, Cyprus, 212 - 228.

Full text available as:

PDF: Explainable_Recommendations_in_IntelligentSystems__Delivery_Methods__Modalities_andRisks29032020.pdf - Accepted Version (182kB)
Available under License Creative Commons Attribution Non-commercial.

DOI: 10.1007/978-3-030-50316-1_13

Abstract

With the increase in data volume, velocity and variety, intelligent human-agent systems have become popular and have been adopted in different application domains, including critical and sensitive areas such as health and security. Humans' trust, consent and receptiveness to recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from the humans interacting with these systems, so that they can make informed decisions, and from owners and system managers, who seek to increase transparency and, consequently, trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goals of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user-experience facets of explanations: delivery methods and modalities. We then focus on the risks explanations pose both to user experience and to users' decision making. Our review revealed that explanation delivery to end-users is mostly designed to accompany the recommendation, in both push and pull styles, while archiving explanations for later accountability and traceability is still limited. We also found that the emphasis has mainly been on the benefits of recommendations, while risks and potential concerns, such as over-reliance on machines, remain a new area to explore.

Item Type: Conference or Workshop Item (Paper)
ISSN: 1865-1348
Uncontrolled Keywords: Explainable Recommendations, Human Factors in Information Systems, User-Centred Design, Explainable Artificial Intelligence
Group: Faculty of Science & Technology
ID Code: 34312
Deposited On: 21 Jul 2020 10:47
Last Modified: 26 Sep 2020 01:08
