ABSTRACT
From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical application-specific investigations of XAI by exploring theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. In this paper, we propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
CCS CONCEPTS
• Human-centered computing → Human-computer interaction (HCI)
KEYWORDS
Intelligibility, Explanations, Explainable artificial intelligence, Clinical decision making, Decision making
INTRODUCTION
From supporting healthcare intervention decisions to informing criminal justice, artificial intelligence (AI) is now increasingly entering the mainstream and supporting high-consequence human decisions. However, the effectiveness of these systems will be limited by the machine’s inability to explain its thoughts and actions to human users in these critical situations. These challenges have spurred research interest in explainable AI (XAI) [2, 12, 32, 43, 109]. To enable end-users to understand, trust, and effectively manage their intelligent partners, HCI and AI researchers have produced many user-centered, innovative algorithm visualizations, interfaces, and toolkits (e.g., [18, 56, 67, 86]) that support users with various levels of AI literacy in diverse subject domains, from the bank customer who is refused a loan and the doctor making a diagnosis with a decision aid to the patient who learns that he may have skin cancer from a smartphone photograph of his mole [30].
Adding to this line of inquiry, this paper seeks to strengthen empirical application-specific investigations of XAI by exploring theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. We first conducted an extensive literature review of cognitive psychology, philosophy, and decision-making theories that describe patterns of how people reason, make decisions, and seek explanations, as well as cognitive factors that bias or compromise decision making.
We drew connections between these insights and the explanation facilities that AI algorithms commonly produce, and in turn proposed a theory-driven, user-centric XAI framework (Figure 1). With this framework, XAI researchers and designers can identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. Next, to evaluate the framework by putting it to work, we applied it to a real-world clinical machine learning (ML) use case: an explainable diagnostic tool for intensive care phenotyping. Co-designing with 14 clinicians, we developed five explanation strategies to mitigate decision biases and moderate trust. We implemented the system with XGBoost [17] trained on the MIMIC-III data [45]; an illustrative sketch of such a pipeline follows the list of contributions below. Drawing on this application, we reflect on the utility and limitations of the framework and share lessons learned. Our contributions are:
1. A theory-driven conceptual framework linking different XAI explanation facilities to user reasoning goals, providing pathways to mitigate reasoning failures due to cognitive biases.
2. An application of our framework to medical decision making to demonstrate its usefulness in designing user-centric XAI.
3. A discussion of how to generalize our framework to other applications. The key takeaway of the framework is to choose explanations grounded in theories of reasoning and cognitive biases, rather than in XAI taxonomies (e.g., [32, 38, 66, 90, 93]) or popular XAI techniques (e.g., [75, 86]). This aims to help developers build human-centric, explainable AI-based systems with targeted XAI features.
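To ground the implementation detail mentioned above, the following is a minimal sketch of how a phenotype classifier and one common explanation facility (per-feature attribution) could be wired together, in the spirit of the XGBoost-on-MIMIC-III prototype described in the paper. The file name mimic3_features.csv, the column phenotype_label, and the use of the SHAP library are illustrative assumptions, not the authors' actual pipeline.

# Illustrative sketch only: train an XGBoost phenotype classifier on a
# pre-extracted tabular feature set and attach a feature-attribution
# explanation to a single prediction. File and column names are hypothetical.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical feature table derived from MIMIC-III (one row per ICU stay).
data = pd.read_csv("mimic3_features.csv")
X = data.drop(columns=["phenotype_label"])
y = data["phenotype_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Gradient-boosted trees, the model family reported in the paper.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# One explanation facility: per-feature attributions for one patient,
# which a clinician-facing interface could render alongside the prediction.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X_test.iloc[[0]])
print(dict(zip(X.columns, attributions[0])))

In a full tool, such attributions would be only one of several explanation strategies (for example, contrastive or example-based explanations), chosen to counter the specific reasoning failures and cognitive biases identified by the framework.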
CONCLUSION
We have described a theory-driven conceptual framework for designing explanation facilities, drawing from philosophy, cognitive psychology, and artificial intelligence to develop user-centric explainable AI (XAI). Using this framework, we can identify pathways for how specific explanations can be useful, how certain reasoning methods fail due to cognitive biases, and how different elements of XAI can be applied to mitigate these failures. By articulating a detailed design space of technical features of XAI and connecting them with the requirements of human reasoning, we aim to help developers build more user-centric explainable AI-based systems.
About KSRA
The Kavian Scientific Research Association (KSRA) is a non-profit organization founded in December 2013 to provide research and educational services. Its members had initially formed a virtual group on the Viber social network, and the core of the Kavian Scientific Association was formed with these members as founders. These individuals, led by Professor Siavosh Kaviani, decided to launch a scientific and research association with an emphasis on education.
As a non-profit research organization, KSRA is committed to providing research services in the field of knowledge. The main beneficiaries of this association are public and private knowledge-based companies, students, researchers, professors, universities, and industrial and semi-industrial centers around the world.
Our main services are based on education for all people around the world. We want to integrate research and education, and we believe education is a fundamental human right, so our services are focused on inclusive education.
The KSRA team partners with under-served local communities around the world to improve access to, and the quality of, knowledge-based education, to amplify and augment learning programs where they exist, and to create new opportunities for e-learning where traditional education systems are lacking or non-existent.
Full paper PDF file:
Designing Theory-Driven User-Centric Explainable AI
Bibliography
Author: Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim
Year: 2019
Title: Designing Theory-Driven User-Centric Explainable AI
Published in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), Paper No. 601, Pages 1–15
DOI: https://doi.org/10.1145/3290605.3300831
PDF reference and original file: Click here
Nasim Gazerani was born in 1983 in Arak. She holds a Master's degree in Software Engineering from UM University of Malaysia.
Nasim Gazerani – https://ksra.eu/author/nasim/