PI: Tae Wan Kim, Associate Professor of Business Ethics, Tepper School of Business

Co-PIs: David Danks, Department Head & L.L. Thurstone Professor of Philosophy and Psychology, Dietrich College of Humanities and Social Sciences; Dokyun Lee, Assistant Professor of Business Analytics, Tepper School of Business; Joy Lu, Assistant Professor of Marketing, Tepper School of Business

We have received funding from the Carnegie Bosch Institute for the project Explanations, Trust, & AI. Classification and recommendation algorithms provide predictions to expert users, such as a medical doctor ("Cancer"), a lawyer ("Guilty"), or an investment banker ("Buy"). These AI systems typically report the most probable classification, but rarely a rationale or explanation. This lack of justification can significantly erode the trust of expert users, as well as of those affected by the resulting decisions (e.g., patients, clients, or investors). The critical need for explanations and justifications from AI systems has led to calls for algorithmic transparency, including the EU General Data Protection Regulation (GDPR), which requires many companies to provide a "meaningful" explanation to affected parties (e.g., users, customers, or employees).

These calls presuppose that we know what counts as a meaningful explanation, yet there has been surprisingly little detailed research on this question in the context of AI systems. In this project, we develop a philosophically rigorous and objectively evaluable definition of a good explanation, grounded in our best scientific understanding of human cognition. In particular, we will develop a concrete "graspability test" to determine which features of explanations best support users' understanding and trust in a given domain. Because the test is general, it will be applicable across a range of domains, enabling diverse AI developers and deployers to field systems that promote trust through useful explanations.