Computing Reviews

Explaining explanations in AI
Mittelstadt B., Russell C., Wachter S. FAT* 2019 (Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, Jan 29-31, 2019), 279-288, 2019. Type: Proceedings
Date Reviewed: 02/21/20

Everyone agrees that artificial intelligence (AI) should be explainable; there is even an abbreviation for this: xAI. But opinions differ on what explainability actually means. This paper surveys different approaches to xAI.

Ideally, AI systems should provide fully transparent recommendations, that is, recommendations derived by a sequence of clear, agreed-upon rules. In practice, this is rarely possible. Even when we can formulate such rules, the derivation is usually too long for a human to grasp. A common alternative is to use, as an explanation, a derivation in a simplified, easier-to-grasp model, just as approximate physical reasoning helps us understand the results of solving complex physical equations.
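
To make this concrete, here is a minimal sketch of the idea (an illustration of the general technique, not anything from the paper): a shallow decision tree is fit to a black-box classifier's own predictions, so the tree's rules serve as an approximate explanation. The dataset, models, and parameters are arbitrary choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data and an opaque "black box" model (both arbitrary choices).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's predictions, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple model agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```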

However, in physics we usually understand how accurate an approximate model is and what the limits of its applicability are, whereas most xAI systems do not provide this information and thus tend to apply their simplified models even when they are not applicable. Also, such systems explain why a certain conclusion A was reached; however, users are also interested in contrastive explanations: why A and not B?
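
For a linear model, a contrastive "why A and not B?" explanation can be read off directly: the score gap between the two classes decomposes into one contribution per feature. The sketch below is only illustrative and is not the paper's method.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                  # instance to explain
a = clf.predict(x.reshape(1, -1))[0]      # predicted class ("A")
b = (a + 1) % 3                           # a foil class ("B"), chosen arbitrarily

# For a linear model, the score gap between classes A and B decomposes
# into one contribution per feature plus an intercept term.
contributions = (clf.coef_[a] - clf.coef_[b]) * x
intercept_gap = clf.intercept_[a] - clf.intercept_[b]

for name, c in zip(data.feature_names, contributions):
    print(f"{name:25s} favors A over B by {c:+.2f}")
print(f"intercept term:           {intercept_gap:+.2f}")
```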

In general, users would like such systems to be interactive. They should be able to ask, for example: What can we do to change the recommendation to B? What is the evidence behind the rules? They should also be able to object when the recommendations and/or rules seem unfair. In view of these user needs, the paper surveys current attempts to design contrastive and interactive xAI.
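
The question "what can we do to change the recommendation to B?" is the counterfactual form of explanation. A deliberately naive sketch (illustrative only, not the paper's algorithm) searches for the smallest single-feature change that flips a classifier's decision:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy model standing in for the recommender (all choices here are arbitrary).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original = clf.predict(x.reshape(1, -1))[0]

# Naive search: try a grid of values for each single feature and keep the
# smallest (scaled) change that flips the model's decision.
best = None
for j in range(X.shape[1]):
    for v in np.linspace(X[:, j].min(), X[:, j].max(), 50):
        x_cf = x.copy()
        x_cf[j] = v
        if clf.predict(x_cf.reshape(1, -1))[0] != original:
            dist = abs(v - x[j]) / X[:, j].std()
            if best is None or dist < best[0]:
                best = (dist, j, v)

if best is None:
    print("no single-feature change flips the decision")
else:
    dist, j, v = best
    print(f"changing feature {j} from {x[j]:.2f} to {v:.2f} flips the decision")
```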

Reviewer: V. Kreinovich. Review #: CR146901 (2008-0194)
