Computing Reviews

The challenge of crafting intelligible intelligence
Weld D., Bansal G. Communications of the ACM 62(6): 70-79, 2019. Type: Article
Date Reviewed: 08/09/19

In the past decade, many algorithms and techniques have been classified as computational intelligence, machine learning, cognitive informatics, or data science, so much so that those approaches now fall under the umbrella of artificial intelligence (AI), primarily, I think, for marketing reasons. These methods have proved successful in several application areas. However, questions remain, namely how to understand and interpret the underlying line of reasoning and how to make the resulting inferences comprehensible to end users.

In this context, intelligibility is a technical term referring to the understandability and interpretability of the results produced by the executed algorithms; the authors provide a definition. Achieving intelligibility appears to be a very difficult task, as it requires interdisciplinary expertise in human-machine information exchange, AI, and the design of machine learning models.

The article presents an interesting case study on the application of generalized additive models (GAM and GA2M). Under specific circumstances, these additive models are comparable in performance to inscrutable (that is, difficult to understand) machine learning models; moreover, they offer an explanation capability, that is, intelligibility. The medical example provided emphasizes the importance of modeling and of understanding the model by exploiting the method's support for counterfactual analysis. (The apparent problem in the case is whether asthma really decreases the risk of dying from pneumonia, as the model might predict.)
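To make the idea concrete, here is a minimal sketch (my illustration, not code from the article) of fitting a GA2M-style model with the open-source interpret package, whose Explainable Boosting Machine learns a sum of per-feature shape functions plus selected pairwise interactions; the feature names and synthetic data are hypothetical placeholders.

```python
# Minimal sketch (assumption: the "interpret" package's Explainable Boosting
# Machine as a stand-in for the GA2M models discussed in the article).
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Each feature (and selected pairwise interaction) gets its own learned term,
# so the fitted model can be inspected and questioned one piece at a time.
ebm = ExplainableBoostingClassifier(feature_names=["age", "bp", "asthma"])
ebm.fit(X, y)

expl = ebm.explain_global()
print(expl.data()["names"])    # learned terms
print(expl.data()["scores"])   # their overall importances
```

Because every term is an explicit function of one feature (or one pair of features), an expert can pose counterfactual questions such as how the prediction changes when the asthma flag is flipped.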

The paper proposes two major approaches: (1) applying a contracting operator that maps the complex model to a simpler one used for explanation, and (2) building an interactive explanation model tailored to the actual audience. The authors also pinpoint deep look-ahead search as a problem area that demands explanation capabilities similar to those of machine learning algorithms. The paper concludes that interactive explanation systems might offer a solution by taking into account the results of experimental psychology and other interdisciplinary approaches.
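As an illustration of the first approach (my sketch under stated assumptions, not the authors' method), one way to "contract" an inscrutable model is to fit a small, readable surrogate to the complex model's own predictions; all names and data below are hypothetical.

```python
# Minimal sketch: a shallow decision tree as a global surrogate explanation
# for an inscrutable ensemble model (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0.5)).astype(int)

# The "inscrutable" model: hundreds of trees, hard for a person to read.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Contract it: fit a depth-3 tree to the black box's predictions, trading
# some fidelity for an explanation a person can actually follow.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=["x0", "x1", "x2", "x3"]))
```

The fidelity score makes the trade-off explicit: it quantifies how much of the complex model's behavior the simplified explanation actually captures.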

This very interesting paper on the hype surrounding AI provides some clues for developing more user-friendly systems in that area.

Reviewer:  Bálint Molnár Review #: CR146648 (1911-0399)
