Computing Reviews
The challenge of crafting intelligible intelligence
Weld D., Bansal G. Communications of the ACM 62(6): 70-79, 2019. Type: Article
Date Reviewed: Aug 9, 2019

In the past decade, many algorithms and techniques have been classified as computational intelligence, machine learning, cognitive informatics, or data science, so much so that these approaches now fall under the umbrella of artificial intelligence (AI), primarily for marketing reasons, I think. These methods have proven successful in several application areas. However, questions remain: how can the underlying line of reasoning be understood and interpreted, and how can end users make sense of the resulting inferences?

In this context, intelligibility is a technical term meaning the understandability and interpretability of the results produced by the executed algorithms; the authors provide a definition. Achieving intelligibility appears to be a very difficult task, as it requires interdisciplinary approaches spanning human-computer interaction, AI, and competence in model design for machine learning methods.

The article presents an interesting case study on the application of generalized additive models (GAM and GA2M). Under specific circumstances, a GAM is comparable in performance to inscrutable (that is, difficult to understand) machine learning models; moreover, such additive models offer explanation capability, that is, intelligibility. The medical example provided emphasizes the importance of modeling, and of understanding the model, by exploiting the method's capacity for counterfactual analysis. (The apparent problem in the case is whether asthma really decreases the risk of dying from pneumonia, as the model might predict.)
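To see why the additive structure supports this kind of counterfactual probing, consider a minimal sketch (mine, not the authors'; the shape functions, coefficients, and feature names are hypothetical illustrations of a pneumonia-risk GAM):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned shape functions f_i of a pneumonia-risk GAM:
# risk = logistic(f_age(age) + f_asthma(asthma) + BIAS)
def f_age(age):
    return 0.04 * (age - 50)     # risk rises with age

def f_asthma(has_asthma):
    return -0.8 * has_asthma     # the counterintuitive learned effect

BIAS = -2.0

def predict_risk(age, has_asthma):
    return logistic(f_age(age) + f_asthma(has_asthma) + BIAS)

# Counterfactual analysis: flip only the asthma feature and compare.
risk_with = predict_risk(70, 1)      # -> 0.119
risk_without = predict_risk(70, 0)   # -> 0.231
print(f"risk with asthma:    {risk_with:.3f}")
print(f"risk without asthma: {risk_without:.3f}")
```

Because the prediction decomposes into one term per feature, the term f_asthma(1) = -0.8 directly exposes the model's claim that asthma lowers predicted risk, a claim a domain expert can then inspect and reject.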

The paper proposes two major approaches: (1) applying a contracting operator to derive a simple model that serves as an explanation, and (2) building an interactive explanation model tailored to the actual audience. The authors also pinpoint deep look-ahead search as a problem area that demands explanation capabilities similar to those of machine learning algorithms. The paper concludes that interactive explanation systems might offer a solution by taking into account the results of experimental psychology and other interdisciplinary approaches.
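The first approach evokes local surrogate methods such as LIME. As a rough sketch under that reading (mine, not the authors' operator; the black-box function and all names are hypothetical), fitting a simple linear model to a black box's behavior around one instance yields human-readable local weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an inscrutable model: an arbitrary nonlinear scorer.
    return np.tanh(2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2])

x0 = np.array([0.5, -0.2, 1.0])  # the instance to explain

# Sample perturbations around x0 and query the black box.
X = x0 + 0.1 * rng.standard_normal((500, 3))
y = black_box(X)

# Least-squares fit of a local linear surrogate: y ~ w.(x - x0) + b.
A = np.hstack([X - x0, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:3], coef[3]

for i, wi in enumerate(w):
    print(f"local weight of feature {i}: {wi:+.3f}")
```

The weights form a locally faithful, contracted explanation of the black box's decision near x0; an interactive system along the lines of approach (2) could let the audience vary x0 and the perturbation scale.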

Amid the current AI hype, this very interesting paper provides some clues for developing more user-friendly systems in that area.

Reviewer:  Bálint Molnár Review #: CR146648 (1911-0399)
Categories: General (I.2.0); Learning (I.2.6)
Other reviews under "General":
Artificial experts: social knowledge and intelligent machines
Collins H., MIT Press, Cambridge, MA, 1990. Type: Book (9780262031684)
Apr 1 1991
Catalogue of artificial intelligence techniques
Bundy A., Springer-Verlag New York, Inc., New York, NY, 1990. Type: Book (9780387529592)
Aug 1 1991
Knowledge and inference
Nagao M., Academic Press Prof., Inc., San Diego, CA, 1990. Type: Book (9780125136624)
Oct 1 1991
