Computing Reviews
Explainability is not a game
Marques-Silva J., Huang X. Communications of the ACM 67(7): 66-75, 2024. Type: Article
Date Reviewed: Dec 27 2024

The increasing application of artificial intelligence (AI) to real-life decisions, from birth to death, from economic survival to nations’ irrecoverable debts, and from peacetime to wartime military mobilization [1], requires highly reliable machine learning (ML) algorithms. But how should the creators of future ML models guarantee trust in complex AI applications that make life-or-death decisions? In this mathematically rigorous article, Marques-Silva and Huang present compelling evidence in favor of new ML models that can explain every decision they make, a prerequisite for building trustworthy AI applications.

The authors succinctly review and critique existing efforts, both formal and informal, to explain the decisions of AI applications. Formal, logic-based models offer ways to compute provably correct reasons behind AI decisions, but at the cost of computational complexity. Informal models, predicated on applying the game-theoretic Shapley technique to assign importance scores to features, are widely used across AI domains but raise questions about the accuracy of the estimated scores. Consequently, the authors investigate the reliability of inferences drawn from the Shapley values assigned to features during decision-making.
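
For readers unfamiliar with the technique, the following minimal sketch (my own toy construction, not an example from the article) computes exact Shapley values for a hypothetical three-feature Boolean classifier. It uses the characteristic function common in the explainability literature: the value of a coalition S of features is the expected output of the classifier when the features in S are fixed to the instance’s values and the remaining features are drawn uniformly at random.

    from itertools import product
    from math import factorial

    # A hypothetical three-feature Boolean classifier, given as a truth
    # table. Function and instance are my own toy construction, small
    # enough to verify by hand.
    KAPPA = {
        (0, 0, 0): 1, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 1,
        (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 0,
    }
    N = 3
    V = (0, 0, 0)  # instance being explained; KAPPA[V] == 1

    def expected_value(fixed):
        """Coalition value: mean of KAPPA with features in `fixed` pinned
        to V's values and the rest drawn uniformly from {0, 1}."""
        points = [p for p in product((0, 1), repeat=N)
                  if all(p[i] == V[i] for i in fixed)]
        return sum(KAPPA[p] for p in points) / len(points)

    def shapley(i):
        """Exact Shapley value of feature i: the weighted average of its
        marginal contribution over all coalitions of the other features."""
        others = [j for j in range(N) if j != i]
        total = 0.0
        for mask in range(1 << len(others)):
            s = {others[k] for k in range(len(others)) if mask >> k & 1}
            weight = (factorial(len(s)) * factorial(N - len(s) - 1)
                      / factorial(N))
            total += weight * (expected_value(s | {i}) - expected_value(s))
        return total

    for i in range(N):
        print(f"feature {i + 1}: Shapley value = {shapley(i):+.4f}")
    # Prints: feature 1: +0.4167, feature 2: +0.0417, feature 3: -0.0833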

The article presents decision trees, predicate logic, and equations for probing the accuracy of decisions based on Shapley values. The tabulated results reveal discrepancies between Shapley scores and the true relevance of features, information on which human decisions depend. The examples and illustrations show how the current definition of Shapley values can misrepresent feature relevance and thereby lead to erroneous decisions in crucial AI applications.
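
To make the kind of discrepancy the authors study concrete, the sketch below (again my own illustration, not the authors’ example) enumerates the abductive explanations (AXps) of the same toy classifier and instance. In the formal-explainability literature, a feature is relevant only if it appears in some subset-minimal set of fixed features that forces the prediction.

    from itertools import combinations, product

    # Same hypothetical toy classifier and instance as in the sketch above.
    KAPPA = {
        (0, 0, 0): 1, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 1,
        (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 0,
    }
    N = 3
    V = (0, 0, 0)

    def forces(s):
        """True iff fixing x_S = v_S guarantees the prediction KAPPA[V]
        for every completion of the remaining features (a weak AXp)."""
        return all(KAPPA[p] == KAPPA[V]
                   for p in product((0, 1), repeat=N)
                   if all(p[i] == V[i] for i in s))

    subsets = [set(c) for r in range(N + 1)
               for c in combinations(range(N), r)]
    weak_axps = [s for s in subsets if forces(s)]
    # An AXp is a subset-minimal weak AXp; a feature is relevant iff it
    # occurs in at least one AXp.
    axps = [s for s in weak_axps if not any(t < s for t in weak_axps)]
    relevant = set().union(*axps)
    print("AXps (1-based):", [{i + 1 for i in s} for s in axps])  # [{1}]
    print("relevant features:", sorted(i + 1 for i in relevant))  # [1]

Here only feature 1 is relevant, yet the earlier sketch assigned nonzero Shapley values (+0.0417 and -0.0833) to features 2 and 3. This mismatch between attribution scores and actual relevance is the kind of misleading signal the article formalizes and proves can occur.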

Current and future practitioners of ML models ought to read the article’s alarming results on the use of Shapley values in high-risk and safety-critical AI applications. Fortunately, there are reliable algorithms in the literature [1] that attempt to resolve the issues raised in this article.

Reviewer:  Amos Olagunju Review #: CR147862
1) Urbanowicz, R. J. Rule-based machine learning classification and knowledge discovery for complex problems. ACM SIGEVOlution 7 (2015), 3–11.
Categories: General (I.2.0), General (F.3.0), Artificial Intelligence (I.2), General (F.0), General (I.0), Computing Methodologies (I)
Other reviews under "General":
Artificial experts: social knowledge and intelligent machines
Collins H., MIT Press, Cambridge, MA, 1990. Type: Book (9780262031684)
Apr 1 1991
Catalogue of artificial intelligence techniques
Bundy A., Springer-Verlag New York, Inc., New York, NY, 1990. Type: Book (9780387529592)
Aug 1 1991
Knowledge and inference
Nagao M., Academic Press Prof., Inc., San Diego, CA, 1990. Type: Book (9780125136624)
Oct 1 1991
