The increasing application of artificial intelligence (AI) to real-life decisions--from birth to death, from economic survival to nations' irrecoverable debts, from peacetime to wartime and military mobilization [1]--requires highly reliable machine learning (ML) algorithms. But how should the creators of future ML models guarantee trustworthy behavior in complex AI applications that make life-or-death decisions? In this mathematical study, Marques-Silva and Huang present compelling evidence in favor of new ML models for building trustworthy AI applications--models that can provide explanations for all of their decisions.
The authors succinctly review and critique existing formal and informal efforts to explain the decisions of ML models in AI applications. Formal logic-based approaches can compute the reasons behind an AI decision, but at the cost of considerable computational complexity. Informal approaches, predicated on applying the Shapley technique to assign importance scores to features, are widely used across AI domains but raise questions about the accuracy of the resulting scores. Consequently, the authors investigate the reliability of inferences drawn from the Shapley values assigned to features in decision-making.
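For readers unfamiliar with the technique, the sketch below (my own illustration, not code from the article) shows how Shapley-based attribution scores are typically computed: the value of a feature subset is the model's expected output when those features are fixed to the instance's values and the rest vary uniformly, and each feature's score is its weighted average marginal contribution over all subsets. The toy classifier, instance, and function names are hypothetical.

```python
# Minimal sketch (not the authors' code): exact Shapley values for feature
# attribution on a toy Boolean classifier, using the value function
# v(S) = E[f(X) | X_S fixed to the instance], other features uniform in {0,1}.
from itertools import combinations, product
from math import factorial

def classifier(x):
    """Hypothetical toy model: f(x) = (x1 AND x2) OR (NOT x1 AND x3)."""
    x1, x2, x3 = x
    return (x1 and x2) or ((not x1) and x3)

FEATURES = [0, 1, 2]      # indices of x1, x2, x3
INSTANCE = (1, 1, 0)      # instance being explained; classifier(INSTANCE) == 1

def value(subset):
    """v(S): mean prediction with features in S fixed, the rest varied over {0,1}."""
    total, count = 0, 0
    for assignment in product([0, 1], repeat=len(FEATURES)):
        point = tuple(INSTANCE[i] if i in subset else assignment[i] for i in FEATURES)
        total += int(classifier(point))
        count += 1
    return total / count

def shapley_values():
    """phi_i = sum over S without i of |S|!(n-|S|-1)!/n! * (v(S u {i}) - v(S))."""
    n = len(FEATURES)
    phi = {}
    for i in FEATURES:
        others = [j for j in FEATURES if j != i]
        score = 0.0
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                score += weight * (value(set(subset) | {i}) - value(set(subset)))
        phi[i] = score
    return phi

print(shapley_values())   # approximately {0: 0.25, 1: 0.375, 2: -0.125}
```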
The article presents decision trees, predicate logic, and equations for examining the accuracy of decisions based on Shapley values. The tabulated results reveal discrepancies that matter for human decision makers: the examples and illustrations show how the current definitions of Shapley values can lead to erroneous decisions by misrepresenting the relevance of features in crucial AI applications.
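To make this kind of mismatch concrete, the follow-up sketch below (again my own construction, simpler than the article's examples) computes the abductive explanations--subset-minimal sets of features that force the prediction--for the same hypothetical classifier and instance. In this toy case, a feature that appears in no explanation still receives a nonzero Shapley value, illustrating the gap between attribution scores and feature relevance that the authors formalize.

```python
# Continuation of the sketch above (same hypothetical classifier and instance):
# find the abductive explanations (AXps) and compare them with the Shapley scores.
from itertools import combinations, product

def classifier(x):
    x1, x2, x3 = x
    return (x1 and x2) or ((not x1) and x3)

FEATURES = [0, 1, 2]
INSTANCE = (1, 1, 0)
PREDICTION = classifier(INSTANCE)

def is_sufficient(subset):
    """True if fixing the features in `subset` to the instance's values forces
    the classifier to output PREDICTION for every completion of the rest."""
    for assignment in product([0, 1], repeat=len(FEATURES)):
        point = tuple(INSTANCE[i] if i in subset else assignment[i] for i in FEATURES)
        if classifier(point) != PREDICTION:
            return False
    return True

def abductive_explanations():
    """All subset-minimal sufficient sets of features (AXps)."""
    axps = []
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            s = set(subset)
            if is_sufficient(s) and not any(axp <= s for axp in axps):
                axps.append(s)
    return axps

axps = abductive_explanations()
relevant = set().union(*axps) if axps else set()
print(axps)       # [{0, 1}]: only x1 and x2 are needed to justify the prediction
print(relevant)   # {0, 1}: x3 (index 2) is irrelevant, yet its Shapley value in
                  # the previous sketch is nonzero (about -0.125)
```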
Future practitioners of ML ought to read the article's alarming results on the use of Shapley values in high-risk and safety-critical AI applications. Fortunately, there are reliable algorithms in the literature [1] that attempt to resolve the issues raised in this article.