
Sathyan et al. Complex Eng Syst 2022;2:18
DOI: 10.20517/ces.2022.41
Complex Engineering Systems



               Research Article                                                              Open Access





               Interpretable AI for bio-medical applications



Anoop Sathyan 1, Abraham Itzhak Weinberg 2, Kelly Cohen 1
               1 Department of Aerospace Engineering, University of Cincinnati, Cincinnati, OH 45231, USA.
               2 Department of Management, Bar-Ilan University, Ramat Gan 5290002, Israel.


               Correspondence to: Dr. Anoop Sathyan, Department of Aerospace Engineering, University of Cincinnati, Cincinnati, OH 45231,
               USA. E-mail: sathyaap@ucmail.uc.edu; ORCID: 0000-0003-2414-9515
How to cite this article: Sathyan A, Weinberg AI, Cohen K. Interpretable AI for bio-medical applications. Complex Eng Syst 2022;2:18.
http://dx.doi.org/10.20517/ces.2022.41

               Received: 11 Oct 2022 First Decision: 24 Nov 2022 Revised: 9 Dec 2022 Accepted: 19 Dec 2022 Published: 28 Dec 2022
               Academic Editor: Hamid Reza Karimi Copy Editor: Fanglin Lan  Production Editor: Fanglin Lan




               Abstract
This paper presents the use of two popular explainability tools, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The network classifies the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained model. The explanations provide further insight into the relationship between the input features and the predictions, and the SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also highlight the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on a deep neural network trained on the UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other network architectures and other applications. The deep neural network trained in this work achieves a high level of accuracy, and analyzing it with LIME and SHAP adds the much-desired benefit of explaining the recommendations made by the trained model.
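As a concrete illustration of the workflow described in the abstract, the following is a minimal Python sketch, not the authors' code: it trains a small neural network on the 30-feature Breast Cancer Wisconsin dataset (available through scikit-learn) and explains one test prediction with both LIME and SHAP. The model architecture, hyperparameters, and background-sample size are illustrative assumptions.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    import lime.lime_tabular
    import shap

    # Load the 30-feature Breast Cancer Wisconsin data (malignant vs. benign).
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=0)

    # Standardize features before training the network.
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Stand-in for the paper's deep neural network (hyperparameters assumed).
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(X_train, y_train)

    # LIME: fit a local surrogate model around one test instance.
    lime_explainer = lime.lime_tabular.LimeTabularExplainer(
        X_train, feature_names=list(data.feature_names),
        class_names=list(data.target_names), mode="classification")
    lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba)
    print(lime_exp.as_list())  # per-feature contributions to this prediction

    # SHAP: Shapley-value attributions for the same instance, using a
    # model-agnostic kernel explainer with a sampled background set.
    background = shap.sample(X_train, 100)
    shap_explainer = shap.KernelExplainer(model.predict_proba, background)
    shap_values = shap_explainer.shap_values(X_test[:1])
    print(shap_values)  # per-feature SHAP values for each output class

KernelExplainer is used here because, like LIME, it is model-agnostic; for deep networks specifically, shap.DeepExplainer is a common, faster alternative.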



               Keywords: Explainable AI, LIME, SHAP, neural networks






© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


