Figure 7. SHAP dependency plot for concave points (mean). SHAP: Shapley Additive exPlanations.


networks could be more complicated to use as inputs to LIME and SHAP. However, they can still be a valuable tool for obtaining explanations in applications that do not require real-time explanations, or that require explanations only for certain instances.




               5. CONCLUSIONS AND FUTURE WORK
In this paper, we presented the use of two explainability tools, LIME and SHAP, to explain the decisions made by a trained DNN model. We used the popular Breast Cancer Wisconsin dataset from the UCI repository as the use case for our work. We presented the trends obtained by applying LIME and SHAP to the predictions made by the trained models. The LIME outputs were shown for individual data points from the test data, whereas SHAP was used to produce a summary plot that gives a holistic view of the effect of the different features on the model predictions across the entire test dataset. The paper also highlighted common trends between the analysis results from LIME and SHAP.
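
To make this workflow concrete, below is a minimal sketch of the two analyses. It is not the authors' exact pipeline: the small Keras architecture, the train/test split, and the background-sample size are illustrative assumptions; only load_breast_cancer, LIME's LimeTabularExplainer, and SHAP's KernelExplainer correspond to the tools named in the paper.

    # Minimal sketch of the two analyses; the DNN architecture and all
    # hyperparameters below are illustrative, not the paper's exact setup.
    import numpy as np
    import shap
    import tensorflow as tf
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=0)

    # Stand-in DNN (assumption): two dense layers, sigmoid output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(X_train.shape[1],)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X_train, y_train, epochs=20, verbose=0)

    def predict_proba(x):
        """Wrap the DNN so the explainers see both class probabilities."""
        p = np.asarray(model.predict(x, verbose=0)).reshape(-1, 1)
        return np.hstack([1.0 - p, p])

    # LIME: local explanation for a single test instance.
    lime_explainer = LimeTabularExplainer(
        X_train, feature_names=list(data.feature_names),
        class_names=list(data.target_names), mode="classification")
    explanation = lime_explainer.explain_instance(
        X_test[0], predict_proba, num_features=10)
    print(explanation.as_list())  # (feature condition, weight) pairs

    # SHAP: model-agnostic KernelExplainer with a small background sample;
    # explaining the class-1 probability keeps the output a single array.
    background = shap.sample(X_train, 100)
    shap_explainer = shap.KernelExplainer(
        lambda x: predict_proba(x)[:, 1], background)
    shap_values = shap_explainer.shap_values(X_test)  # slow; subsample if needed
    shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)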


For future work, we plan to apply these tools to other datasets, especially those with more than two output classes. It will be interesting to see how LIME and SHAP analyses can help gain insights into datasets with a larger number of classes. The results in this paper are encouraging for research efforts aimed at advancing the explainability of deep learning based models. We also plan to use the abstract features derived within the DNN as possible inputs to LIME and SHAP. This may help in understanding the relevance of those abstract features and may be useful for other aspects of machine learning, such as transfer learning.
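
As a hedged sketch of that last direction, one could tap an intermediate layer of the trained network and let SHAP attribute the prediction to those activations instead of the raw inputs. The snippet below reuses model, X_train, and X_test from the sketch above; the choice of the penultimate layer and the sample counts are assumptions for illustration, not the method the paper evaluates.

    # Sketch only: treat the penultimate layer's activations as "abstract
    # features" and explain the final layer with SHAP. Reuses `model`,
    # `X_train`, and `X_test` from the previous sketch.
    import numpy as np
    import shap
    import tensorflow as tf

    # Sub-model mapping raw inputs to intermediate-layer activations.
    feature_extractor = tf.keras.Model(
        inputs=model.input, outputs=model.layers[-2].output)
    Z_train = feature_extractor.predict(X_train, verbose=0)
    Z_test = feature_extractor.predict(X_test, verbose=0)

    def head_prob(z):
        """Final layer only: abstract features -> P(class 1)."""
        return np.asarray(model.layers[-1](z)).ravel()

    # SHAP values now attribute each prediction to the learned abstract
    # features, which may hint at the relevance of each hidden unit.
    background = shap.sample(Z_train, 100)
    explainer = shap.KernelExplainer(head_prob, background)
    shap_abstract = explainer.shap_values(Z_test[:50])
    shap.summary_plot(
        shap_abstract, Z_test[:50],
        feature_names=[f"unit_{i}" for i in range(Z_test.shape[1])])

Because KernelExplainer treats the head as a black box, the same pattern applies regardless of which intermediate layer is tapped.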


               5.1. Note
               The Python code is available at this GitHub repository: https://github.com/sathyaa3p/xaiBreastCancer



               DECLARATIONS
               Authors’ contributions
               Made substantial contributions to conception and design of the study: Sathyan A, Weinberg AI, Cohen K
Trained the models and interpreted the results: Sathyan A, Weinberg AI