Sathyan et al. Complex Eng Syst 2022;2:18 I http://dx.doi.org/10.20517/ces.2022.41 Page 3 of 12
trustworthy just because all system requirements have been addressed is not enough to guarantee widespread
adoption of AI. Moreover, according to NIST, “It is the user, the human affected by the AI, who ultimately
places their trust in the system,” and furthermore, “alongside research toward building trustworthy systems,
understanding user trust in AI will be necessary to minimize the risks of this new technology and realize its
benefits.”
In June 2022, Kathleen Hicks, Deputy Secretary of Defense, released a report that clarifies the DoD perspective
concerning trust in AI systems as follows: “The Department’s desired end state for Responsible AI (RAI) is trust.
Trust in DoD AI will allow the Department to modernize its warfighting capability across a range of combat
and non-combat applications, considering the needs of those internal and external to the DoD. Without trust,
warfighters and leaders will not employ AI effectively and the American people will not support the continued
use and adoption of such technology” [12] . This paradigm shift in policy will have a major impact on the
continued development and fielding of AI systems for the DoD and for safety-critical systems in civilian
arenas such as health, energy, and transportation.
In line with the DoD’s perspective on trust in AI, it is important that users of AI models be able to assess a
model and its decisions and predictions through their ability to understand it. For a deeper understanding,
users also want answers to questions such as what would need to change in the model or its inputs to alter a
prediction. This is one of the motivations for the rapid growth in popularity of the paradigm called XAI. The
interaction between machine learning models and their users has become one of the crucial points in the usage
and implementation of AI systems, and many emerging algorithms address this human-machine interaction
by providing a meaningful explanation of the model’s behavior.
XAI approaches can be classified by several criteria [13] , such as model dependency, sample particularity,
explainability timing, and the relationship between the explanation and the model itself. More specifically,
independence of the explanation from the model itself is called model agnosticism. An explanation of the
entire model is called global explainability, while an explanation of a particular sample is called local
explainability. The position of the explainability process in the model life cycle determines whether the
explainability is pre-model, in-model, or post-model.
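The post-hoc, model-agnostic, local category can be made concrete with a minimal sketch of our own (an illustration of the general idea, not any library's implementation): perturb a sample, query the black-box model on the perturbations, and fit a distance-weighted linear surrogate around that sample. The toy model and all parameter values below are assumptions for illustration only.

```python
import math
import random

def black_box(x0, x1):
    # opaque model we want to explain locally (toy example)
    return x0 ** 2 + 3.0 * x1

# sample to explain and perturbation settings (illustrative values)
x0_star, x1_star = 1.0, 2.0
random.seed(0)
sigma, kernel_width, n_samples = 0.1, 0.25, 500

# accumulate weighted normal equations for a centered linear fit:
#   f(x* + d) - f(x*) ~= a*d0 + b*d1
S00 = S01 = S11 = S0y = S1y = 0.0
f_star = black_box(x0_star, x1_star)
for _ in range(n_samples):
    d0, d1 = random.gauss(0, sigma), random.gauss(0, sigma)
    dy = black_box(x0_star + d0, x1_star + d1) - f_star
    # proximity kernel: nearby perturbations get larger weight
    w = math.exp(-(d0 * d0 + d1 * d1) / kernel_width ** 2)
    S00 += w * d0 * d0; S01 += w * d0 * d1; S11 += w * d1 * d1
    S0y += w * d0 * dy; S1y += w * d1 * dy

# solve the 2x2 weighted least-squares system by Cramer's rule
det = S00 * S11 - S01 * S01
coef_a = (S0y * S11 - S1y * S01) / det
coef_b = (S00 * S1y - S01 * S0y) / det
print(coef_a, coef_b)  # close to the local gradient (2, 3) at x*
```

The surrogate's coefficients approximate the model's local gradient at the sample, which is exactly the kind of per-sample, model-independent explanation the taxonomy above calls post-hoc, model-agnostic, and local.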
This paper uses two popular approaches for XAI: LIME [14,15] and SHAP [16] . Both are attribution-based ex-
planation models, which identify and quantify the features that contribute most to model predictions. In
addition, both are relatively easy to use, and their results can be plotted and easily interpreted. In our case,
LIME and SHAP are used as post-hoc, locally interpretable, model-agnostic explainers. Although both LIME
and SHAP explain the predictions made by the trained model, they use different approaches: SHAP relies on
Shapley values to find the best contributing features [16] , while LIME explains the model decision in a local
region around a particular sample [14] . Each approach has its own benefits. Using both strengthens the
explainability of our deep learning model and allows us to compare the insights gained from the two tools.
Additionally, since the two tools work independently of each other, the commonalities between their insights
can be used to gain a better understanding of the trained model as well as how the different features play a
role in the diagnosis/prediction.
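To illustrate the attribution idea behind SHAP, the following minimal sketch (a toy model of our own, not the shap library's API) computes exact Shapley values for a small model by averaging each feature's marginal contribution over all feature orderings, with absent features set to a baseline value:

```python
from itertools import permutations

def model(x):
    # toy linear "model": prediction from three features
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x).

    "Missing" features are set to their baseline value -- the same
    masking idea that SHAP approximates for real models.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]         # reveal feature i
            now = f(current)
            phi[i] += (now - prev) / len(perms)  # marginal contribution
            prev = now
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# for this linear model, phi[i] equals w_i * (x_i - baseline_i),
# and the attributions sum to f(x) - f(baseline)
print(phi)
```

The exact computation is exponential in the number of features, which is why SHAP uses sampling and model-specific approximations in practice; the toy version above only shows the property that attributions sum to the difference between the prediction and the baseline output.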
2. XAI FOR HEALTHCARE
The implementation of XAI for increasing trustworthiness can also be found in biomedical studies such as
drug-drug interaction prediction [17] as well as the classification of protein complexes from sequence informa-
tion [18] . In our case, we use XAI for the interpretability of breast cancer predictions. The combination of
the two is in fast-growing demand [2] , and implementing XAI in medical fields provides opportunities
for prevention and better treatment [2] . XAI helps clinicians in the diagnostic process as well as their