               1. INTRODUCTION
In recent years, we have witnessed growth in the usage and implementation of machine learning based decision
making and predictive analytics. Practically speaking, machine learning models are ubiquitous [1]. One of the
reasons for this growth is the value that machine learning delivers to its users and decision makers. In recent
times, there has been a rise in the development of new computational infrastructures such as cloud storage
and parallel computation [2], which has contributed to faster training of the models. Many papers contribute
to the effort of developing machine learning models that excel in metrics such as accuracy, efficiency and
running time. The more complex models are usually more accurate [3,4]. However, the ability of humans
to understand them is negatively correlated with model complexity [5]. One of the challenges to eXplainable AI
(XAI) is its implementation in real-life applications. XAI has inherent challenges such as lack of expertise,
inherently biased choices, lack of resiliency to data changes, interference between algorithms and problems,
local context dependency of the explanations and lack of causality of explanations between input and output [6].
These challenges intensify for clinical and medical real-life use cases such as the breast cancer use case we
consider in this work. In order to overcome these challenges, there is a need for strong interaction between
the XAI system and the decision makers. In our case, the domain experts, radiologists and physicians need
to examine the XAI results and add their own perspectives based on their prior knowledge before making
final decisions. In addition, they can provide feedback in order to improve and fine-tune the XAI system.
Another way to increase the trustworthiness of XAI is synergy between different XAI approaches
and algorithms. In our case, we use Local Interpretable Model-Agnostic Explanations (LIME) and Shapley
Additive exPlanations (SHAP). Each of them takes a different approach to extracting explanations of the model's
predictions. When both XAI approaches provide the same or similar results, it is an indication that the user
can have higher confidence in the interpretability of the model.
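
To make this cross-method agreement concrete, the sketch below compares the top-ranked features reported by LIME and by SHAP for a single prediction. It is a minimal illustration rather than the pipeline used in this paper: it assumes a scikit-learn random forest trained on the scikit-learn breast cancer dataset, the `lime` and `shap` packages, and a simple top-5 feature overlap as the agreement heuristic.

```python
# Minimal sketch (not the authors' pipeline): compare the top-ranked
# features from LIME and SHAP for one prediction. A large overlap between
# the two sets is a rough signal of agreement between the methods.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

instance = X_test[0]

# LIME: fit a local surrogate model around the instance and keep the
# indices of the five most influential features.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    instance, model.predict_proba, num_features=5)
lime_top = {idx for idx, _ in lime_exp.as_map()[1]}

# SHAP: game-theoretic attributions over all features for the same instance.
shap_explainer = shap.TreeExplainer(model)
sv = shap_explainer.shap_values(instance.reshape(1, -1))
# Depending on the shap version, sv is a per-class list or a 3-D array;
# extract the positive-class attributions either way.
sv_pos = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]
shap_top = set(np.argsort(np.abs(sv_pos))[::-1][:5])

agreed = lime_top & shap_top
print("Features ranked highly by both methods:",
      [data.feature_names[i] for i in agreed])
```

When the overlap is large, the two explainers attribute the prediction to the same evidence, which is the kind of cross-method consistency described above as grounds for higher confidence in the model's interpretability.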


To realize the immense economic and functional potential of AI applications that have stringent safety and
mission-critical requirements in areas such as healthcare, transportation, aerospace, cybersecurity, and manufacturing,
existing vulnerabilities need to be clearly identified and addressed. The end users of such applications,
as well as the taxpaying public, will need assurances that the fielded systems can be trusted to deliver as expected.
Moreover, recent developments evaluating the trustworthiness of high-performing “black-box” AI have classified
such systems as “Brittle AI”, notably in a retrospective look at DARPA’s explainable AI program. These
developments, coupled with a growing belief in the need for “Explainable AI”, have led major policy makers in
the US and Europe to underscore the importance of “Responsible AI”.


Recently, on June 28, 2022, a group of Cruise robotaxis abruptly stopped working on a street in San Francisco,
California, which caused traffic to stop for several hours until employees of the company arrived. Cruise, which
is backed by General Motors and Honda, has been testing its technology in San Francisco since February, but
only launched a commercial robotaxi service a week prior to this malfunction. The cars have no human driver
at all but operate under certain restrictions (good weather and a speed limit of 30 mph). They only offer the taxi
service in a dedicated area of the city during after-hours, between 10 PM and 6 AM [7]. While no one was hurt
in this instance, several questions are raised concerning the maturity of autonomous system technology
and the need to ensure that these autonomous systems operate as intended. The outcome is that the public
is concerned and does not trust such systems. In order to handle such events in the future, we can find several
approaches in the literature. Some of the methods include observer-based fault estimation using sensor measurements [8],
nature-inspired optimal control systems [9] and predictive control models [10]. All of these approaches add a layer to the system that
is supposed to detect any faulty behavior. The task in such cases is to translate the predictions
of the control systems into a form that operators and decision makers are able to understand. The system
has to provide a way to explain what happened and what action has to be taken by humans. This is one of the
deliverables that XAI is supposed to yield.
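
As a simple illustration of the residual-monitoring idea behind such detection layers, the sketch below implements a Luenberger-style observer for a small linear system and flags a fault when the residual between the measured output and the observer's predicted output exceeds a threshold. All matrices, the observer gain, the injected fault, and the threshold are assumed example values, not taken from the cited works.

```python
# Illustrative sketch only (values are assumed, not from the cited works):
# a Luenberger-style observer flags a fault when the residual between the
# measured output and the predicted output exceeds a threshold.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discrete-time state transition
C = np.array([[1.0, 0.0]])          # only the first state is measured
L = np.array([[0.5],
              [0.3]])               # observer gain; (A - L C) is stable here
THRESHOLD = 0.5                     # residual magnitude that signals a fault

x = np.array([[0.0], [1.0]])        # true state of the plant
x_hat = np.zeros((2, 1))            # observer's state estimate

for k in range(100):
    y = C @ x                       # measurement from the plant
    if 60 <= k < 70:
        y = y + 2.0                 # inject an additive sensor fault
    residual = y - C @ x_hat        # innovation: measured minus predicted
    if abs(residual.item()) > THRESHOLD:
        print(f"step {k}: possible fault, residual = {residual.item():+.2f}")
    x_hat = A @ x_hat + L @ residual  # observer update driven by the residual
    x = A @ x                       # propagate the plant (no input, no noise)
```

An XAI layer on top of such a detector would then need to express not just that the residual crossed the threshold, but which measurement drove it and what corrective action is expected of the human operator.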

According to the National Institute of Standards and Technology (NIST) [11], determining that an AI system is