
Su et al. Intell Robot 2022;2(3):244-74  |  http://dx.doi.org/10.20517/ir.2022.17    Page 268

ment. A complementary work [121] shows the importance of causal modeling in dynamic systems. However,
due to the complexity of real-world environments, it is impossible to model the real environment with high
fidelity. Moreover, current studies are carried out on low-dimensional data. Open challenges therefore include
how to capture the important dynamics in simulation, how to use the collected data effectively so as to balance
simulated results against real-world applicability, and how to scale to high-dimensional data. In addition,
future causality-based fairness-enhancing studies could be combined with dynamic game theory to improve
fairness in adversarial environments and to investigate detection mechanisms for dynamic fairness.


               7.5. Other challenges
AI has matured considerably after a period of rapid development. Although most research on AI thus far
has focused on weak AI, the design of strong AI (or human-level AI) will become increasingly vital and receive
growing attention in the near future. Weak AI focuses only on solving the given tasks input into the program,
while strong AI or human-level AI (HLAI) means that its ability to think and act is comparable to that of a
human. Developing HLAI will therefore face more challenges. Saghiri et al. [122] comprehensively summarized
the challenges of designing HLAI. As they note, unfairness issues are closely related to other challenges in AI.
There is still a gap between solving the unfairness problem in AI alone and building a trustworthy AI. Next,
this review discusses the relationship between fairness and the other challenges in AI.


Fairness and robustness. The robustness of an AI model is manifested in its out-of-distribution generalization
ability, that is, the performance of the model remains stable when the input data change abnormally. AI models
with poor robustness are prone to failure and thus cannot achieve fairness. In addition, an attacker may recover
private information about the training data, or even the training data themselves, without having any legitimate
access to the data. However, research on robustness is still in its infancy, and both the theory and the notions
of robustness are currently lacking.
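To make the stability notion above concrete, the following minimal sketch (a hypothetical toy linear scorer, not a method from the survey) measures how often a fixed classifier's decisions flip as the input perturbation grows from mild to abnormal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a fixed linear decision rule.
w = np.array([0.8, -0.5])

def predict(X):
    """Binary decision: 1 if the linear score is positive."""
    return (X @ w > 0.0).astype(int)

X = rng.normal(size=(1000, 2))  # nominal inputs
base = predict(X)

# Robustness probe: fraction of decisions that flip under
# additive input noise of increasing strength.
flip_rates = {}
for scale in (0.01, 0.1, 1.0):
    X_perturbed = X + rng.normal(scale=scale, size=X.shape)
    flip_rates[scale] = float(np.mean(predict(X_perturbed) != base))

print(flip_rates)  # flip rate grows with perturbation strength
```

A robust model keeps the flip rate low for mild perturbations; a rapidly growing flip rate signals the kind of instability that also undermines any fairness guarantee attached to the model's outputs.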


Fairness and interpretability. The explainability of discrimination is very important for improving users’ un-
derstanding of and trust in AI models, and is even required by law in many fields. On the other hand, in-
terpretability makes it possible to explain and judge whether an AI model satisfies fairness, which assists in
improving the fairness of AI models. In some critical areas, e.g., healthcare, this challenge becomes more
serious because every decision must be both fair and interpretable.


               Causality-based methods are promising solutions to these challenges, as they can not only reveal the mech-
               anisms by which data are generated but also enable a better understanding of the causes of discrimination.
Of course, HLAI faces far more challenges than those above; more on the challenges of designing HLAI can
be found in Saghiri et al.’s work [122] .


               7.6. Future trends
More realistic application scenarios. Most early studies were carried out under strong assumptions, e.g., the
assumption that there are no hidden confounders between observed variables. However, these assumptions
are difficult to satisfy in practical applications, which leads to erroneous evaluations: a model trained under
them cannot be guaranteed to satisfy the fairness requirement. Current studies therefore tend to relax these
assumptions and address the unfairness of algorithms in more general scenarios.
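The hidden-confounder concern can be illustrated with a small simulation (an illustrative sketch, not drawn from the survey): an unobserved common cause U of an attribute A and an outcome Y makes the naive difference in means badly overstate the true effect of A, while adjusting for U recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hidden confounder U drives both the attribute A and the outcome Y.
U = rng.normal(size=n)
A = (U + rng.normal(size=n) > 0).astype(float)

true_effect = 1.0
Y = true_effect * A + 2.0 * U + rng.normal(size=n)

# Naive evaluation ignores U and is badly biased upward.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Adjusting for U (possible here only because the simulation observes it)
# recovers the true effect via least squares on [1, A, U].
design = np.column_stack([np.ones(n), A, U])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
adjusted = beta[1]

print(f"naive={naive:.2f}  adjusted={adjusted:.2f}  true={true_effect}")
```

In real applications U is not observed, which is exactly why relaxing the no-hidden-confounder assumption (e.g., via sensitivity analysis or instrumental variables) is an active research direction.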

Privacy protection. Owing to legal requirements, sensitive attributes are often inaccessible in real applications.
Fairness constraints require predictors to be, in some sense, independent of group membership attributes.
Privacy preservation raises the same question: Is it possible to guarantee that even the strongest adversary
cannot steal an individual’s private information through inference attacks? Causal modeling of the problem
not only helps solve the fairness issue but also enables stronger privacy preservation than statistics-based
methods [123] . Combining existing fairness mechanisms with differential privacy is a promising research
direction.
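As a sketch of how differential privacy could plug into a fairness pipeline (illustrative only: the mechanism is the standard Laplace mechanism, and the demographic-parity statistic and data here are stand-ins), one can release group-wise positive counts with calibrated noise before computing the fairness gap:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Standard eps-differentially-private release via Laplace noise."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Toy predictions and binary group labels (illustrative data).
group = rng.integers(0, 2, size=1000)
y_hat = rng.integers(0, 2, size=1000)

# Demographic-parity gap from two positive counts; changing one
# individual's record changes each count by at most 1 (sensitivity 1).
epsilon = 1.0  # per-count privacy budget
rates = []
for g in (0, 1):
    n_g = np.sum(group == g)          # group size treated as public here
    pos = np.sum(y_hat[group == g])
    noisy_pos = laplace_mechanism(float(pos), 1.0, epsilon, rng)
    rates.append(noisy_pos / n_g)

private_gap = abs(rates[0] - rates[1])
print(f"private demographic-parity gap ~ {private_gap:.3f}")
```

The design tension the text points to is visible even in this sketch: smaller epsilon gives stronger privacy but noisier fairness estimates, so the fairness audit itself must budget its privacy cost.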