
as an auxiliary tool to incorporate scientific domain knowledge. In addition, causal graphs can encode causal statements that rest on plausible assumptions, even when those statements lack grounding in established scientific knowledge, and use them to infer plausible conclusions. To conclude, causality-based fairness-enhancing approaches are promising solutions for reducing discrimination, despite the challenges that remain to be overcome.



               DECLARATIONS
               Authors’ contributions
               Project administration: Yu G, Yan Z
               Writing-original draft: Su C, Yu G, Wang J
               Writing-review and editing: Yu G, Yan Z, Cui L


               Availability of data and materials
               Not applicable.

               Financial support and sponsorship
               None.

               Conflicts of interest
All authors declared that they have no conflicts of interest related to this work.

               Ethical approval and consent to participate
               Not applicable.

               Consent for publication
               Not applicable.


               Copyright
               © The Author(s) 2022.

