                   from: http://arxiv.org/abs/1707.00044.
               48.  Kamishima T, Akaho S, Asoh H, Sakuma J. Fairness-aware classifier with prejudice remover regularizer. In: Joint European Conference
                   on Machine Learning and Knowledge Discovery in Databases; 2012. pp. 35–50. DOI
               49.  Zafar MB, Valera I, Rodriguez MG, Gummadi KP. Fairness constraints: mechanisms for fair classification. In: Artificial Intelligence
                   and Statistics; 2017. pp. 962–70. Available from: http://proceedings.mlr.press/v54/zafar17a.html.
               50.  Zafar MB, Valera I, Gomez Rodriguez M, Gummadi KP. Fairness beyond disparate treatment and disparate impact: learning classification
                   without disparate mistreatment. In: The Web Conference; 2017. pp. 1171–80. DOI
               51.  Hu Y, Wu Y, Zhang L, Wu X. Fair multiple decision making through soft interventions. Adv Neural Inf Process Syst 2020;33:17965–75. DOI
               52.  Garg S, Perot V, Limtiaco N, et al. Counterfactual fairness in text classification through robustness. In: Proceedings of the AAAI/ACM
                   Conference on AI, Ethics, and Society; 2019. pp. 219–26. DOI
               53.  Di Stefano PG, Hickey JM, Vasileiou V. Counterfactual fairness: removing direct effects through regularization. arXiv preprint
                    arXiv:2002.10774 2020. Available from: https://arxiv.org/abs/2002.10774.
               54.  Kim H, Shin S, Jang J, et al. Counterfactual fairness with disentangled causal effect variational autoencoder. In: AAAI Conference on
                   Artificial Intelligence; 2021. pp. 8128–36. Available from: https://ojs.aaai.org/index.php/AAAI/article/view/16990.
               55.  Corbett-Davies S, Pierson E, Feller A, Goel S, Huq A. Algorithmic decision making and the cost of fairness. In: Proceedings of the
                   ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2017. pp. 797–806. DOI
               56.  Dwork C, Immorlica N, Kalai AT, Leiserson M. Decoupled classifiers for group-fair and efficient machine learning. In: International
                    Conference on Fairness, Accountability and Transparency; 2018. pp. 119–33. Available from: http://proceedings.mlr.press/v81/dwork18a.html.
               57.  Wu Y, Zhang L, Wu X. Counterfactual fairness: unidentification, bound and algorithm. In: International Joint Conference on Artificial
                   Intelligence; 2019. pp. 1438–44. DOI
               58.  Kusner M, Russell C, Loftus J, Silva R. Making decisions that reduce discriminatory impacts. In: International Conference on Machine
                   Learning; 2019. pp. 3591–600. Available from: http://proceedings.mlr.press/v97/kusner19a/kusner19a.pdf.
               59.  Mishler A, Kennedy EH, Chouldechova A. Fairness in risk assessment instruments: post-processing to achieve counterfactual equalized
                   odds. In: ACM Conference on Fairness, Accountability, and Transparency; 2021. pp. 386–400. DOI
               60.  Woodworth B, Gunasekar S, Ohannessian MI, Srebro N. Learning non-discriminatory predictors. In: Conference on Learning Theory;
                   2017. pp. 1920–53. Available from: http://proceedings.mlr.press/v65/woodworth17a.html.
               61.  Calders T, Verwer S. Three naive Bayes approaches for discrimination-free classification. Data Min Knowl Discov 2010;21:277–92.
                   DOI
               62.  Friedler SA, Scheidegger C, Venkatasubramanian S, et al. A comparative study of fairness-enhancing interventions in machine learning.
                   In: Proceedings of the Conference on Fairness, Accountability, and Transparency; 2019. pp. 329–38. DOI
               63.  Martínez-Plumed F, Ferri C, Nieves D, Hernández-Orallo J. Fairness and missing values. arXiv preprint arXiv:1905.12728 2019.
                    Available from: http://arxiv.org/abs/1905.12728.
               64.  Bareinboim E, Pearl J. Causal inference and the data-fusion problem. Proc Natl Acad Sci 2016;113:7345–52. DOI
               65.  Spirtes P, Meek C, Richardson T. Causal inference in the presence of latent variables and selection bias. In: Conference on Uncertainty
                   in Artificial Intelligence; 1995. pp. 499–506. DOI
               66.  Goel N, Amayuelas A, Deshpande A, Sharma A. The importance of modeling data missingness in algorithmic fairness: a causal
                    perspective. In: AAAI Conference on Artificial Intelligence. vol. 35; 2021. pp. 7564–73. Available from:
                    https://ojs.aaai.org/index.php/AAAI/article/view/16926.
               67.  Burke R. Multisided fairness for recommendation. arXiv preprint arXiv:1707.00093 2017. Available from:
                    http://arxiv.org/abs/1707.00093.
               68.  Wu Y, Zhang L, Wu X. On discrimination discovery and removal in ranked data using causal graph. In: Proceedings of the ACM
                   SIGKDD International Conference on Knowledge Discovery and Data Mining; 2018. pp. 2536–44. DOI
               69.  Zhao Z, Chen J, Zhou S, et al. Popularity bias is not always evil: disentangling benign and harmful bias for recommendation. arXiv
                    preprint arXiv:2109.07946 2021. Available from: https://arxiv.org/abs/2109.07946.
               70.  Zheng Y, Gao C, Li X, et al. Disentangling user interest and conformity for recommendation with causal embedding. In: The Web
                   Conference; 2021. pp. 2980–91. DOI
               71.  Zhang Y, Feng F, He X, et al. Causal intervention for leveraging popularity bias in recommendation. In: Proceedings of the 44th
                   International ACM SIGIR Conference on Research and Development in Information Retrieval; 2021. pp. 11–20. DOI
               72.  Wang W, Feng F, He X, Zhang H, Chua TS. Clicks can be cheating: counterfactual recommendation for mitigating clickbait issue.
                   In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval; 2021. pp.
                   1288–97. DOI
               73.  Li Y, Chen H, Xu S, Ge Y, Zhang Y. Towards personalized fairness based on causal notion. In: International ACM SIGIR Conference
                   on Research and Development in Information Retrieval; 2021. pp. 1054–63. DOI
               74.  Huang W, Zhang L, Wu X. Achieving counterfactual fairness for causal bandit. In: AAAI Conference on Artificial Intelligence; 2022.
                   pp. 6952–59. DOI
               75.  Zhao J, Wang T, Yatskar M, Ordonez V, Chang KW. Men also like shopping: reducing gender bias amplification using corpus-level
                   constraints. In: Conference on Empirical Methods in Natural Language Processing; 2017. pp. 2979–89. DOI
               76.  Stanovsky G, Smith NA, Zettlemoyer L. Evaluating gender bias in machine translation. In: Proceedings of the 57th Annual Meeting of
                   the Association for Computational Linguistics; 2019. pp. 1679–84. DOI
               77.  Huang PS, Zhang H, Jiang R, et al. Reducing sentiment bias in language models via counterfactual evaluation. arXiv preprint