Page 273 | Su et al. Intell Robot 2022;2(3):244–74 | http://dx.doi.org/10.20517/ir.2022.17
arXiv:1911.03064 2019. DOI
78. Shin S, Song K, Jang J, et al. Neutralizing gender bias in word embeddings with latent disentanglement and counterfactual generation.
In: Empirical Methods in Natural Language Processing Conference; 2020. pp. 3126–40. DOI
79. Yang Z, Feng J. A causal inference method for reducing gender bias in word embedding relations. In: AAAI Conference on Artificial
Intelligence; 2020. pp. 9434–41. DOI
80. Lu K, Mardziel P, Wu F, Amancharla P, Datta A. Gender bias in neural natural language processing. In: Logic, Language, and Security;
2020. pp. 189–202. DOI
81. Chen IY, Szolovits P, Ghassemi M. Can AI help reduce disparities in general medical and mental health care? AMA J Ethics 2019;21:167–
79. DOI
82. Zink A, Rose S. Fair regression for health care spending. Biometrics 2020;76:973–82. DOI
83. Pfohl SR, Duan T, Ding DY, Shah NH. Counterfactual reasoning for fair clinical risk prediction. In: Machine Learning for Healthcare
Conference; 2019. pp. 325–58. Available from: http://proceedings.mlr.press/v106/pfohl19a.html.
84. Pfohl SR, Foryciarz A, Shah NH. An empirical characterization of fair machine learning for clinical risk prediction. J Biomed Inform
2021;113:103621. DOI
85. Ramsey JD, Zhang K, Glymour M, et al. TETRAD—A toolbox for causal discovery. In: International Workshop on Climate Informatics;
2018. Available from: http://www.phil.cmu.edu/tetrad/.
86. Zhang K, Ramsey J, Gong M, et al. Causal-learn: Causal discovery for Python; 2022. https://github.com/cmu-phil/causal-learn.
87. Wongchokprasitti CK, Hochheiser H, Espino J, et al. bd2kccd/py-causal v1.2.1; 2019. https://doi.org/10.5281/zenodo.3592985.
88. Runge J, Nowack P, Kretschmer M, Flaxman S, Sejdinovic D. Detecting and quantifying causal associations in large nonlinear time
series datasets. Sci Adv 2019;5:eaau4996. DOI
89. Zhang K, Zhu S, Kalander M, et al. gCastle: A Python toolbox for causal discovery. arXiv preprint arXiv:2111.15155 2021. Available
from: https://arxiv.org/abs/2111.15155.
90. Chen H, Harinen T, Lee JY, Yung M, Zhao Z. CausalML: Python package for causal machine learning. arXiv preprint arXiv:2002.11631
2020. Available from: https://arxiv.org/abs/2002.11631.
91. Tingley D, Yamamoto T, Hirose K, Keele L, Imai K. mediation: R package for causal mediation analysis. J Stat Softw 2014;59:1–38.
DOI
92. Tikka S, Karvanen J. Identifying causal effects with the R Package causaleffect. J Stat Softw 2017;76:1–30. DOI
93. Sharma A, Kiciman E. DoWhy: an end-to-end library for causal inference. arXiv preprint arXiv:2011.04216 2020. Available from:
https://arxiv.org/abs/2011.04216.
94. Bellamy RK, Dey K, Hind M, et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM J Res Dev
2019;63:1–15. DOI
95. Bird S, Dudík M, Edgar R, et al. Fairlearn: A toolkit for assessing and improving fairness in AI. Technical Report MSR-TR-2020-32,
Microsoft; 2020.
96. Geiger D, Heckerman D. Learning Gaussian networks. In: Conference on Uncertainty in Artificial Intelligence; 1994. pp. 235–43. DOI
97. Janzing D, Schölkopf B. Causal inference using the algorithmic Markov condition. IEEE Trans Inf Theory 2010;56:5168–94. DOI
98. Kalainathan D, Goudet O, Guyon I, Lopez-Paz D, Sebag M. Structural agnostic modeling: adversarial learning of causal graphs. arXiv
preprint arXiv:1803.04929 2018. Available from: https://doi.org/10.48550/arXiv.1803.04929.
99. Hoyer PO, Shimizu S, Kerminen AJ, Palviainen M. Estimation of causal effects using linear non-Gaussian causal models with hidden
variables. Int J Approx Reason 2008;49:362–78. DOI
100. Huang Y, Valtorta M. Identifiability in causal Bayesian networks: a sound and complete algorithm. In: National Conference on Artificial
Intelligence; 2006. pp. 1149–54. DOI
101. Tian J. Identifying linear causal effects. In: AAAI Conference on Artificial Intelligence; 2004. pp. 104–11. DOI
102. Shpitser I. Counterfactual graphical models for longitudinal mediation analysis with unobserved confounding. Cogn Sci 2013;37:1011–
35. DOI
103. Malinsky D, Shpitser I, Richardson T. A potential outcomes calculus for identifying conditional path-specific effects. In: International
Conference on Artificial Intelligence and Statistics; 2019. pp. 3080–88. Available from: http://proceedings.mlr.press/v89/malinsky19b.
html. [PMID: 31886462]
104. Shpitser I, Pearl J. Identification of conditional interventional distributions. In: Conference on Uncertainty in Artificial Intelligence;
2006. pp. 437–44. DOI
105. Tian J, Pearl J. A general identification condition for causal effects. eScholarship, University of California; 2002.
106. Shpitser I, Pearl J. What counterfactuals can be tested. In: Conference on Uncertainty in Artificial Intelligence; 2007. pp. 352–59. DOI
107. Avin C, Shpitser I, Pearl J. Identifiability of path-specific effects. In: International Joint Conference on Artificial Intelligence; 2005. pp.
357–63. DOI
108. Hu Y, Wu Y, Zhang L, Wu X. A generative adversarial framework for bounding confounded causal effects. In: AAAI Conference on
Artificial Intelligence; 2021. pp. 12104–12. Available from: https://ojs.aaai.org/index.php/AAAI/article/view/17437.
109. Louizos C, Shalit U, Mooij JM, et al. Causal effect inference with deep latent-variable models. In: Advances in Neural Information Processing Systems; 2017. pp. 6446–56. DOI
110. Guo R, Li J, Liu H. Learning individual causal effects from networked observational data. In: International Conference on Web Search
and Data Mining; 2020. pp. 232–40. DOI
111. Guo R, Li J, Liu H. Counterfactual evaluation of treatment assignment functions with networked observational data. In: Proceedings of
the SIAM International Conference on Data Mining; 2020. pp. 271–79. DOI