Availability of data and materials
Not applicable.
Financial support and sponsorship
Research reported in this paper was supported by the National Institute of Mental Health of the National Institutes of Health under award number R01MH125867.
Conflicts of interest
All authors declare that there are no conflicts of interest.
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2022.