DECLARATIONS
Authors’ contributions
Wrote and reviewed the paper: Zander E, van Oostendorp B, Bede B
Availability of data and materials
There are no applicable datasets; the implementation of a PyTorch-compatible ANFIS layer is available at https://github.com/Squeemos/pytorch_anfis.
Financial support and sponsorship
None.
Conflicts of interest
All authors declared that there are no conflicts of interest.
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2023.