Wu et al. J. Mater. Inf. 2025, 5, 15. https://dx.doi.org/10.20517/jmi.2024.67
59. Park, Y.; Kim, J.; Hwang, S.; Han, S. Scalable parallel algorithm for graph neural network interatomic potentials in molecular dynamics simulations. J. Chem. Theory Comput. 2024, 20, 4857-68.
60. Merchant, A.; Batzner, S.; Schoenholz, S. S.; Aykol, M.; Cheon, G.; Cubuk, E. D. Scaling deep learning for materials discovery. Nature 2023, 624, 80-5.
61. Deng, B.; Zhong, P.; Jun, K.; et al. CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling. Nat. Mach. Intell. 2023, 5, 1031-41.
62. Chen, C.; Ong, S. P. A universal graph deep learning interatomic potential for the periodic table. Nat. Comput. Sci. 2022, 2, 718-28.
63. Tang, D.; Ketkaew, R.; Luber, S. Machine learning interatomic potentials for heterogeneous catalysis. Chemistry 2024, 30, e202401148.
64. Batatia, I.; Kovács, D. P.; Simm, G. N. C.; Ortner, C.; Csányi, G. MACE: higher order equivariant message passing neural networks for fast and accurate force fields. arXiv 2022, arXiv:2206.07697. Available online: https://doi.org/10.48550/arXiv.2206.07697 (accessed 15 Jan 2025)
65. Batatia, I.; Batzner, S.; Kovács, D. P.; et al. The design space of E(3)-equivariant atom-centered interatomic potentials. arXiv 2022, arXiv:2205.06643. Available online: https://doi.org/10.48550/arXiv.2205.06643 (accessed 15 Jan 2025)
66. Riebesell, J.; Goodall, R. E. A.; Benner, P.; et al. Matbench Discovery - an evaluation framework for machine learning crystal stability prediction. arXiv 2023, arXiv:2308.14920. Available online: https://doi.org/10.48550/arXiv.2308.14920 (accessed 15 Jan 2025)
67. Batatia, I.; Benner, P.; Chiang, Y.; et al. A foundation model for atomistic materials chemistry. arXiv 2023, arXiv:2401.00096. Available online: https://doi.org/10.48550/arXiv.2401.00096 (accessed 15 Jan 2025)
68. OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; et al. GPT-4 technical report. arXiv 2023, arXiv:2303.08774. Available online: https://doi.org/10.48550/arXiv.2303.08774 (accessed 15 Jan 2025)
69. Yao, Y.; Duan, J.; Xu, K.; Cai, Y.; Sun, Z.; Zhang, Y. A survey on large language model (LLM) security and privacy: the good, the bad, and the ugly. High-Confid. Comput. 2024, 4, 100211.
70. Chang, Y.; Wang, X.; Wang, J.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1-45.
71. Augenstein, I.; Baldwin, T.; Cha, M.; et al. Factuality challenges in the era of large language models and opportunities for fact-checking. Nat. Mach. Intell. 2024, 6, 852-63.
72. Patil, R.; Gudivada, V. A review of current trends, techniques, and challenges in large language models (LLMs). Appl. Sci. 2024, 14, 2074.
73. Beltagy, I.; Lo, K.; Cohan, A. SciBERT: a pretrained language model for scientific text. arXiv 2019, arXiv:1903.10676. Available online: https://doi.org/10.48550/arXiv.1903.10676 (accessed 15 Jan 2025)
74. Wang, L.; Chen, X.; Du, Y.; Zhou, Y.; Gao, Y.; Cui, W. CataLM: empowering catalyst design through large language models. arXiv 2024, arXiv:2405.17440. Available online: https://doi.org/10.48550/arXiv.2405.17440 (accessed 15 Jan 2025)
75. Ding, R.; Wang, X.; Tan, A.; Li, J.; Liu, J. Unlocking new insights for electrocatalyst design: a unique data science workflow leveraging internet-sourced big data. ACS Catal. 2023, 13, 13267-81.
76. Minh, D.; Wang, H. X.; Li, Y. F.; Nguyen, T. N. Explainable artificial intelligence: a comprehensive review. Artif. Intell. Rev. 2022, 55, 3503-68.
77. Wang, S. H.; Pillai, H. S.; Wang, S.; Achenie, L. E. K.; Xin, H. Infusing theory into deep learning for interpretable reactivity prediction. Nat. Commun. 2021, 12, 5288.
78. Ghanekar, P. G.; Deshpande, S.; Greeley, J. Adsorbate chemical environment-based machine learning framework for heterogeneous catalysis. Nat. Commun. 2022, 13, 5788.
79. Noh, J.; Gu, G. H.; Kim, S.; Jung, Y. Uncertainty-quantified hybrid machine learning/density functional theory high throughput screening method for crystals. J. Chem. Inf. Model. 2020, 60, 1996-2003.
80. Abed, J.; Heras-Domingo, J.; Sanspeur, R. Y.; et al. Pourbaix machine learning framework identifies acidic water oxidation catalysts exhibiting suppressed ruthenium dissolution. J. Am. Chem. Soc. 2024, 146, 15740-50.
81. Zhang, J.; Wang, C.; Huang, S.; et al. Design high-entropy electrocatalyst via interpretable deep graph attention learning. Joule 2023, 7, 1832-51.
82. Deringer, V. L.; Bartók, A. P.; Bernstein, N.; Wilkins, D. M.; Ceriotti, M.; Csányi, G. Gaussian process regression for materials and molecules. Chem. Rev. 2021, 121, 10073-141.
83. Ulissi, Z. W.; Singh, A. R.; Tsai, C.; Nørskov, J. K. Automated discovery and construction of surface phase diagrams using machine learning. J. Phys. Chem. Lett. 2016, 7, 3931-5.
84. Christensen, A. S.; Bratholm, L. A.; Faber, F. A.; von Lilienfeld, O. A. FCHL revisited: faster and more accurate quantum machine learning. J. Chem. Phys. 2020, 152, 044107.
85. Xu, W.; Reuter, K.; Andersen, M. Predicting binding motifs of complex adsorbates using machine learning with a physics-inspired graph representation. Nat. Comput. Sci. 2022, 2, 443-50.
86. Togninalli, M.; Ghisu, E.; Llinares-López, F.; Rieck, B.; Borgwardt, K. Wasserstein Weisfeiler-Lehman graph kernels. arXiv 2019, arXiv:1906.01277. Available online: https://doi.org/10.48550/arXiv.1906.01277 (accessed 15 Jan 2025)
87. Grisafi, A.; Bussy, A.; Salanne, M.; Vuilleumier, R. Predicting the charge density response in metal electrodes. Phys. Rev. Mater.

