solution strengthening, precipitation hardening and TWIP/TRIP. The main obstacle to designing multiphase HEAs and identifying optimal AM parameters is the tedious experimentation involved. ML offers an effective way to address this problem by directly mapping descriptors to target properties without modeling the complex physical metallurgy process. However, several critical issues remain to be solved in the future:
Although as-printed multiphase HEAs show excellent properties at room temperature, their mechanical properties at elevated temperatures are rarely reported. Like Ni-based superalloys, HEAs can become embrittled at intermediate temperatures of ~650-900 °C, a phenomenon known as intermediate-temperature embrittlement [123-125]. This behavior may also occur in as-printed HEAs, and how to overcome it should be one of the focuses of future research.
Microstructure and phase stability at elevated temperatures essentially determine the working temperature range and engineering reliability of as-printed HEAs. High-density dislocation networks are considered one of the main reasons for the enhanced room-temperature properties of as-printed samples. How these microstructures evolve at elevated temperatures is a matter of concern. More importantly, these dislocation structures may significantly influence the resistance to high-temperature creep and oxidation, which also deserves detailed study [97,126].
ML is expected to provide an effective means of screening alloys with desired properties and of obtaining optimal AM parameters without tedious experiments. However, there are many possible ML algorithms and material descriptors, resulting in numerous possible predictive outcomes [112]. Thus, a reasonable method is needed to rapidly select the best combination of descriptors and ML algorithm. Moreover, many ML algorithms, especially those involving deep learning, lack interpretability and are often regarded as black boxes [127]. Sometimes, understanding the reasons behind a decision is more important than the decision itself. Therefore, efforts should be made to improve the interpretability of ML models in addition to their efficiency and accuracy.
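As a minimal illustration of both points, the sketch below (not taken from the cited studies) screens every combination of candidate descriptors and ML algorithms by cross-validation, then applies permutation importance to probe which descriptors drive the winning model. The descriptor names, the synthetic data, and the target are hypothetical placeholders standing in for typical HEA/AM features.

```python
# Minimal sketch: exhaustive descriptor/algorithm screening by cross-validation,
# followed by a permutation-importance check for interpretability.
# All data and descriptor names below are synthetic placeholders.
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Rows = alloys, columns = candidate descriptors (hypothetical names).
descriptors = ["VEC", "delta_r", "dHmix", "dSmix", "laser_power", "scan_speed"]
X = rng.normal(size=(120, len(descriptors)))
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=120)  # toy target property

models = {
    "ridge": make_pipeline(StandardScaler(), Ridge(alpha=1.0)),
    "svr": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Score every (descriptor subset, algorithm) pair by mean 5-fold CV R^2.
best = (-np.inf, None, None)
for k in range(2, len(descriptors) + 1):
    for subset in combinations(range(len(descriptors)), k):
        for name, model in models.items():
            score = cross_val_score(model, X[:, subset], y, cv=5, scoring="r2").mean()
            if score > best[0]:
                best = (score, [descriptors[i] for i in subset], name)

score, subset_names, name = best
print(f"best CV R^2 = {score:.3f} with {name} on descriptors {subset_names}")

# Interpretability check: permutation importance shows which descriptors
# actually drive the prediction, mitigating the "black box" concern.
idx = [descriptors.index(d) for d in subset_names]
fitted = models[name].fit(X[:, idx], y)
imp = permutation_importance(fitted, X[:, idx], y, n_repeats=20, random_state=0)
for d, m in sorted(zip(subset_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{d}: {m:.3f}")
```

In practice, Bayesian optimization could replace the brute-force loop once the descriptor space grows, and model-agnostic explanation tools such as SHAP could substitute for permutation importance; the exhaustive search is shown here only because the toy descriptor set is small.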
DECLARATIONS
Acknowledgements
The authors acknowledge members from Yang’s Group for discussions towards the preparation of this
work.
Authors’ contributions
Proposed the review and wrote the manuscript: Zhou Y
Collated, analyzed, and organized the literature: Zhang Z, Wang D, Xiao W
Discussed key points in this review: Zhou Y, Ju J, Liu S, Xiao B
Provided supervision, acquired funding, and provided stylistic/grammatical revision on the manuscript:
Yan M, Yang T
Availability of data and materials
Not applicable.
Financial support and sponsorship
This research was supported by the Shenzhen Science and Technology Program (Grant No. SGDX20210823104002016), the National Natural Science Foundation of China (No. 52101151), the Hong Kong Research Grants Council (RGC) (Grant No. CityU 21205621), and the Shenzhen Science and Technology Innovation Commission (JCYJ20180504165824643).