Page 6 of 14 | Li et al. J Mater Inf 2024;4:4 | http://dx.doi.org/10.20517/jmi.2023.41
For the neighboring elements at each adsorption site, we utilized six elemental properties as follows: the atomic number of the element (Z), the Pauling electronegativity of the element (χ), the number of neighboring atoms through which the element is coordinated to the adsorbate (CN), the average adsorption energy of the element (ΔE_ads), the distance (D) from the atom to the adsorbate H, and the valence number (VN). The χ value was obtained from the Mendeleev database [29], and the ΔE_ads value was calculated from the adsorption energy database as the average of the adsorption energies of all catalysts containing that element. Moreover, we set the atomic number and Pauling electronegativity values for each layer to the layer averages. In this way, the fill values differ from case to case, and the relevant chemical properties of each case are better represented.
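As a minimal sketch of how the six per-element descriptors could be assembled, the snippet below builds the feature vector [Z, χ, CN, ΔE_ads, D, VN] for one neighboring atom. The adsorption-energy database, the helper function, and all numeric values except Pt's atomic number and electronegativity are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical adsorption-energy database: element -> adsorption energies (eV)
# of all catalysts containing that element (values are made up).
ads_db = {"Pt": [-0.42, -0.38, -0.51], "Cu": [-0.12, -0.20]}

def descriptor(Z, chi, CN, element, D, VN, db):
    """Assemble the six-feature descriptor [Z, chi, CN, dE_ads, D, VN]
    for one neighboring atom; dE_ads is the average adsorption energy
    over all database entries containing the element."""
    dE_avg = float(np.mean(db[element]))
    return np.array([Z, chi, CN, dE_avg, D, VN])

# Pt neighbor: Z and chi are real; CN, D, and VN are placeholder values.
d_pt = descriptor(Z=78, chi=2.28, CN=3, element="Pt", D=1.76, VN=10, db=ads_db)
print(d_pt.shape)  # (6,)
```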
Generally, including two layers of nearest neighbors is regarded as effective in capturing the local information of atoms [37,42]. However, in our experiments, we found that considering only two layers of atoms gives poor accuracy in some cases, especially for nanoparticles or rough surfaces, because when the adsorption site is located at a corner, it contains too few neighbor atoms. Therefore, we propose incorporating the third neighbor layer as well. Experimentally, for surface adsorption, this method adequately captures the information needed to calculate the adsorption energy. Our final results demonstrate that only six descriptors per layer suffice for the model to reach its highest accuracy.
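The shell structure described above can be sketched as a simple distance-based grouping: atoms around the adsorption site are binned into first, second, and third neighbor shells. The coordinates and cutoff radii below are illustrative assumptions, not the paper's actual geometry or criteria.

```python
import numpy as np

# Toy geometry: adsorbate H at the origin plus four surface atoms.
positions = np.array([
    [0.0, 0.0, 0.0],   # adsorbate H (the site)
    [0.0, 0.0, 1.6],   # first-shell atom
    [1.5, 0.0, 1.8],   # first-shell atom
    [0.0, 2.6, 1.9],   # second-shell atom
    [3.1, 3.0, 2.0],   # third-shell atom
])

def neighbor_shells(pos, site=0, cutoffs=(2.5, 3.5, 5.0)):
    """Return index lists for the three neighbor shells of `site`,
    grouped by distance bands (cutoffs in angstroms, illustrative)."""
    d = np.linalg.norm(pos - pos[site], axis=1)
    shells, lower = [], 1e-8  # exclude the site itself
    for upper in cutoffs:
        shells.append(np.where((d > lower) & (d <= upper))[0].tolist())
        lower = upper
    return shells

print(neighbor_shells(positions))  # [[1, 2], [3], [4]]
```

A corner site would populate only a few entries in the first two shells, which is why the third shell adds useful information.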
Applying local environment interaction to convolutional networks
In the latest adsorption energy calculations, graph neural networks are typically employed. The descriptors of our LEI-framework can also be arranged into a 6 × 11 matrix, which meets all the necessary criteria for neural network input. We therefore propose a novel approach: feeding the LEI-framework descriptors into a convolutional neural network in matrix form. By doing so, we can leverage the advantages of DL in the context of adsorption energy calculations. In this model, each element in a layer is associated with six features, which can be represented as:
e_i = [d_1, d_2, d_3, d_4, d_5, d_6]   (1)

where each d_j corresponds to one of the descriptors mentioned earlier.
For an input with three layers and eleven elements, the input X can be represented as:
X = [e_i | i ∈ (1, 11)]   (2)
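Assembling the input matrix of Eq. (2) amounts to stacking the per-element descriptors as columns. The sketch below uses random placeholder descriptors and zero-fills any unoccupied element slots; the zero-filling convention is an assumption for illustration.

```python
import numpy as np

# Placeholder descriptors: suppose only 8 of the 11 element slots are
# occupied for this adsorption site (values are random stand-ins).
rng = np.random.default_rng(0)
descriptors = [rng.standard_normal(6) for _ in range(8)]

X = np.zeros((6, 11))  # 6 features x 11 element slots
for i, e_i in enumerate(descriptors):
    X[:, i] = e_i      # column i holds descriptor e_i

print(X.shape)  # (6, 11)
```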
Here, X is a 6 × 11 matrix that serves as the input to a convolutional neural network, denoted as f(X), built from residual blocks. Each block comprises two convolutional layers and a skip connection that adds the block input to the output of the second convolutional layer; stacking such blocks yields a residual network (ResNet). Our ResNet has four stages comprising 3, 4, 6, and 3 residual blocks, respectively (32 convolutional layers in total). The output of the n-th residual block can be written as:
X_n = X_{n-1} + F_n(X_{n-1})   (3)

where F_n represents the residual function of the n-th block, and X_{n-1} is the output of the (n-1)-th block.
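The residual update of Eq. (3) can be sketched numerically as follows. The two convolutional layers are stood in for by fixed linear maps here purely to keep the example self-contained; a real implementation would use a DL framework, and the weights are random placeholders.

```python
import numpy as np

# Toy residual block: X_n = X_{n-1} + F_n(X_{n-1}), where F_n is a
# stand-in for two convolutional layers (modeled as linear maps + ReLU).
rng = np.random.default_rng(1)
W1 = rng.standard_normal((6, 6)) * 0.1  # stand-in for conv layer 1
W2 = rng.standard_normal((6, 6)) * 0.1  # stand-in for conv layer 2

def residual_block(x):
    h = np.maximum(W1 @ x, 0.0)  # first layer + ReLU
    return x + W2 @ h            # skip connection adds the block input

x_prev = rng.standard_normal((6, 11))  # output of block n-1
x_n = residual_block(x_prev)           # output of block n
print(x_n.shape)  # (6, 11)
```

The skip connection preserves the feature-map shape, which is what lets the blocks be stacked stage after stage.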
Finally, global average pooling and a fully connected layer are used to transform the feature map of the last
layer into a scalar output:
y = w · GAP(X_N) + b   (4)
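The readout described above (global average pooling followed by a fully connected layer) can be sketched as below. The channel count, weights, and feature-map values are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

# Toy readout: global average pooling over the final feature map, then a
# fully connected layer producing the scalar adsorption-energy prediction.
rng = np.random.default_rng(2)
feature_map = rng.standard_normal((64, 3, 3))  # channels x height x width (placeholder)

w = rng.standard_normal(64)  # fully connected weights (placeholder)
b = 0.0                      # bias (placeholder)

pooled = feature_map.mean(axis=(1, 2))  # global average pooling -> (64,)
energy = float(w @ pooled + b)          # scalar output
print(pooled.shape)  # (64,)
```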