Table 3. Comparison with other methods on FERPlus dataset
Methods Year Accuracy (%)
CSLD [9] 2016 83.85
ResNet+VGG [61] 2017 87.40
SHCNN [57] 2019 86.54
RAN [45] 2020 88.55
RAN-VGG [45] 2021 89.16
SCN [48] 2020 88.01
VTFF [54] 2021 88.81
PACVT [41] 2023 88.72
GSDNet [32] 2024 90.32
CBAM-4CNN [62] 2024 87.75
MSAFNet(ours) 2025 89.82
The bold format is used to indicate the best (highest) accuracy. CSLD: Crowd-sourced label distribution; VGG: visual geometry group networks; SHCNN: shallow convolutional neural network; RAN: region attention networks; SCN: self-cure networks; VTFF: visual transformers with feature fusion; PACVT: patch attention convolutional vision transformer; GSDNet: gradual self-distillation network; CBAM-4CNN: convolutional block attention module with convolutional neural network; MSAFNet: multi-scale attention and convolution-transformer fusion network.
Figure 7. The confusion matrices of MSAFNet on the FERPlus dataset.
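For readers who want to produce a figure in the style of Figure 7, the sketch below builds a row-normalised confusion matrix from test-set predictions and renders it as an annotated heat map. It is an illustration only, not the authors' released code: the label arrays `y_true` and `y_pred`, the helper names, and the standard 8-class FERPlus label order are assumptions.

```python
# Minimal sketch (not the authors' released code): building and plotting a
# row-normalised confusion matrix, in the style of Figure 7. The label arrays,
# helper names, and 8-class FERPlus label order are assumptions.
import numpy as np
import matplotlib.pyplot as plt

FERPLUS_CLASSES = ["neutral", "happiness", "surprise", "sadness",
                   "anger", "disgust", "fear", "contempt"]

def confusion_matrix(y_true, y_pred, num_classes):
    """Count matrix: rows index true classes, columns index predicted classes."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def plot_confusion_matrix(cm, class_names):
    """Row-normalise to per-class recall and render as an annotated heat map."""
    cm_norm = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.imshow(cm_norm, cmap="Blues", vmin=0.0, vmax=1.0)
    ax.set_xticks(range(len(class_names)))
    ax.set_xticklabels(class_names, rotation=45, ha="right")
    ax.set_yticks(range(len(class_names)))
    ax.set_yticklabels(class_names)
    ax.set_xlabel("Predicted label")
    ax.set_ylabel("True label")
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, f"{cm_norm[i, j]:.2f}", ha="center", va="center",
                    color="white" if cm_norm[i, j] > 0.5 else "black")
    fig.tight_layout()
    return fig

# Usage with random stand-in labels; real labels would come from the test set.
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(FERPLUS_CLASSES), size=1000)
y_pred = rng.integers(0, len(FERPLUS_CLASSES), size=1000)
fig = plot_confusion_matrix(
    confusion_matrix(y_true, y_pred, len(FERPLUS_CLASSES)), FERPLUS_CLASSES)
fig.savefig("ferplus_confusion_matrix.png", dpi=200)
```

Row-normalising the counts shows per-class recall along the diagonal, which is how confusion matrices for FER benchmarks are usually presented.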

