
He et al. Intell. Robot. 2025, 5(2), 313-332 | http://dx.doi.org/10.20517/ir.2025.16


               Conflicts of interest
               All authors declared that there are no conflicts of interest.

               Ethical approval and consent to participate
               Not applicable.

               Consent for publication
               Not applicable.


               Copyright
               © The Author(s) 2025.


