Page 94 - Read Online
P. 94

Bah et al. Intell Robot 2022;2(1):72­88  I http://dx.doi.org/10.20517/ir.2021.16            Page 88



               26. Liu Y, Chen Y, Wang J, Niu S, Liu D, Song H. Zero­bias deep neural network for quickest RF signal surveillance. arXiv preprint
                 arXiv:2110.05797, 2021.
               27. Hanin B, Rolnick D. How to start training: The effect of initialization and architecture. arXiv preprint arXiv:1803.01719, 2018.
               28. Datta L. A survey on activation functions and their relation with xavier and he normal initialization. arXiv preprint arXiv:2004.06632,
                 2020.
               29. Bjorck J, Gomes C, Selman B, Weinberger KQ. Understanding batch normalization. arXiv preprint arXiv:1806.02375, 2018.
               30. Santurkar S, Tsipras D, Ilyas A, Mądry A. How does batch normalization help optimization?. In: Proceedings of the 32nd international
                 conference on neural information processing systems. 2018, pp. 2488­98.
               31. You H, Yu L, Tian S, et al. MC­Net: Multiple max­pooling integration module and cross multi­scale deconvolution network. Knowledge­
                 Based Systems 2021;231:107456. DOI
               32. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov R. Improving neural networks by preventing co­adaptation of feature
                 detectors. CoRR 2012;abs/1207.0580. Available from http://arxiv.org/abs/1207.0580
               33. Yarin G, Jiri H, Alex K. Concrete dropout. arXiv preprint arXiv:1705.07832, 2017.
               34. Chen H, Chen A, Xu L, et al. A deep learning CNN architecture applied in smart near­infrared analysis of water pollution for agricultural
                 irrigation resources. Agricultural Water Management 2020;240:106303. DOI
               35. Goodfellow IJ, Erhan D, Luc Carrier P, et al. Challenges in representation learning: a report on three machine learning contests. Neural
                 Netw 2015;64:59­63. DOI
               36. Song L, Gong D, Li Z, Liu C, Liu W. Occlusion robust face recognition based on mask learning with pairwise differential siamese network.
                 In: Proceedings of the IEEE/CVF International Conference on Computer Vision. IEEE, 2019, pp. 773­82.
               37. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data 2019;6:1­48. DOI
               38. Gao X, Saha R, Prasad MR, et al. Fuzz testing based data augmentation to improve robustness of deep neural networks. In: 2020
                 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEE, 2020, pp. 1147­58.
               39. Halgamuge MN, Daminda E, Nirmalathas A. Best optimizer selection for predicting bushfire occurrences using deep learning. Nat
                 Hazards 2020;103:845­60. DOI
               40. Zhang Z, Sabuncu MR . Generalized cross entropy loss for training deep neural networks with noisy labels. In: 32nd Conference on
                 Neural Information Processing Systems (NeurIPS). 2018.
               41. Han Z. Predict final total mark of students with ANN, RNN and Bi­LSTM. Available from http://users.cecs.anu.edu.au/~Tom.Gedeon/
                 conf/ABCs2020/paper/ABCs2020_paper_v2_135.pdf.
               42. Li M, Soltanolkotabi M, Oymak S. Gradient descent with early stopping is provably robust to label noise for overparameterized neural
                 networks. In: International conference on artificial intelligence and statistics. PMLR, 2020, pp. 4313­24.
               43. Lucey P, Cohn JF, Kanade T, et al. The extended Cohn­Kanade dataset (CK+): A complete dataset for action unit and emotion­specified
                 expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition ­ Workshops, CVPRW 2010. IEEE,
                 2010, pp. 94­101. DOI
               44. Cheng S, Zhou G. Facial expression recognition method based on improved VGG convolutional neural network. Int J Patt Recogn Artif
                 Intell 2020;34:2056003. DOI
   89   90   91   92   93   94   95   96   97   98   99