x_i = accuracy at the i-th epoch
x̄ = mean of the accuracies
n = total number of epochs (e.g., 20)
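As a concrete illustration, the sketch below computes the two quantities these symbols define, assuming (since the formula itself sits on the previous page) that the reported statistic is the standard deviation of the per-epoch accuracies; the accuracy values are hypothetical stand-ins, not results from the paper.

```python
import math

# Hypothetical per-epoch validation accuracies (x_1 ... x_n); stand-ins,
# not values taken from the paper.
accuracies = [0.78, 0.81, 0.83, 0.84, 0.85]

n = len(accuracies)              # n: total number of epochs
mean = sum(accuracies) / n       # x-bar: mean of the accuracies
# Standard deviation of the accuracies over the n epochs.
std = math.sqrt(sum((x_i - mean) ** 2 for x_i in accuracies) / n)

print(f"mean = {mean:.4f}, std = {std:.4f}")
```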

               4. DISCUSSION
From the results in Table 2, we can observe that the ResNeXt architecture performs better than all the other architectures discussed in this paper. The MobileNet_v2 architecture falls behind ResNeXt by only 0.1% in accuracy. Considering the time factor, MobileNet_v2 trains about a minute faster than ResNeXt. When the dataset grows considerably larger, this difference in training time will increase, giving the MobileNet_v2 architecture the advantage.
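For context, a minimal sketch of how these two pretrained backbones can be set up for transfer learning with torchvision; only the classification head is replaced, and the 7-class output size is an assumption standing in for the WaDaBa label set, not a detail taken from the paper.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # assumption: stand-in for the WaDaBa class count

# ImageNet-pretrained backbones; swapping only the final classifier layer
# is the usual deep-transfer-learning setup for this kind of benchmark.
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier[1] = nn.Linear(mobilenet.last_channel, NUM_CLASSES)

resnext = models.resnext50_32x4d(pretrained=True)
resnext.fc = nn.Linear(resnext.fc.in_features, NUM_CLASSES)
```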

The validation loss of the AlexNet architecture (Table 3) and the SqueezeNet architecture (Table 4) does not drop significantly compared to the other models used in this research, and it can be observed from Figure 10 and Figure 11 that there is a diverging gap between the training loss and validation loss curves for both models, which indicates overfitting. The small number of images in the dataset, spread over multiple classes, causes this effect on the AlexNet architecture. Similar results can be observed for SqueezeNet (Table 4 and Figure 11), whose architecture is similar to AlexNet's. Table 5 and Figure 12 present the training and validation accuracies, the loss values, and their corresponding graphs for the pre-trained ResNet-50 model. Table 6 and Figure 13 show the training and validation accuracy and loss values and their plots for the ResNeXt architecture. Similarly, Table 7 and Figure 14 give the accuracies and corresponding graphs for MobileNet_v2. The DenseNet architecture, presented in Table 8 and Figure 15, takes the longest time to train and achieves a good accuracy of 85.58%, comparable to the ResNet-50 architecture's 85.54%. The five-fold cross-validation approach tests every data point in the dataset and makes the reported accuracy more reliable.
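A minimal sketch of such a five-fold split, using scikit-learn's KFold on a placeholder index array rather than the actual WaDaBa image list, shows why every data point is tested exactly once:

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder sample indices; in practice these would index the WaDaBa images.
samples = np.arange(100)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(samples), start=1):
    # Across the five folds, each sample appears in exactly one validation
    # split, so every data point gets evaluated once.
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")
```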


Figure 16 shows the ROC curves and AUC scores for all the models in this paper. The SqueezeNet and AlexNet architectures display the lowest AUC scores, while MobileNet_v2, ResNet-50, ResNeXt, and DenseNet have comparable AUC scores. From the ROC curves, it can be inferred that the models can correctly distinguish between the types of plastics in the dataset. The ResNeXt architecture achieves the largest AUC.
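As a sketch of how such multi-class ROC/AUC scores can be computed, the snippet below applies scikit-learn's one-vs-rest AUC to randomly generated labels and probabilities; the 4-class size and all values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import label_binarize

# Illustrative ground truth and predicted class probabilities for a
# 4-class problem; stand-ins for the plastic classes and model outputs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)
y_score = rng.dirichlet(np.ones(4), size=200)  # each row sums to 1

# Macro-averaged one-vs-rest AUC, a common multi-class summary score.
auc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
print(f"macro one-vs-rest AUC = {auc:.3f}")

# Per-class ROC curve points (FPR/TPR), as drawn in ROC plots.
y_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
fpr, tpr, _ = roc_curve(y_bin[:, 0], y_score[:, 0])  # class 0 vs rest
```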


               5. CONCLUSION
When we compare our findings to previous studies in the field, we find that incorporating transfer learning reduces the total training time significantly. If the WaDaBa dataset is enlarged in the future, it will be simple to train the existing models and attain improved accuracy in a short amount of time. This paper has benchmarked six state-of-the-art models on the WaDaBa plastic dataset by integrating deep transfer learning, and it is intended as a baseline for future developments on the WaDaBa dataset. The paper focuses on supervised learning for plastic waste classification; unsupervised learning procedures are one area on which it has placed less focus. The latter might be beneficial for pre-training or for enhancing the supervised classification models via pre-trained feature selection. Pattern decomposition methods
like nonnegative matrix factorization[41,42] and ensemble joint sparse low-rank matrix decomposition[43] are