                                 Figure 5. Architecture of ResNeXt. (Figure is redrawn and quoted from Go et al.[31])
               2.2.4. MobileNet_v2
               MobileNet_v2 is a CNN architecture built on an inverted residual structure, in which shortcut connections
               link the narrow bottleneck layers, making it well suited to mobile and embedded vision systems. A bottleneck
               residual block is a type of residual block that creates a bottleneck using 1 × 1 convolutions, which reduces
               the number of parameters and matrix multiplications. The goal is to keep the residual blocks as small as
               possible so that depth can be increased while the parameter count is reduced. The model uses ReLU as the
               activation function. The architecture starts with a 32-filter convolutional layer, followed by 19
               bottleneck layers[24]. The architecture of MobileNet_v2 is shown in Figure 6.
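
               To illustrate the inverted residual idea described above, the following is a minimal PyTorch-style sketch of
               one bottleneck block. The class name InvertedResidual, the expansion ratio, and the channel sizes are
               illustrative assumptions, not the implementation used in this work.

```python
# Sketch of an inverted residual (bottleneck) block: a 1x1 expansion,
# a 3x3 depthwise convolution, and a 1x1 projection back to a narrow width,
# with a shortcut connecting the narrow ends. Names and sizes are assumptions.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # The shortcut is only used when spatial size and width are preserved.
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 "expansion" convolution widens the narrow bottleneck input.
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            # 3x3 depthwise convolution filters each channel separately.
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            # 1x1 "projection" convolution narrows back to the bottleneck width.
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out

# Example: a 56x56 feature map with 24 channels passes through one block.
x = torch.randn(1, 24, 56, 56)
print(InvertedResidual(24, 24)(x).shape)  # torch.Size([1, 24, 56, 56])
```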

               2.2.5. DenseNet
               DenseNet connects each layer to every other layer in a feed-forward fashion: each layer receives the
               feature maps of all preceding layers as input, and its own feature maps are passed on to every subsequent
               layer. This dense connectivity alleviates the vanishing-gradient problem, strengthens feature propagation,
               encourages feature reuse, and substantially reduces the number of parameters. The architecture of DenseNet
               is shown in Figure 7.
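
               The dense connectivity pattern can be sketched as below in PyTorch; the class name DenseBlock, the growth
               rate, and the tensor sizes are assumptions for illustration only.

```python
# Sketch of DenseNet-style dense connectivity: every layer receives the
# concatenated feature maps of all earlier layers. Names and sizes are assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Layer i sees the original input plus the outputs of layers 0..i-1.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth_rate, growth_rate, 3,
                          padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate all feature maps produced so far before the next layer.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

x = torch.randn(1, 24, 32, 32)
print(DenseBlock(24)(x).shape)  # torch.Size([1, 72, 32, 32]), i.e. 24 + 4*12
```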


               2.2.6. SqueezeNet
               SqueezeNet is a small CNN that shrinks the network by reducing the number of parameters while maintaining
               adequate accuracy. It introduces an entirely new building block, the Fire module: a squeeze convolution
               layer containing only 1 × 1 filters, which feeds into an expand layer containing a mix of 1 × 1 and
               3 × 3 convolution filters. SqueezeNet begins with a standalone convolution layer, passes through eight
               Fire modules, and ends with a final convolution layer. The architecture of SqueezeNet is shown in Figure 8.
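
               A minimal PyTorch-style sketch of a Fire module is given below; the class name Fire and the channel counts
               are illustrative assumptions (the example sizes follow the original SqueezeNet description, not this paper).

```python
# Sketch of SqueezeNet's Fire module: a 1x1 "squeeze" convolution feeding an
# "expand" stage that mixes 1x1 and 3x3 filters. Names and sizes are assumptions.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        # Squeeze: 1x1 convolutions reduce the channel count.
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, 1)
        # Expand: parallel 1x1 and 3x3 convolutions, concatenated on channels.
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, 1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example with 16 squeeze and 64 + 64 expand filters on a 55x55 feature map.
x = torch.randn(1, 96, 55, 55)
print(Fire(96, 16, 64, 64)(x).shape)  # torch.Size([1, 128, 55, 55])
```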