
Table 4. NN-based control in robotic manipulation - an overview

Approach                        Employed by…
Backpropagation                 Elsley [98] (1988), Huan et al. [109] (1988), Karakasoglu and Sundareshan [100] (1990) and Wang and Yeh [110] (1990)
CMAC learning                   Miller et al. [103-108] (1987-1990)
Adaptive NNs/PG table           Huan et al. [109] (1988) and He et al. [139] (2017)
NNs for flexible joints         Hui et al. [133] (2002), Gueaieb et al. [128] (2003), Chaoui et al. [131,132] (2004), Subudhi and Morris [134] (2006), Chaoui et al. [130] (2006), Chaoui and Gueaieb [126] (2008), He et al. [140] (2017) and Sun et al. [142] (2017)
NNs for multiple arms           Hou et al. [136] (2010), Li and Su [137] (2013) and Li et al. [138] (2014)
Feedforward and feedback RNNs   Chaoui et al. [135] (2009)
Hopfield net                    Xu et al. [101] (1990)
Comparison                      Wilhelmsen and Cotter [102] (1990)

NNs: Neural Networks; CMAC: cerebellar model articulation controller; RNNs: recurrent NNs.

                                               Figure 8. Classification of AI categories.

objectives. It combines function approximation and goal optimization to map states and actions to the rewards they yield. Combining NNs with RL algorithms has led to astounding breakthroughs such as DeepMind's AlphaGo, an algorithm that beat the world champions of the board game Go[147].
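To make this state-action-to-reward mapping concrete, the following sketch trains a small neural network as a Q-function approximator with temporal-difference (Q-learning) updates. The five-state "line world" environment, the network size, and all hyperparameters are illustrative assumptions chosen for brevity; they are not taken from the cited works.

import numpy as np

# Minimal sketch of value-based RL with a neural-network Q-function approximator.
# The toy "line world" environment, network size, and hyperparameters below are
# illustrative assumptions, not details taken from the surveyed works.

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 2         # states 0..4; actions: 0 = left, 1 = right
GAMMA, LR, EPS = 0.9, 0.05, 0.2    # discount factor, learning rate, exploration rate

# One-hidden-layer network mapping a one-hot state to a Q-value per action.
W1 = rng.normal(scale=0.1, size=(N_STATES, 16))
W2 = rng.normal(scale=0.1, size=(16, N_ACTIONS))

def q_values(state):
    x = np.eye(N_STATES)[state]    # one-hot encoding of the state
    h = np.tanh(x @ W1)            # hidden-layer activations
    return x, h, h @ W2            # input, hidden, Q-value estimates

def env_step(state, action):
    """Toy dynamics: move left/right along the line; reaching state 4 pays +1."""
    nxt = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(2000):
    s = 0
    for t in range(50):            # cap the episode length
        x, h, q = q_values(s)
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q))
        s_next, r, done = env_step(s, a)

        # Temporal-difference target: reward plus discounted best future value.
        _, _, q_next = q_values(s_next)
        target = r if done else r + GAMMA * np.max(q_next)
        td_error = target - q[a]

        # Backpropagate the squared TD error through both layers (gradient descent).
        grad_q = np.zeros(N_ACTIONS)
        grad_q[a] = -td_error
        grad_W2 = np.outer(h, grad_q)
        grad_h = W2 @ grad_q
        grad_pre = grad_h * (1.0 - h ** 2)      # derivative of tanh
        grad_W1 = np.outer(x, grad_pre)
        W1 -= LR * grad_W1
        W2 -= LR * grad_W2

        if done:
            break
        s = s_next

print("Greedy policy (0 = left, 1 = right):",
      [int(np.argmax(q_values(s)[2])) for s in range(N_STATES)])

After training, the printed greedy policy should select the "right" action in every non-terminal state, since the only reward in this toy setup lies at the right end of the line.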

As mentioned earlier, RL is a powerful technique for achieving optimal control in robotic systems. Traditional optimal control has the drawback of requiring complete knowledge of the system's dynamics. Furthermore, because the design is typically done offline, it cannot cope with dynamics that change during operation, as in service robots that must execute a variety of tasks in an unstructured and dynamic environment. The first chapter of this paper has shown that adaptive control, on