Harib et al. Intell Robot 2022;2(1):37-71  https://dx.doi.org/10.20517/ir.2021.19                                                              Page 39

                                         Figure 1. Simple schematic of a two-link robot manipulator.

is the PILCO approach[11].

Artificial Neural Networks (ANNs) have long been explored in the hope of obtaining human-like
performance in speech and image processing. Several types of Neural Networks (NNs) appear to be
promising candidates for control system applications, among them multilayer NNs (MLNs), recurrent NNs
(RNNs), and the cerebellar model articulation controller (CMAC). The choice of which NN to employ and
which training technique to use is crucial, and it changes according to the application. The type of NN
most commonly used in control systems is the feedforward MLN, in which no information is fed back
during operation; feedback information is, however, available during training. Typically, supervised
learning methods are used, in which the network is trained to learn input-output patterns presented to
it, and versions of the backpropagation (BP) algorithm adjust the NN's weights during training. More
details about NNs, for dynamical systems in general and for robotics in particular, are discussed in
section 3 of this work.
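The supervised training scheme described above can be illustrated with a minimal sketch (not taken from the paper): a feedforward MLN with one hidden layer is trained by backpropagation on the XOR input-output patterns, a small hypothetical task chosen only because a single-layer network cannot learn it. The network size, learning rate, and iteration count are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper) of a feedforward
# multilayer NN trained with supervised backpropagation on a set of
# input-output patterns presented to it.
import numpy as np

rng = np.random.default_rng(0)

# XOR patterns: the input-output pairs the network must learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units (assumed size); small random weights.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                      # assumed learning rate
for _ in range(10000):
    # Forward pass: during operation no information is fed back.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: the error is fed back only during training,
    # and the deltas adjust the NN's weights (backpropagation).
    dy = (y - Y) * y * (1 - y)          # output-layer delta
    dh = (dy @ W2.T) * h * (1 - h)      # hidden-layer delta
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

print(np.round(y.ravel(), 2))  # predictions for the four XOR patterns
```

The separation of the two phases is the point of the sketch: the backward pass (the "feedback during training") disappears entirely once the trained weights are frozen for operation.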


In robotics, it all boils down to making the actuator perform the desired action. Basic control theory
tells us that the transfer function determines the relationship between the output and the input of a
given system, or plant. While purely control-based robots use the system model to define their
input-output relations, AI-based robots may or may not use the system model; instead, they can
manipulate the robot based on the experience gained with the system during training, and possibly
refine it in real time as well.
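How a plant model fixes the input-output relation can be shown with a minimal sketch (a hypothetical example, not from the paper): a first-order plant with transfer function G(s) = 1/(τs + 1) is simulated by forward-Euler integration, so a unit-step input yields the familiar exponential rise toward the steady-state value. The time constant and step size are assumed values.

```python
# Minimal sketch (hypothetical plant, not from the paper): the plant
# model defines the input-output relation.  First-order plant
# G(s) = 1 / (tau*s + 1), i.e. dy/dt = (u - y) / tau, driven by a
# unit-step input and integrated with the forward-Euler method.
tau = 0.5     # plant time constant (assumed)
dt = 0.001    # integration step (assumed)
y = 0.0       # plant output, initially at rest

for _ in range(int(3.0 / dt)):   # simulate 3 s of response
    u = 1.0                      # unit-step input
    y += dt * (u - y) / tau      # Euler update of dy/dt = (u - y)/tau

# Steady-state output of the step response is 1.0; after 3 s
# (six time constants) the output is within about 1 % of it.
print(round(y, 3))
```

A purely control-based design would exploit this known G(s) when shaping the input; an experience-based design would instead adjust its commands from observed responses without ever writing the model down.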

Reinforcement learning (RL) is a type of experience-based learning that can be used in robotics when
online learning without knowledge of the environment is necessary. The controller can learn, for each of
its states, which of its possible actions results in the best performance for a particular task. If a
mobile robot collides with an obstacle, for example, it learns that this was a poor action, whereas if
it reaches the objective, it learns that this was a good action. Such contextual feedback is called
reinforcement, or reward. The goal of the controller is to maximize its expected future rewards over state-action pairs,