
de Silva. Intell Robot 2021;1(1):3-17     https://dx.doi.org/10.20517/ir.2021.01                                                                    Page 14

               that information, along with desired inputs, is used to generate control signals that can reduce errors due to
               these unknown inputs or variations in them. The reason for calling this method feedforward control stems
               from the fact that the associated measurement and control (and compensation) take place in the forward
               path of the control system. Both feedback and feedforward schemes may be used in the same control
               system. In some robotic applications, control inputs are computed using the desired outputs and accurate
               dynamic models for the robots, and the computed inputs are used for control purposes. This is the “inverse
               model” (or “inverse dynamics”) approach, because the input is computed from the desired output
               through an inverse model. In some literature, this method is also known as feedforward control. To avoid
               confusion, however, it is appropriate to denote this method as computed-input control.
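The computed-input idea can be sketched for a hypothetical single-link robot with dynamics m·l²·q̈ + m·g·l·sin(q) = τ; the link parameters and the desired trajectory below are assumed purely for illustration, not taken from the article. The control torque is computed from the desired trajectory and the inverse dynamic model alone, without feedback:

```python
import math

# Hypothetical 1-DOF robot link: m*l^2 * qdd + m*g*l*sin(q) = tau
M, L, G = 2.0, 0.5, 9.81  # assumed mass (kg), link length (m), gravity (m/s^2)

def desired(t):
    """Assumed sinusoidal desired joint trajectory q_d(t) and its derivatives."""
    q = 0.5 * math.sin(t)
    qd = 0.5 * math.cos(t)
    qdd = -0.5 * math.sin(t)
    return q, qd, qdd

def computed_input_torque(t):
    """Inverse-dynamics (computed-input) torque, from the desired outputs alone."""
    q_d, _, qdd_d = desired(t)
    return M * L**2 * qdd_d + M * G * L * math.sin(q_d)

print(computed_input_torque(0.0))  # 0.0 (q_d = 0 and qdd_d = 0 at t = 0)
```

Note that the measured output never enters the torque computation; accuracy therefore depends entirely on the fidelity of the dynamic model, which is why this scheme is usually combined with a feedback loop in practice.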


               Since the overall response of a plant (e.g., a robot) depends on its individual modes, it should be possible to
               control a robot by controlling its modes. This is the basis of modal control. A mode is determined by the
               corresponding eigenvalue and eigenvector. In view of this, a popular approach to modal control is pole
               placement (or pole assignment). In this method of controller design, the objective is to select a feedback
               controller that will make the poles of the closed-loop system take up a set of desired values. This approach
               uses a “linearized” model of the robot.
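As a minimal sketch of pole placement, consider a linearized double-integrator joint model x₁' = x₂, x₂' = u (the model and the desired pole locations below are assumed for illustration). With state feedback u = −k₁x₁ − k₂x₂, the closed-loop characteristic polynomial is s² + k₂s + k₁, so the gains are read off directly by matching it to the desired polynomial:

```python
import math

# Desired closed-loop poles (an illustrative choice): s = -2 and s = -3,
# i.e., characteristic polynomial (s+2)(s+3) = s^2 + 5s + 6.
p1, p2 = -2.0, -3.0
k1 = p1 * p2          # = 6, matches the constant term
k2 = -(p1 + p2)       # = 5, matches the s term

# Verify: eigenvalues of A_cl = [[0, 1], [-k1, -k2]] via trace/determinant.
tr, det = -k2, k1
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # [-3.0, -2.0]: the closed-loop poles land where requested
```

For higher-order systems the same matching is done numerically (e.g., with Ackermann's formula), but the principle is identical: feedback gains are chosen so the closed-loop poles take up the desired values.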


               As we saw, a robot can be controlled using a feedback control law so as to satisfy some performance
               requirements. In optimal control, the objective is to optimize a suitable objective function (e.g., maximize a
               performance index or minimize a cost function) by using an appropriate feedback control law[14]. A
               particularly popular performance index is the infinite-time quadratic integral of the state and input
               variables, and a popular control law is linear constant-gain feedback of the system states. The associated
               controller is known as the linear quadratic regulator (LQR). Linear quadratic Gaussian (LQG) control is an
               optimal control technique that is intended for a linear system with random input disturbances and output
               (measurement) noise. An LQR controller together with a Kalman filter is used in this approach.
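For a scalar plant the LQR computation can be carried out in closed form, which makes the idea concrete; the plant and weights below are assumed for illustration. With plant x' = a·x + b·u and cost J = ∫(q·x² + r·u²)dt, the algebraic Riccati equation 2aP − (b²/r)P² + q = 0 yields the optimal constant gain K = bP/r:

```python
import math

# Assumed scalar plant x' = a*x + b*u (unstable: a > 0) and quadratic weights.
a, b = 1.0, 1.0
q, r = 1.0, 1.0

# Positive root of the scalar algebraic Riccati equation 2aP - (b^2/r)P^2 + q = 0.
P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
K = b * P / r              # optimal constant-gain state feedback u = -K*x
a_cl = a - b * K           # closed-loop pole: -sqrt(a^2 + q*b^2/r)
print(K, a_cl)             # gain ~2.414; closed-loop pole ~-1.414 (stable)
```

Note that the unstable open-loop pole at +1 is reflected to a stable closed-loop pole at −√2; in the matrix case the same structure holds with P the solution of the matrix Riccati equation. An LQG controller would add a Kalman filter to estimate x from noisy measurements before applying u = −Kx̂.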

               For servo control to be effective, nonlinearities and dynamic coupling of the robot must be compensated
               faster than the control bandwidth at the servo level. One way of accomplishing this is by implementing a
               linearizing and decoupling controller inside the servo loops. This technique is termed feedback
               linearization.
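A minimal sketch of feedback linearization, again for a hypothetical single-link robot m·l²·q̈ + m·g·l·sin(q) = τ (all numerical values assumed): the controller cancels the gravity nonlinearity inside the servo loop, leaving linear error dynamics e″ + k_d·e′ + k_p·e = 0 that a linear servo then shapes:

```python
import math

M, L, G = 2.0, 0.5, 9.81       # assumed link mass, length, gravity
KP, KD = 25.0, 10.0            # places the linearized error poles at -5, -5

def control(q, qd, q_des, qd_des, qdd_des):
    # Linear servo on the exactly linearized system qdd = v ...
    v = qdd_des + KD * (qd_des - qd) + KP * (q_des - q)
    # ... wrapped in the nonlinearity-cancelling transformation.
    return M * L**2 * v + M * G * L * math.sin(q)

# Regulate to q_des = 0 from q(0) = 0.5 rad (simple Euler simulation).
q, qd, dt = 0.5, 0.0, 0.001
for _ in range(5000):          # 5 s of simulated time
    tau = control(q, qd, 0.0, 0.0, 0.0)
    qdd = (tau - M * G * L * math.sin(q)) / (M * L**2)
    q, qd = q + dt * qd, qd + dt * qdd
print(abs(q))                  # tracking error driven essentially to zero
```

Because the cancellation here is exact, the closed loop behaves like a linear double integrator under PD control; in practice model errors leave a residual nonlinearity, which is one motivation for combining this scheme with robust or adaptive control.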


               An adaptive control system is a feedback control system in which the values of some or all of the controller
               parameters are modified (adapted) during the system operation (in real-time) on the basis of some
               performance measure when the response (output) requirements are not satisfied. Many criteria can be
               employed for modifying the parameter values of a controller. Self-tuning control falls into the same
               category. Model identification or estimation may be required for adaptive control, which may be considered
               to be a preliminary step of “learning”. A neural network may be used for this purpose. In a learning system,
               control decisions are made using the cumulative experience and knowledge gained over a period of time.
               Furthermore, the definition of learning implies that a learning controller will “remember” and improve its
               performance with time. This is an evolutionary process that is true for intelligent controllers but not
               generally for adaptive controllers. In model-referenced adaptive control, the same reference input that is
               applied to the physical system is applied to a reference model as well. The difference between the response
               of the physical system and the output from the reference model is the error. The ideal objective is to make
               this error zero at all times. Then the system will perform just like the reference model. The error signal is
               used by the adaptation mechanism to determine the necessary modifications to the values of the controller
               parameters in order to achieve this objective.
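The model-referenced scheme can be sketched for a first-order plant with a Lyapunov-style adaptation mechanism; the plant, reference model, and adaptation gain below are all assumed for illustration. The same step reference drives both the plant and the reference model, and the error between their responses adjusts the controller gains:

```python
# Plant:     x'  = a_p*x + b_p*u   (a_p, b_p unknown to the controller)
# Reference: xm' = a_m*xm + b_m*r  (desired behavior)
# Control:   u = th_r*r + th_x*x, gains adapted from the error e = x - xm.
a_p, b_p = 1.0, 1.0            # assumed unstable plant
a_m, b_m = -2.0, 2.0           # assumed stable reference model
gamma = 2.0                    # assumed adaptation gain
x = xm = 0.0
th_r = th_x = 0.0              # adjustable controller parameters
r, dt = 1.0, 0.001             # unit-step reference input

for _ in range(20000):         # 20 s of simulated time
    u = th_r * r + th_x * x
    e = x - xm                 # response error w.r.t. the reference model
    # Adaptation mechanism: drive e toward zero (sign(b_p) assumed positive).
    th_r += dt * (-gamma * e * r)
    th_x += dt * (-gamma * e * x)
    # Euler integration of plant and reference model
    x += dt * (a_p * x + b_p * u)
    xm += dt * (a_m * xm + b_m * r)
print(abs(x - xm))             # error driven toward zero: plant mimics the model
```

The controller never learns a_p or b_p explicitly; it adjusts th_r and th_x until the plant's response is indistinguishable from the reference model's, which is exactly the ideal objective stated above.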