1. INTRODUCTION
By running a numerical model of a robotic mechanism and its interactions with its surroundings, one can design a control algorithm that delivers torque (input) signals to the actuators so that the mechanism executes the desired motion. Since robotic systems are highly nonlinear, control design is usually a difficult step. Figure 1 illustrates a simplified representation of a two-link robot manipulator. The dynamic equations of such a robot contain configuration-dependent terms that change as the robot moves, so the model itself varies during a task. Traditional control techniques handle this by decomposing the nonlinear mechanism into linear subsystems, an approximation that is reasonable for low-speed motions but loses its effectiveness at high speeds. For these reasons, adaptive control strategies were first considered.
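For reference, the rigid-body dynamics of a manipulator such as the two-link arm in Figure 1 are commonly written in the standard textbook form (a generic model, included here only for illustration, not an equation reproduced from this paper):

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau,

where q is the vector of joint angles, M(q) the configuration-dependent inertia matrix, C(q,\dot{q}) the Coriolis/centrifugal terms, g(q) the gravity vector, and \tau the vector of joint torques. Because every term depends on q and \dot{q}, a controller linearized about one configuration degrades as the robot moves away from it, which is precisely the difficulty described above.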
A robot together with its controller forms a complete system. Because the robotic mechanism must be reconfigured whenever the functional requirements change, the controller has to adapt to these reconfigurations. Unlike a non-adaptive controller, an adaptive controller can operate without relying on prior data from the system, since it continuously adjusts itself to the altered states. This is precisely what makes adaptive control “almost perfect” for systems operating in unpredictable surroundings, where many possible disturbances may change the system parameters at any time.
In the early years, there was considerable interest in research and books on adaptive control[1-5], most of which considered continuous-time systems. Since the 1970s, researchers have addressed the realization of adaptive control in digital systems[6-8]. Multiple surveys show that adaptive control of systems with discrete-time signals has been studied for quite some time, and many applications of general adaptive control have followed. There are two fundamental approaches within adaptive control theory. The first is Learning Model Adaptive Control, which includes the well-known self-tuning adaptive control technique. In this approach, an improved model of the plant is obtained by on-line parameter estimation and then used in the feedback control law. The second approach is Model Reference Adaptive Control (MRAC), in which the controller is adjusted so that the behavior of the closed-loop system matches that of a preselected reference model according to some criterion[9].
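To make the MRAC idea concrete, the sketch below implements the classical MIT-rule adaptation of a single feedforward gain for a first-order plant. The plant, reference model, and gain values are illustrative assumptions for this sketch, not parameters taken from this paper.

```python
# Minimal MRAC sketch using the classical MIT rule (gradient adaptation).
# All numerical values (a, k, k0, gamma, the square-wave reference) are
# illustrative assumptions, not values from the surveyed literature.

dt, T = 0.001, 20.0
steps = int(T / dt)

a, k = 1.0, 2.0   # plant:           dy/dt  = -a*y  + k*u   (gain k unknown to the controller)
k0 = 1.0          # reference model: dym/dt = -a*ym + k0*r
gamma = 0.5       # adaptation gain
y = ym = theta = 0.0

for i in range(steps):
    t = i * dt
    r = 1.0 if (t % 10.0) < 5.0 else -1.0    # square-wave reference command

    u = theta * r                            # adjustable feedforward controller
    e = y - ym                               # tracking error w.r.t. the reference model

    # MIT rule: move theta along the negative gradient of (1/2)*e^2;
    # the unknown positive plant gain is absorbed into gamma.
    theta += dt * (-gamma * e * ym)

    # Euler integration of plant and reference model
    y  += dt * (-a * y  + k  * u)
    ym += dt * (-a * ym + k0 * r)

print(f"adapted theta = {theta:.3f} (ideal matching value k0/k = {k0 / k:.3f})")
```

As theta approaches k0/k, the closed-loop response converges toward that of the reference model. In practice, Lyapunov-based adaptation laws are usually preferred over the MIT rule because they come with stability guarantees.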
Due to the limitations of adaptive control in the presence of bounded disturbances, many researchers turned to “Algorithm Modification” approaches in the 1980s. Typically, these approaches alter least-squares adaptation by bounding the error or the parameters, or by employing a first-order modification of the least-squares adaptation algorithm. When the observed error is not attributable to errors in the parameter estimates, these strategies effectively switch off or limit the effect of parameter adaptation. The Algorithm Modification techniques essentially perform the same function as input-output rule-based approaches, but they attempt to have the adaptation algorithm monitor its own level of certainty. The second section of this paper presents more details on the modifications most widely used by control researchers, such as the dead-zone modification, σ-modification, and ϵ-modification. Unfortunately, these modifications often require a priori knowledge of bounds on the parameters, perturbations, and noise[10]. Furthermore, they often improve robustness at the expense of performance.
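As an illustration of how such modifications change the update law, the fragment below contrasts a plain gradient adaptation step with σ-modified and dead-zone variants. The symbols (regressor phi, tracking error e, gains gamma and sigma, dead-zone width e0) follow common textbook notation and are assumptions for this sketch, not notation defined in this paper.

```python
def gradient_update(theta, phi, e, gamma, dt):
    """Plain gradient adaptation: theta_dot = -gamma * phi * e.
    Bounded disturbances can make the estimate theta drift without bound."""
    return theta + dt * (-gamma * phi * e)

def sigma_mod_update(theta, phi, e, gamma, sigma, dt):
    """sigma-modification: theta_dot = -gamma * (phi * e + sigma * theta).
    The leakage term -gamma*sigma*theta pulls the estimate toward zero,
    keeping it bounded at the cost of a small bias (robustness vs. accuracy)."""
    return theta + dt * (-gamma * (phi * e + sigma * theta))

def deadzone_update(theta, phi, e, gamma, e0, dt):
    """Dead-zone modification: freeze adaptation whenever |e| is small enough
    to be explained by disturbances or noise of assumed magnitude e0."""
    if abs(e) <= e0:
        return theta
    return theta + dt * (-gamma * phi * e)
```

All three variants require the designer to choose sigma or the dead-zone width e0 from assumed bounds on the disturbance, which is exactly the a priori knowledge mentioned above.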
In a control engineering sense, AI-based and classical control approaches are two sides of the same coin. The limitations of adaptive control have therefore driven many researchers to consider AI-based controllers. In the 1990s, the field of neural networks was investigated extensively in general, and for the control of dynamical systems in particular. The control problem can be formulated as a machine learning (ML) problem, which is how ML can be combined with control theory. One of the fundamentally new approaches