Then, the aforementioned optimization problem is converted into the minimization of the upper bound of the
infinite-horizon objective function:
$$\min\ \gamma \qquad \text{s.t.}\ (25). \tag{27}$$
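To make the role of the upper bound concrete, consider a delay-free nominal closed loop $x(k+1) = (A+BK)x(k)$: any $P \succ 0$ satisfying $(A+BK)^T P (A+BK) - P \preceq -(Q_1 + K^T R K)$ gives $J \le V(x(0)) = x^T(0)Px(0)$, which is precisely the kind of bound that $\gamma$ captures in Equation (27). A minimal numerical sketch in Python (all matrices below are hypothetical placeholders, not data from this paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical nominal data (illustrative only; not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[-2.0, -3.0]])     # some stabilizing gain
Q1 = np.eye(2)                   # state weight
R = np.array([[0.5]])            # input weight

Acl = A + B @ K
W = Q1 + K.T @ R @ K             # stage-cost matrix under u = K x

# P solves Acl' P Acl - P = -(W + delta*I); the slack delta > 0 makes
# V(x0) = x0' P x0 a strict upper bound on the infinite-horizon cost.
delta = 0.1
P = solve_discrete_lyapunov(Acl.T, W + delta * np.eye(2))

x = np.array([1.0, -1.0])
bound = x @ P @ x                # V(x(0))

# Accumulate the infinite-horizon cost by simulation.
cost = 0.0
for _ in range(500):
    cost += x @ W @ x
    x = Acl @ x

print(f"J ~= {cost:.4f}  <=  V(x0) = {bound:.4f}")
```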
A state feedback law $u(k) = Kx(k)$ is applied to minimize the upper bound of the performance function $J$, and the quadratic function is chosen as
$$V(x(k)) = x^T(k)Px(k) + \sum_{i=1}^{d} x^T(k-i)\,S\,x(k-i),$$
where $P = \gamma Q^{-1}$. Considering the inequality [Equation (24)] and substituting the closed-loop dynamics $x(k+1) = [A(k)+\Delta A(k)+B(k)K]x(k) + [A_d(k)+\Delta A_d(k)]x(k-d)$ yields
$$\begin{aligned}
& x^T(k)\big\{[A(k)+\Delta A(k)+B(k)K]^T P\,[A(k)+\Delta A(k)+B(k)K] - P + S + Q_1 + K^T R K\big\}x(k) \\
&\ + x^T(k)[A(k)+\Delta A(k)+B(k)K]^T P\,[A_d(k)+\Delta A_d(k)]\,x(k-d) \\
&\ + x^T(k-d)[A_d(k)+\Delta A_d(k)]^T P\,[A(k)+\Delta A(k)+B(k)K]\,x(k) \\
&\ + x^T(k-d)\big\{[A_d(k)+\Delta A_d(k)]^T P\,[A_d(k)+\Delta A_d(k)] - S\big\}x(k-d) \le 0,
\end{aligned}\tag{28}$$
which is equivalent to
$$\begin{bmatrix} x(k) \\ x(k-d) \end{bmatrix}^T
\begin{bmatrix} \Pi_1 & \Pi_2 \\ * & \Pi_3 \end{bmatrix}
\begin{bmatrix} x(k) \\ x(k-d) \end{bmatrix} \le 0, \tag{29}$$
where
$$\begin{aligned}
\Pi_1 &= [A(k)+\Delta A(k)+B(k)K]^T P\,[A(k)+\Delta A(k)+B(k)K] - P + S + Q_1 + K^T R K, \\
\Pi_2 &= [A(k)+\Delta A(k)+B(k)K]^T P\,[A_d(k)+\Delta A_d(k)], \\
\Pi_3 &= [A_d(k)+\Delta A_d(k)]^T P\,[A_d(k)+\Delta A_d(k)] - S.
\end{aligned}\tag{30}$$
Here, $*$ denotes the corresponding symmetric block, and $I$ is an identity matrix with appropriate dimensions. Since Equation (29) must hold for arbitrary $x(k)$ and $x(k-d)$, the inequality [Equation (28)] can be guaranteed as long as
$$\begin{bmatrix} \Pi_1 & \Pi_2 \\ * & \Pi_3 \end{bmatrix} \le 0 \tag{31}$$
holds.
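As a quick numerical sanity check of the equivalence between Equations (28) and (29), the blocks of Equation (30) can be assembled for arbitrary matrices and the two quadratic forms compared; the values below are illustrative placeholders with the uncertainties set to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 1

# Hypothetical data (illustrative only); Delta A, Delta A_d taken as zero.
A  = rng.normal(size=(n, n)) * 0.3
Ad = rng.normal(size=(n, n)) * 0.2
B  = rng.normal(size=(n, m))
K  = rng.normal(size=(m, n))
P  = 2.0 * np.eye(n)
S  = 0.5 * np.eye(n)
Q1 = np.eye(n)
R  = 0.5 * np.eye(m)

Acl = A + B @ K                  # A(k) + Delta A(k) + B(k)K with Delta A = 0

# Blocks of Equation (30).
Pi1 = Acl.T @ P @ Acl - P + S + Q1 + K.T @ R @ K
Pi2 = Acl.T @ P @ Ad
Pi3 = Ad.T @ P @ Ad - S
Pi  = np.block([[Pi1, Pi2], [Pi2.T, Pi3]])

# Left-hand side of Equation (28) for a random pair (x(k), x(k-d)).
x, xd = rng.normal(size=n), rng.normal(size=n)
lhs28 = x @ Pi1 @ x + x @ Pi2 @ xd + xd @ Pi2.T @ x + xd @ Pi3 @ xd

xi = np.concatenate([x, xd])
lhs29 = xi @ Pi @ xi             # quadratic form of Equation (29)

print(np.isclose(lhs28, lhs29))  # True: (28) and (29) agree
```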
Here, we introduce a lemma for use in the following sections.
Lemma 3.3 [37]: For matrices $\Gamma$, $D$, and $E$ with appropriate dimensions and $\Gamma^T = \Gamma$, the following inequality
$$\Gamma + D F(k) E + E^T F^T(k) D^T \le 0 \tag{32}$$
holds for all $F(k)$ satisfying $F^T(k)F(k) \le I$ if and only if there is a positive scalar $\varepsilon$ such that
$$\Gamma + \varepsilon D D^T + \frac{1}{\varepsilon} E^T E \le 0. \tag{33}$$
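The sufficiency direction of Lemma 3.3 is easy to probe numerically: once Equation (33) holds for some $\varepsilon > 0$, Equation (32) should hold for every admissible $F(k)$. A small randomized check, with arbitrary illustrative $\Gamma$, $D$, $E$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Arbitrary illustrative data: Gamma symmetric and sufficiently negative.
Gamma = -4.0 * np.eye(n)
D = rng.normal(size=(n, n)) * 0.5
E = rng.normal(size=(n, n)) * 0.5
eps = 1.0

def is_nsd(M, tol=1e-9):
    """Negative semidefinite test via the symmetric part's top eigenvalue."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) <= tol

# Check Equation (33): Gamma + eps*D*D' + (1/eps)*E'*E <= 0.
lhs33 = Gamma + eps * D @ D.T + (1.0 / eps) * E.T @ E
assert is_nsd(lhs33), "pick a more negative Gamma or another eps"

# Then Equation (32) holds for every F with F'F <= I; spot-check random F.
for _ in range(1000):
    F = rng.normal(size=(n, n))
    F /= max(1.0, np.linalg.norm(F, 2))  # enforce ||F||_2 <= 1, i.e. F'F <= I
    lhs32 = Gamma + D @ F @ E + E.T @ F.T @ D.T
    assert is_nsd(lhs32)
print("Equation (32) verified for 1000 random admissible F")
```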
Recalling the norm-bounded uncertainty structure $\Delta A(k) = H_1 F(k) E_1$ and $\Delta A_d(k) = H_2 F(k) E_2$ with $F^T(k)F(k) \le I$, taking the Schur complement of Equation (31), performing the congruence transformation $\mathrm{diag}(Q, Q, I)$ with $P = \gamma Q^{-1}$, and applying Lemma 3.3 once for each uncertainty channel, the following sufficient condition can be derived to guarantee the inequality [Equation (31)]: there exist scalars $\varepsilon_i > 0$, $i = 1, 2$, such that
$$\begin{bmatrix}
\bar{S}-Q & 0 & QA^T(k)+Y^TB^T(k) & QQ_1^{1/2} & Y^TR^{1/2} & QE_1^T & 0 \\
* & -\bar{S} & QA_d^T(k) & 0 & 0 & 0 & QE_2^T \\
* & * & \varepsilon_1H_1H_1^T+\varepsilon_2H_2H_2^T-Q & 0 & 0 & 0 & 0 \\
* & * & * & -\gamma I & 0 & 0 & 0 \\
* & * & * & * & -\gamma I & 0 & 0 \\
* & * & * & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & * & * & -\varepsilon_2 I
\end{bmatrix} \le 0, \tag{34}$$
where $Y = KQ$ and $\bar{S} = QSQ/\gamma$.
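Since Equation (34) is linear in $(Q, \bar{S}, Y, \gamma, \varepsilon_1, \varepsilon_2)$, the minimization in Equation (27) becomes a semidefinite program. The following CVXPY sketch assembles the LMI as reconstructed above; the system data are hypothetical placeholders, and the simple bound $x^T(k)Q^{-1}x(k) \le 1$ (i.e., $V \le \gamma$ without the delayed terms) is used here as a stand-in for constraint (25):

```python
import cvxpy as cp
import numpy as np

n, m = 2, 1

# Hypothetical data (illustrative placeholders, not taken from the paper).
A   = np.array([[0.9, 0.1], [0.0, 0.8]])   # A(k), frozen at one time instant
Ad  = 0.05 * np.eye(n)                     # A_d(k)
B   = np.array([[0.0], [0.1]])
H1, E1 = 0.05 * np.eye(n), np.eye(n)       # Delta A(k)   = H1 F(k) E1
H2, E2 = 0.05 * np.eye(n), np.eye(n)       # Delta A_d(k) = H2 F(k) E2
Q1h = np.eye(n)                            # Q1^(1/2)
Rh  = np.sqrt(0.5) * np.eye(m)             # R^(1/2)
x0  = np.array([1.0, -1.0])                # measured state x(k)

# Decision variables of Equation (34).
Q   = cp.Variable((n, n), symmetric=True)
Sb  = cp.Variable((n, n), symmetric=True)  # S-bar = Q S Q / gamma
Y   = cp.Variable((m, n))                  # Y = K Q
gam = cp.Variable(nonneg=True)
e1  = cp.Variable(nonneg=True)             # epsilon_1
e2  = cp.Variable(nonneg=True)             # epsilon_2

Zn, Znm, Zmn = np.zeros((n, n)), np.zeros((n, m)), np.zeros((m, n))
In, Im = np.eye(n), np.eye(m)

# The 7x7 block LMI of Equation (34), symmetric blocks written explicitly.
M = cp.bmat([
    [Sb - Q, Zn, Q @ A.T + Y.T @ B.T, Q @ Q1h, Y.T @ Rh, Q @ E1.T, Zn],
    [Zn, -Sb, Q @ Ad.T, Zn, Znm, Zn, Q @ E2.T],
    [A @ Q + B @ Y, Ad @ Q,
     e1 * (H1 @ H1.T) + e2 * (H2 @ H2.T) - Q, Zn, Znm, Zn, Zn],
    [Q1h @ Q, Zn, Zn, -gam * In, Znm, Zn, Zn],
    [Rh @ Y, Zmn, Zmn, Zmn, -gam * Im, Zmn, Zmn],
    [E1 @ Q, Zn, Zn, Zn, Znm, -e1 * In, Zn],
    [Zn, E2 @ Q, Zn, Zn, Znm, Zn, -e2 * In],
])
Msym = (M + M.T) / 2                       # enforce exact structural symmetry

constraints = [
    Msym << 0,                             # Equation (34)
    Q >> 1e-6 * In,
    Sb >> 0,
    # Assumed stand-in for constraint (25): x0' Q^{-1} x0 <= 1,
    # equivalent to V(x(k)) <= gamma under P = gamma * Q^{-1}.
    cp.matrix_frac(x0, Q) <= 1,
]
prob = cp.Problem(cp.Minimize(gam), constraints)
prob.solve()
print(prob.status, gam.value)
if prob.status == "optimal":
    K = Y.value @ np.linalg.inv(Q.value)   # recover the gain from Y = K Q
    print("K =", K)
```

If the data render the problem feasible, the feedback gain is recovered as $K = YQ^{-1}$, consistent with the definition $Y = KQ$.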