


               2.2. Problem formulation
Consider a linear discrete-time multi-agent system consisting of one leader and $N$ follower agents:

$$
\begin{aligned}
x_i(k+1) &= A\,x_i(k) + B\,u_i(k), \qquad i = 1, 2, \ldots, N, \\
x_0(k+1) &= A\,x_0(k),
\end{aligned}
\tag{1}
$$
where $x_i(k) \in \mathbb{R}^{n}$ and $u_i(k) \in \mathbb{R}^{m}$ are the state and input of the $i$th follower, respectively, and $x_0(k) \in \mathbb{R}^{n}$ is the state of the leader. The matrices $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are known constant system matrices.
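To make the setup concrete, the following is a minimal numerical sketch of the dynamics in Eq. (1). The matrices $A$ and $B$, the number of followers $N$, and the state-feedback input $u_i(k)$ are illustrative assumptions for this sketch only, not values or a protocol taken from this paper.

```python
import numpy as np

# Minimal sketch of Eq. (1): one leader and N followers.
# A, B, N, and the feedback gain K are illustrative placeholders.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # A in R^{n x n}, n = 2
B = np.array([[0.0],
              [0.1]])               # B in R^{n x m}, m = 1
N = 3                               # number of followers (placeholder)
K = np.array([[-1.0, -1.5]])        # hypothetical feedback gain

x0 = np.array([1.0, 0.0])                      # leader state x_0(k)
x = [np.random.randn(2) for _ in range(N)]     # follower states x_i(k)

for k in range(50):
    # Hypothetical tracking input u_i(k) = K (x_i(k) - x_0(k)).
    u = [K @ (x[i] - x0) for i in range(N)]
    # Follower update: x_i(k+1) = A x_i(k) + B u_i(k)
    x = [A @ x[i] + (B @ u[i]).ravel() for i in range(N)]
    # Leader update: x_0(k+1) = A x_0(k)
    x0 = A @ x0

print("tracking errors:", [np.linalg.norm(x[i] - x0) for i in range(N)])
```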
In the real world, the communication network topology among agents is more likely to be time-varying. In this paper, a switching signal $\sigma(k)$ is used to characterize the topology switching among agents. $\{\sigma(k), k \in \mathbb{N}^+\}$ represents a discrete-time semi-Markov chain with values in a finite set $\mathcal{O} = \{1, 2, \ldots, s\}$.

To describe the semi-Markov chain more formally, the following concepts are introduced. (I) The stochastic process $\{R_n, n \in \mathbb{N}\}$, taking values in $\mathcal{O}$, is denoted as the mode index of the $n$th jump. (II) The stochastic process $\{k_n, n \in \mathbb{N}\}$, taking values in $\mathbb{N}$, represents the time instant of the $n$th jump. (III) The stochastic process $\{S_n, n \in \mathbb{N}^+\}$, taking values in $\mathbb{N}^+$, stands for the sojourn time of mode $R_{n-1}$ between the $(n-1)$th jump and the $n$th jump, where $S_n = k_n - k_{n-1}$.
Definition 1 [39]  The stochastic process $\{(R_n, k_n), n \in \mathbb{N}\}$ is said to be a discrete-time homogeneous Markov renewal chain (MRC) if the following condition holds for all $p, q \in \mathcal{O}$, $\lambda \in \mathbb{N}^+$, $n \in \mathbb{N}$:
$$
\begin{aligned}
\Pr\{R_{n+1} = q, S_{n+1} = \lambda \mid R_0, R_1, \ldots, R_n = p; k_0, \ldots, k_n\}
&= \Pr\{R_{n+1} = q, S_{n+1} = \lambda \mid R_n = p\} \\
&= \Pr\{R_1 = q, S_1 = \lambda \mid R_0 = p\},
\end{aligned}
$$
where $\{R_n, n \in \mathbb{N}\}$ is named the embedded Markov chain (EMC) of the MRC.

Denote the matrix $\Pi(\lambda) = [\pi_{pq}(\lambda)] \in \mathbb{R}^{s \times s}$ as the discrete-time semi-Markov kernel with
$$
\begin{aligned}
\pi_{pq}(\lambda) &= \Pr\{R_{n+1} = q, S_{n+1} = \lambda \mid R_n = p\} \\
&= \frac{\Pr\{R_{n+1} = q, R_n = p\}}{\Pr\{R_n = p\}} \cdot \frac{\Pr\{R_{n+1} = q, S_{n+1} = \lambda, R_n = p\}}{\Pr\{R_{n+1} = q, R_n = p\}} \\
&= \theta_{pq}\,\omega_{pq}(\lambda),
\end{aligned}
\tag{2}
$$
where $\sum_{\lambda=0}^{\infty} \sum_{q \in \mathcal{O}} \pi_{pq}(\lambda) = 1$ and $0 \le \pi_{pq}(\lambda) \le 1$ with $\pi_{pq}(0) = 0$. The transition probability of the EMC is defined by $\theta_{pq} = \Pr\{R_{n+1} = q \mid R_n = p\}$, $\forall p, q \in \mathcal{O}$, with $\theta_{pp} = 0$, and the probability density function of the sojourn time is given by $\omega_{pq}(\lambda) = \Pr\{S_{n+1} = \lambda \mid R_{n+1} = q, R_n = p\}$, $\omega_{pq}(0) = 0$.
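As a concrete illustration of the kernel in Eq. (2), and of the point made in Remark 1 below that different sojourn-time distribution types may coexist for different mode pairs, the following sketch assembles $\Pi(\lambda) = [\theta_{pq}\,\omega_{pq}(\lambda)]$ from illustrative choices of $\theta_{pq}$ and $\omega_{pq}(\lambda)$ and checks the normalization $\sum_{\lambda}\sum_{q \in \mathcal{O}} \pi_{pq}(\lambda) = 1$. The two-mode setting, the geometric and discrete-uniform sojourn-time pdfs, and all numerical values are assumptions for illustration only (modes are 0-indexed in the code).

```python
import numpy as np

# Sketch of the semi-Markov kernel pi_{pq}(lam) = theta_{pq} * omega_{pq}(lam)
# for s = 2 modes. theta and the sojourn-time pdfs are illustrative choices.
s = 2
L = 60                                   # truncation horizon for the sojourn time
theta = np.array([[0.0, 1.0],            # EMC transition probabilities, theta_pp = 0
                  [1.0, 0.0]])

def geometric_pdf(rho, lam):
    """omega(lam) = (1 - rho) * rho**(lam - 1) for lam >= 1, omega(0) = 0."""
    return 0.0 if lam == 0 else (1.0 - rho) * rho ** (lam - 1)

def uniform_pdf(lo, hi, lam):
    """Discrete uniform sojourn time on {lo, ..., hi}."""
    return 1.0 / (hi - lo + 1) if lo <= lam <= hi else 0.0

# Different (p, q) pairs may use different distribution types (cf. Remark 1).
omega = {(0, 1): lambda lam: geometric_pdf(0.6, lam),
         (1, 0): lambda lam: uniform_pdf(1, 5, lam)}

# Kernel Pi(lam) for lam = 0, ..., L - 1.
Pi = np.zeros((L, s, s))
for lam in range(L):
    for p in range(s):
        for q in range(s):
            if p != q:
                Pi[lam, p, q] = theta[p, q] * omega[(p, q)](lam)

# Check sum_lam sum_q pi_{pq}(lam) = 1 (up to truncation of the geometric tail).
print(Pi.sum(axis=(0, 2)))   # ~[1., 1.]
print(Pi[0])                 # pi_{pq}(0) = 0
```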

               Remark 1 References [35,37,38]  studied the leader-following consensus and containment control problems for
               multi-agent systems with semi-Markov switching topologies, respectively. A continuous-time semi-Markov
               jump process is employed to describe the switching of the topology. Accordingly, the probability density func-
               tion of the sojourn-time can only be of a fixed probability distribution type for the different modes. This limits
               its practical application. In this paper, a discrete semi-Markov chain is introduced to characterize topology
               switching among agents. The introduced probability density function of sojourn time depends on both the
               current mode and the next mode, so that different parameters of the same distribution or different types of
               probability distributions can coexist. Hence, the probability density function introduced in this paper is more
               applicable than that in the literature [35,37,38] .

Definition 2 [39]  The stochastic process $\{\sigma(k), k \in \mathbb{N}^+\}$ is said to be a semi-Markov chain associated with the MRC $\{(R_n, k_n), n \in \mathbb{N}\}$ if $\sigma(k) = R_{N(k)}$, $\forall k \in \mathbb{N}^+$, where $N(k) = \max\{n \in \mathbb{N} \mid k_n \le k\}$.
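To illustrate Definition 2, the following sketch first samples a Markov renewal chain $\{(R_n, k_n)\}$ using the same illustrative EMC and sojourn-time distributions as in the previous sketch, and then recovers the semi-Markov switching signal $\sigma(k) = R_{N(k)}$ with $N(k) = \max\{n \in \mathbb{N} \mid k_n \le k\}$. All parameters are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of Definition 2: build sigma(k) = R_{N(k)} from a sampled Markov
# renewal chain {(R_n, k_n)}. theta and the sojourn-time laws are the same
# illustrative placeholders as in the previous sketch (modes 0-indexed).
theta = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

def sample_sojourn(p, q):
    # omega_{01}: geometric on {1, 2, ...}; omega_{10}: uniform on {1, ..., 5}
    return rng.geometric(0.4) if (p, q) == (0, 1) else rng.integers(1, 6)

horizon = 40
R, kjump = [0], [0]                          # R_0 = 0, k_0 = 0
while kjump[-1] <= horizon:
    p = R[-1]
    q = rng.choice(len(theta), p=theta[p])   # next mode R_{n+1} ~ theta_{p.}
    S = sample_sojourn(p, q)                 # sojourn time S_{n+1} of mode p
    R.append(q)
    kjump.append(kjump[-1] + S)              # k_{n+1} = k_n + S_{n+1}

# sigma(k) = R_{N(k)} with N(k) = max{n : k_n <= k}
kjump = np.array(kjump)
sigma = [int(R[np.searchsorted(kjump, k, side="right") - 1])
         for k in range(horizon + 1)]
print(sigma)
```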