
Page 186                           Chen et al. Intell Robot 2024;4:179-95  I http://dx.doi.org/10.20517/ir.2024.11


               Algorithm 1 Staircase feature extraction, point cloud registration, and camera motion trajectory estimation
                 1: function Stair_feature_extraction(Ground_2D)
                 2:     Lines = RANSAC(Ground_2D)           ⊲ Identify the number of stairsteps
                 3:     Classify the staircase shape according to Figure 3 and identify the feature points P_feature
                 4:     return P_feature

                 5: function KNN_ICP(P_source, P_target)
                 6:     Initialize the transformation matrix T as an identity matrix
                 7:     for iter = 1 to max_iterations do
                 8:         Find the nearest correspondences between P_source and P_target
                 9:         Update T using the least squares method according to Equation 4
                10:         Apply T to P_source
                11:         Calculate the change ΔT in the displacement component of T
                12:         if ΔT ≤ ε_th then
                13:             break
                14:     return T

                15: t = 1
                16: while 1 do
                17:     P_feature,t = Stair_feature_extraction(Ground_2D,t)
                18:     P_feature,t+1 = Stair_feature_extraction(Ground_2D,t+1)
                19:     T = KNN_ICP(P_feature,t, P_feature,t+1)
                20:     Derive the transformation (R, t) and calculate Trajectory_camera according to Equation 5
                21:     t = t + 1


               The accuracy of the method is evaluated by the absolute trajectory error (ATE) between the
               estimated camera motion trajectory derived from our method and the ground truth trajectory recorded by
               the motion capture system (calculated by Equation 6, where p_est,i and p_gt,i are the estimated position and the
               ground truth position at timestep i, respectively, and N is the total number of timesteps).


                                            ATE = √( Σ_{i=1}^{N} ‖p_est,i − p_gt,i‖² / N )                    (6)
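Under this definition, the metric is simply the root-mean-square of the pointwise position error. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def absolute_trajectory_error(p_est, p_gt):
    # Equation 6: ATE = sqrt( sum_i ||p_est,i - p_gt,i||^2 / N ),
    # i.e., the RMSE between estimated and ground-truth positions.
    p_est = np.asarray(p_est, dtype=float)
    p_gt = np.asarray(p_gt, dtype=float)
    n = len(p_est)
    return np.sqrt(np.sum((p_est - p_gt) ** 2) / n)
```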
               As shown in Figure 4, a male subject was instructed to attach a Time-of-Flight (ToF) depth camera (pmd
               Camboard pico flexx2; the camera's parameters are listed in Table 1) and an IMU (IM948, 150 Hz) above
               his knee. His task involved ascending stairs while wearing these devices for eight repeated trials. The width of
               the stairs is 28 cm, and the height of the stairs is 9 cm (the first step) and 12 cm (subsequent steps). Throughout
               the experiment, the sampling rate of the point cloud was set to 30 Hz. Data from the IMU and the camera were
               acquired in two threads, and their approximate synchronization was achieved by capturing and fusing the latest
               data from both threads. In addition to these data, precise ground truth positional information for the camera
               was captured by the motion capture system (Raptor-12HS, Motion Analysis Corporation, USA) at a frequency
               of 120 Hz. The motion capture markers were also attached to the toe and heel of the subject to record the
               position of his foot, but this information was not utilized in this work. The ICP algorithm uses the extracted
               feature points to estimate the camera motion in the global coordinate system. The average time for feature
               extraction is ∼6 ms, while the KNN-ICP algorithm takes an average of ∼3 ms.
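The two-thread acquisition with latest-sample fusion described above can be sketched as follows. This is a hypothetical illustration of the synchronization scheme, not the authors' code: each sensor thread overwrites a shared holder with its newest reading, and the fusion step simply pairs whatever both holders contain when a new frame is processed.

```python
import threading

class LatestSample:
    # Thread-safe holder for the most recent reading from one sensor stream.
    def __init__(self):
        self._lock = threading.Lock()
        self._data = None

    def put(self, sample):
        # Called from the sensor's acquisition thread; overwrites old data.
        with self._lock:
            self._data = sample

    def get(self):
        with self._lock:
            return self._data

# One holder per stream; the camera thread and IMU thread each call put().
imu_latest = LatestSample()
cloud_latest = LatestSample()

def fuse_once():
    # Approximate synchronization: when a new point cloud is processed,
    # pair it with the newest IMU sample available at that instant
    # (no hardware-level timestamp alignment).
    return cloud_latest.get(), imu_latest.get()
```

Because the IMU runs at 150 Hz and the point cloud at 30 Hz, the pairing error of this scheme is bounded by roughly one IMU sampling interval.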

               The experimental results, as presented in Figure 5, indicate that the absolute trajectory error across all trials falls
               within the centimeter range. The outcomes reveal that the enhanced feature extraction method, as introduced