Figure 9. Examples of target search tasks. A: the AUV R4 breaks down; B: final trajectories of the search process [83].


rule. The winner is not selected by the shortest distance between the target and an AUV; instead, the winning AUV is the one with the maximum neural dynamic value among the neural activity values.
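A minimal sketch of this winner-selection idea is given below, assuming a discretized grid map whose neural activity values are stored in a 2-D array; the function and variable names are illustrative and not taken from the original work.

    import numpy as np

    def select_winner(neural_activity, auv_positions):
        """Pick the winning AUV by the maximum neural activity at its
        position, rather than by the shortest distance to the target.

        neural_activity : 2-D array of neural dynamic values over the grid
        auv_positions   : list of (row, col) cells occupied by the AUVs
        Returns the index of the winning AUV.
        """
        activities = [neural_activity[r, c] for r, c in auv_positions]
        return int(np.argmax(activities))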

               3.3.2. Target search
The fundamental problem of target search for multi-AUV search systems is how to control all the vehicles so that they search for their targets cooperatively along optimized paths. The initial work on search was carried out by simplifying the search problem into an area coverage problem. As in the cleaning robot application, the landscape of neural activity can guide the robot to visit every unknown area until the target is found [82]. However, the coverage algorithm is not an efficient search algorithm, as robot power is wasted on visiting unnecessary positions. To improve the efficiency of the search algorithm, a sonar system was applied to extract information from the environment, build the map, and localize the target [83]. Figure 9 shows that the proposed algorithm not only enabled the multi-AUV team to achieve the search but also ensured a successful search when one or several AUVs failed. However, factors of real environments, such as ocean currents, were excluded from this simulation, and search capacity may be wasted because of overlapping search spaces. As in the navigation application, an integrated method based on the neurodynamics model and the velocity synthesis algorithm, which takes the ocean current into consideration, was proposed for the cooperative search of the multi-AUV system [84].
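The velocity synthesis idea can be illustrated with a short sketch: the thrust velocity of the AUV is chosen so that its vector sum with the ocean current points along the direction commanded by the neurodynamics planner. The names and the closed-form solution below are assumptions for illustration rather than the exact formulation of [84].

    import numpy as np

    def velocity_synthesis(desired_dir, current, cruise_speed):
        """Choose a thrust vector v_auv such that v_auv + current lies
        along the desired direction produced by the planner."""
        d = desired_dir / np.linalg.norm(desired_dir)   # unit desired direction
        along = np.dot(current, d)                      # current along the path
        lateral = current - along * d                   # current drift off the path
        lateral_mag = np.linalg.norm(lateral)
        if lateral_mag > cruise_speed:
            raise ValueError("current too strong to hold the desired course")
        # Cancel the lateral drift; spend the remaining speed along d.
        forward = np.sqrt(cruise_speed**2 - lateral_mag**2)
        v_auv = forward * d - lateral
        return v_auv    # resultant v_auv + current points along d

This simplified geometry assumes the current does not exceed the cruise speed; a stronger opposing current would require a different strategy.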


               3.3.3. Hunting
Based on the previous study of neurodynamics-model-based hunting for mobile robots in 2-D environments, a hunting algorithm for 3-D underwater environments was proposed [85,86]. Compared with Ni and Yang's hunting algorithm [56], the catching stage is quite different when applied to underwater robots. The final hunting state can be divided into four situations; Figure 10 shows one of them, in which four AUVs surround the target.
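As a hedged illustration of one such final state, the sketch below declares the target captured when enough of its neighbouring grid cells are occupied by hunter AUVs; the neighbourhood definition and the threshold of four are illustrative assumptions rather than the exact criterion used in [85,86].

    def is_surrounded(target, hunters, required=4):
        """Return True if at least `required` of the target's face-adjacent
        cells in a 3-D grid are occupied by hunter AUVs (illustrative rule)."""
        tx, ty, tz = target
        neighbours = {(tx + dx, ty + dy, tz + dz)
                      for dx, dy, dz in [(1, 0, 0), (-1, 0, 0),
                                         (0, 1, 0), (0, -1, 0),
                                         (0, 0, 1), (0, 0, -1)]}
        return len(neighbours & set(hunters)) >= required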


A path conflict occurs when multiple AUVs choose the same position as their next move. A collision-free rule was established in which the location information of each AUV is recorded and each vehicle selects its next grid cell in anticipation before moving [87]. If another AUV has already occupied that grid, the vehicle chooses a different grid to move to.
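A minimal sketch of this collision-free rule is shown below, assuming each AUV shares a set of already claimed grid cells and moves to the free neighbouring cell with the highest neural activity; the names and the 2-D grid layout are illustrative assumptions.

    def next_cell(pos, neural_activity, claimed):
        """Select the next grid cell for one AUV, avoiding cells already
        claimed or occupied by teammates (neural_activity is a 2-D numpy array)."""
        r, c = pos
        candidates = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if not (dr == 0 and dc == 0)]
        free = [p for p in candidates
                if 0 <= p[0] < neural_activity.shape[0]
                and 0 <= p[1] < neural_activity.shape[1]
                and p not in claimed]
        if not free:
            return pos                       # stay put if every neighbour is taken
        best = max(free, key=lambda p: neural_activity[p])
        claimed.add(best)                    # announce the choice to the team
        return best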



               4. CONTROL
Robot control is an ongoing research area that attracts much attention. Control in robotics aims to develop controllers that drive the robot kinematics or dynamics to reach desired states. Intelligent control of the robot is to develop a