
Furthermore, even as the data scale grows exponentially, the runtime of CUDA-SLAM remains far lower than that of traditional methods. Enhancing real-time capability is crucial because real-time responsiveness is a key requirement in many real-world applications. Loop closure detection is another important component of a VSLAM system: it determines whether the robot has returned to a previously visited position by comparing the current frame against reference keyframes using the bag-of-words (BoW) model. The similarity calculations involved are repetitive and mutually independent, and are therefore well suited to acceleration. Accordingly, we plan to parallelize loop closure detection in future work to further improve the performance of the VSLAM system.
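
As a rough illustration of why these similarity scores parallelize well (our sketch, not the authors' implementation), the CUDA fragment below assigns one thread block to each candidate keyframe and reduces the per-word differences of the L1 score s(v, w) = 1 - 0.5 * sum_i |v_i - w_i| commonly used for L1-normalized BoW vectors. The kernel name, the dense vector layout, and the vocabulary and keyframe sizes are illustrative assumptions.

    // Sketch only: parallel BoW similarity scoring for loop closure candidates.
    // Assumes BoW vectors are dense, L1-normalized float arrays of length
    // vocabSize; names and sizes are hypothetical, not from the paper.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // One block per candidate keyframe; threads cooperatively accumulate
    // sum_i |v_i - w_i|, then a tree reduction combines the partial sums.
    __global__ void bowScoreKernel(const float* current,   // [vocabSize]
                                   const float* keyframes, // [numKf * vocabSize]
                                   float* scores,          // [numKf]
                                   int vocabSize)
    {
        extern __shared__ float partial[];
        const float* kf = keyframes + blockIdx.x * (size_t)vocabSize;

        float sum = 0.0f;
        for (int i = threadIdx.x; i < vocabSize; i += blockDim.x)
            sum += fabsf(current[i] - kf[i]);
        partial[threadIdx.x] = sum;
        __syncthreads();

        // Tree reduction (blockDim.x must be a power of two).
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                partial[threadIdx.x] += partial[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0)
            scores[blockIdx.x] = 1.0f - 0.5f * partial[0];
    }

    int main()
    {
        const int vocabSize = 4096; // illustrative vocabulary size
        const int numKf = 512;      // illustrative number of candidates

        // Uniform L1-normalized vectors as placeholder data; identical
        // vectors give the maximum score of 1.0.
        std::vector<float> hCurrent(vocabSize, 1.0f / vocabSize);
        std::vector<float> hKeyframes((size_t)numKf * vocabSize, 1.0f / vocabSize);
        std::vector<float> hScores(numKf);

        float *dCurrent, *dKeyframes, *dScores;
        cudaMalloc(&dCurrent, vocabSize * sizeof(float));
        cudaMalloc(&dKeyframes, (size_t)numKf * vocabSize * sizeof(float));
        cudaMalloc(&dScores, numKf * sizeof(float));
        cudaMemcpy(dCurrent, hCurrent.data(), vocabSize * sizeof(float),
                   cudaMemcpyHostToDevice);
        cudaMemcpy(dKeyframes, hKeyframes.data(),
                   (size_t)numKf * vocabSize * sizeof(float),
                   cudaMemcpyHostToDevice);

        const int threads = 256;
        bowScoreKernel<<<numKf, threads, threads * sizeof(float)>>>(
            dCurrent, dKeyframes, dScores, vocabSize);
        cudaMemcpy(hScores.data(), dScores, numKf * sizeof(float),
                   cudaMemcpyDeviceToHost);

        // Pick the best-scoring candidate as the loop closure hypothesis.
        int best = 0;
        for (int i = 1; i < numKf; ++i)
            if (hScores[i] > hScores[best]) best = i;
        printf("best candidate %d, score %.3f\n", best, hScores[best]);

        cudaFree(dCurrent); cudaFree(dKeyframes); cudaFree(dScores);
        return 0;
    }

Because each candidate's score depends only on the current frame and that candidate's own vector, all blocks run independently, which is exactly the property that makes this stage a natural target for GPU acceleration.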





               DECLARATIONS
               Acknowledgments
               We are grateful for the efforts of our colleagues in the Sino-German Center of Intelligent Systems.

               Authors’ contributions
               Made substantial contributions to the research process and wrote the original draft: Shu Z, Liu Y, Hou C
               Performed data acquisition: Xu S, Lv T
               Provided guidance and support: Liu H, Dong Y

               Availability of data and materials
               Not applicable.

               Financial support and sponsorship
This work was supported by the National Natural Science Foundation of China under Grant 61873189 and Grant 62088101, the Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0100, and the 19th Experimental Teaching Reform Fund of Tongji University under Grant 0800104314.

               Conflicts of interest
               All authors declared that there are no conflicts of interest.

               Ethical approval and consent to participate
               Not applicable.

               Consent for publication
               Not applicable.

               Copyright
               © The Author(s) 2024.



