


dataset and can decrease the domain gap between different datasets. Future work will consider how to further optimize the whole system.



               DECLARATIONS
               Authors’ contributions
Made substantial contributions to conception and design of the study and performed data analysis, data acquisition and interpretation: Li B
               Provided administrative, technical guidance and material support: Zhang H, Wang Z, Hu L

               Availability of data and materials
               Not applicable.


               Financial support and sponsorship
This work was supported by the National Key R&D Program of China (2018YFB1305003), the National Natural Science Foundation of China (61922063), and the Shanghai Shuguang Project (18SG18).

               Conflicts of interest
               All authors declared that there are no conflicts of interest.

               Ethical approval and consent to participate
               Not applicable.

               Consent for publication
               Not applicable.


               Copyright
               © The Author(s) 2021.


