[1] 权美香, 朴松昊, 李国. 视觉SLAM综述[J]. 智能系统学报, 2016, 11(6): 768-776.
QUAN Meixiang, PIAO Songhao, LI Guo. An overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768-776.
[2] 高翔, 张涛, 刘毅. 视觉SLAM十四讲:从理论到实践[M]. 北京: 电子工业出版社, 2017.
GAO Xiang, ZHANG Tao, LIU Yi. 14 Lectures on Visual SLAM: From Theory to Practice[M]. Beijing: Publishing House of Electronics Industry, 2017.
[3] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]//Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China: IEEE, 2014: 15-22.
[4] 杨梦佳. 基于惯导与双目视觉融合的SLAM技术研究[D]. 西安: 西安科技大学, 2020.
YANG Mengjia. Research on SLAM technology based on inertial navigation and binocular vision fusion[D]. Xi'an: Xi'an University of Science and Technology, 2020.
[5] 唐令. 基于半直接法的单目视觉里程计设计与实现[D]. 重庆: 重庆大学, 2018.
TANG Ling. Design and implementation of semi-direct based monocular visual odometry[D]. Chongqing: Chongqing University, 2018.
[6] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[7] SCHLEGEL D, COLOSI M, GRISETTI G. ProSLAM: graph SLAM from a programmer's perspective[C]//Proceedings of 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE, 2018: 3833-3840.
[8] SUMIKURA S, SHIBUYA M, SAKURADA K. OpenVSLAM: a versatile visual SLAM framework[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York, USA: ACM, 2019.
[9] 邸凯昌, 万文辉, 赵红颖, 等. 视觉SLAM技术的进展与应用[J]. 测绘学报, 2018, 47(6): 770-779. DOI: 10.11947/j.AGCS.2018.20170652.
DI Kaichang, WAN Wenhui, ZHAO Hongying, et al. Progress and applications of visual SLAM[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(6): 770-779. DOI: 10.11947/j.AGCS.2018.20170652.
[10] ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: large-scale direct monocular SLAM[C]//Proceedings of Computer Vision - ECCV 2014. Cham: Springer International Publishing, 2014: 834-849.
[11] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625.
[12] KERL C, STURM J, CREMERS D. Dense visual SLAM for RGB-D cameras[C]//Proceedings of 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo, Japan: IEEE, 2013: 2100-2106.
[13] MELBOUCI K, COLLETTE S N, GAY-BELLILE V, et al. Model based RGBD SLAM[C]//Proceedings of 2016 IEEE International Conference on Image Processing (ICIP). Phoenix, AZ, USA: IEEE, 2016: 2618-2622.
[14] 程传奇, 郝向阳, 李建胜, 等. 基于非线性优化的单目视觉/惯性组合导航算法[J]. 中国惯性技术学报, 2017, 25(5): 643-649.
CHENG Chuanqi, HAO Xiangyang, LI Jiansheng, et al. Monocular visual inertial integrated navigation algorithm based on nonlinear optimization[J]. Journal of Chinese Inertial Technology, 2017, 25(5): 643-649.
[15] 李丰阳, 贾学东, 董明. 惯性/视觉组合导航在不同应用场景的发展[J]. 导航定位学报, 2016, 4(4): 30-35.
LI Fengyang, JIA Xuedong, DONG Ming. Development of vision/inertial integrated navigation in different application scenarios[J]. Journal of Navigation and Positioning, 2016, 4(4): 30-35.
[16] MOURIKIS A I, ROUMELIOTIS S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//Proceedings of 2007 IEEE International Conference on Robotics and Automation. Rome, Italy: IEEE, 2007: 3565-3572.
[17] BLOESCH M, BURRI M, OMARI S, et al. Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback[J]. The International Journal of Robotics Research, 2017, 36(10): 1053-1072.
[18] LEUTENEGGER S, FURGALE P, RABAUD V, et al. Keyframe-based visual-inertial SLAM using nonlinear optimization[C]//Proceedings of Robotics: Science and Systems 2013. Berlin, Germany: Robotics: Science and Systems Foundation, 2013.
[19] ZOU Danping, WU Yuanxin, PEI Ling, et al. StructVIO: visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.
[20] QIN Tong, LI Peiliang, SHEN Shaojie. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[21] HARRIS C, STEPHENS M. A combined corner and edge detector[C]//Proceedings of the Alvey Vision Conference 1988. Manchester, UK: Alvey Vision Club, 1988.
[22] LEUTENEGGER S, CHLI M, SIEGWART R Y. BRISK: binary robust invariant scalable keypoints[C]//Proceedings of 2011 IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 2548-2555.
[23] 李虎民. 融合直接法和特征法的单目SLAM技术研究[D]. 西安: 西安电子科技大学, 2019.
LI Humin. Research on monocular SLAM technology integrating direct method and feature-based method[D]. Xi'an: Xidian University, 2019.
[24] 程俊廷, 郭博洋, 田宽. 改进的LK光流法在SLAM中的应用[J]. 黑龙江科技大学学报, 2019, 29(6): 736-740.
CHENG Junting, GUO Boyang, TIAN Kuan. Application of improved LK optical flow method in SLAM[J]. Journal of Heilongjiang University of Science and Technology, 2019, 29(6): 736-740.
[25] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.