Acta Geodaetica et Cartographica Sinica (测绘学报) ›› 2019, Vol. 48 ›› Issue (4): 460-472. doi: 10.11947/j.AGCS.2019.20180429
WU Meng1,2,3, HAO Jinming1, FU Hao4, GAO Yang2,3, ZHANG Hui5 (乌萌, 郝金明, 付浩, 高扬, 张辉)
Received: 2018-09-18
Revised: 2019-02-02
Online: 2019-04-20
Published: 2019-05-15
About the author: WU Meng (1983-), female, PhD candidate, engineer; research interest: navigation and location-based services. E-mail: wumeng19nudt@163.com
Abstract: To optimize real-time platform pose estimation from image sequences collected by the stereo cameras of ground mobile mapping systems (MMS) and autonomous vehicle (AV) platforms, this paper exploits the relationship between platform pose and image optical flow vectors given by the optical flow motion field model, decoupling each flow vector into three translational components, three rotational components, and one depth component. The influence of single-component and combined-component errors on pose estimation after decoupling is derived and analyzed, and simulation and real-data experiments verify the effectiveness of the single-component and combined-component error separation models under different motion models. Building on the combined-component error separation model, a flow-decoupled motion field pose optimization algorithm for stereo visual odometry pose estimation is proposed. Experimental results show that, at almost the same computational cost as the initial estimate, the algorithm reduces the mean lateral translation error of the platform from 4.75% to 2.2%, an average reduction of 53.6%, and the mean forward translation error from 2.2% to 1.9%, an average reduction of 15.4%. The accumulated error rate over long runs remains low, meeting the requirements of real-time platform pose estimation for integrated navigation under low-power, high-efficiency computing conditions.
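The decoupling described in the abstract builds on the classical optical flow motion field model that relates camera motion to image flow. The following is an illustrative sketch only, with notation and sign conventions assumed here rather than taken from the paper: for an image point (x, y) with scene depth Z, focal length f, and translational velocity (t_x, t_y, t_z) and angular velocity (ω_x, ω_y, ω_z) of the camera relative to the scene, the flow vector (u, v) separates into a depth-dependent translational part and a depth-independent rotational part:

\[
\begin{aligned}
u &= \frac{t_z x - t_x f}{Z} + \frac{\omega_x x y}{f} - \omega_y\left(f + \frac{x^2}{f}\right) + \omega_z y,\\
v &= \frac{t_z y - t_y f}{Z} + \omega_x\left(f + \frac{y^2}{f}\right) - \frac{\omega_y x y}{f} - \omega_z x.
\end{aligned}
\]

Because only the translational terms are scaled by 1/Z, errors in the three translational components, the three rotational components, and the depth component perturb the flow through separate terms, which is what makes the single-component and combined-component error analysis mentioned in the abstract tractable.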
WU Meng, HAO Jinming, FU Hao, GAO Yang, ZHANG Hui. A stereo visual odometry pose optimization method via flow-decoupled motion field model[J]. Acta Geodaetica et Cartographica Sinica, 2019, 48(4): 460-472.