Acta Geodaetica et Cartographica Sinica ›› 2023, Vol. 52 ›› Issue (9): 1515-1527. doi: 10.11947/j.AGCS.2023.20220417

• Photogrammetry and Remote Sensing •

  • Corresponding author: GONG Jianya, E-mail: gongjy@whu.edu.cn
  • About the author: YAO Guobiao (1985-), male, PhD, professor, master's supervisor; research interest: intelligent matching of remote sensing imagery. E-mail: yao7837005@sdjzu.edu.cn

The automatic stitching algorithm with anti-parallax for wide-baseline weak-texture images

YAO Guobiao1,2, HUANG Pengfei1, GONG Jianya2, MENG Fei1, ZHANG Jin1   

  1. College of Surveying and Geo-Informatics, Shandong Jianzhu University, Jinan 250100, China;
    2. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
  • Received: 2022-07-04  Revised: 2023-03-29  Published: 2023-10-12
  • Supported by:
    The National Natural Science Foundation of China (No. 42171435); The Natural Science Foundation of Shandong Province (No. ZR2021MD006)



Abstract: With existing algorithms, stitching wide-baseline weak-texture images that contain parallax discontinuities is difficult, and the stitching task usually requires manual intervention. To address this, we improve the two critical steps of image matching and image registration and propose a fully automatic, anti-parallax stitching algorithm for wide-baseline weak-texture images. First, quasi-dense correspondences of weak-texture features are obtained from coarse to fine with a local feature transformer model that incorporates geometric correction of the image perspective. Next, based on the matching points and a deep neural network (DNN), a reliable perspective transform between the wide-baseline images is learned to eliminate the global registration disparity, and the remaining local disparities are precisely fitted by a thin-plate-spline (TPS) function. Furthermore, the polygonal boundary of the stitching result is regularized into a rectangle by a fully convolutional network, which effectively removes blank areas while preserving the stitched content to the maximum extent. Finally, four groups of UAV and ground close-range wide-baseline weak-texture stereo image pairs are selected for testing, and the results of the image matching and registration stages of our method are compared with those of existing representative algorithms. The experimental results verify that our method has significant advantages in the number of matching points, matching accuracy, and stitching quality, and that it remains stable in weak-texture regions and parallax-discontinuity scenes.
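The registration step described above fits the local disparities left over after the global perspective transform with a thin-plate-spline interpolant over the matched control points. As an illustrative sketch only (the paper's matching pipeline and trained networks are not reproduced; the function names and sample data below are invented for illustration), a minimal NumPy implementation of standard TPS fitting and evaluation could look like:

```python
import numpy as np

def tps_kernel(r2):
    """TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0 by continuity."""
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(ctrl, vals):
    """Solve the standard TPS linear system for 2-D control points `ctrl`
    (n x 2) and one scalar residual per point `vals` (n,), e.g. the
    x-component of the leftover disparity at each matched point."""
    n = len(ctrl)
    d2 = ((ctrl[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), ctrl])       # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T                              # side conditions P^T w = 0
    b = np.zeros(n + 3)
    b[:n] = vals
    return np.linalg.solve(A, b)                 # [w_1..w_n, a0, a1, a2]

def eval_tps(params, ctrl, pts):
    """Evaluate the fitted spline at query points `pts` (m x 2)."""
    d2 = ((pts[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[:len(ctrl)] + P @ params[len(ctrl):]
```

In a registration setting the fit would be run twice, once per disparity component, and the two splines evaluated on a pixel grid to obtain a dense residual warp that is composed with the global homography.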

Key words: image matching, disparity discontinuity, wide-baseline weak-texture images, deep neural network, automatic stitching

CLC number: