测绘学报 (Acta Geodaetica et Cartographica Sinica), 2020, Vol. 49, Issue (11): 1473-1484. doi: 10.11947/j.AGCS.2020.20190439

• Photogrammetry and Remote Sensing •

Multi-temporal remote sensing imagery semantic segmentation color consistency adversarial network

LI Xue, ZHANG Li, WANG Qingdong, AI Haibin

  1. Chinese Academy of Surveying and Mapping, Beijing 100830, China
  • Received:2019-10-28 Revised:2020-07-07 Published:2020-11-25
  • Corresponding author: ZHANG Li, E-mail: zhangl@casm.ac.cn
  • About the first author: LI Xue (1993-), female, master's student; research interest: intelligent interpretation of remote sensing imagery. E-mail: amberlixue1229@163.com
  • Supported by:
    The National Key Research and Development Project (No. 2019YFB1405600); The Basic Scientific Research Project of Chinese Academy of Surveying and Mapping (No. AR1902)

Multi-temporal remote sensing imagery semantic segmentation color consistency adversarial network

LI Xue, ZHANG Li, WANG Qingdong, AI Haibin   

  1. Chinese Academy of Surveying and Mapping, Beijing 100830, China
  • Received:2019-10-28 Revised:2020-07-07 Published:2020-11-25
  • Supported by:
    The National Key Research and Development Project (No. 2019YFB1405600);The Basic Scientific Research Project of Chinese Academy of Surveying and Mapping (No. AR1902)

Abstract: Using deep convolutional neural networks to intelligently extract buildings from remote sensing images is of great significance for digital city construction, disaster investigation, and land management. Color differences between multi-temporal remote sensing images degrade the generalization ability of building semantic segmentation models. To address this, this paper proposes the attention-guided color consistency adversarial network (ACGAN). The algorithm takes reference color style images, together with to-be-corrected images of the same area acquired at different times, as the training set, and obtains a color consistency model by training a cycle-consistent generative adversarial network into which a U-shaped attention mechanism has been added. In the prediction stage, the model converts the hue of the to-be-corrected images into that of the reference color style images; this stage relies on the reasoning ability of the deep learning model and no longer requires the corresponding reference color style images. To verify the effectiveness of the algorithm, we first compared it with traditional image processing algorithms and other cycle-consistent generative adversarial networks; the results show that images processed by ACGAN are closer in hue to the reference color style images. We then performed building semantic segmentation experiments on the images produced by these different color consistency algorithms, demonstrating that the proposed method is more conducive to improving the generalization ability of multi-temporal remote sensing image semantic segmentation models.

Keywords: multi-temporal remote sensing imagery, color consistency, generative adversarial networks, attention mechanism, semantic segmentation

Abstract: Using deep convolutional neural networks (CNN) to intelligently extract buildings from remote sensing images is of great significance for digital city construction, disaster detection and land management. Color differences between multi-temporal remote sensing images degrade the generalization ability of building semantic segmentation models. In view of this, this paper proposes the attention-guided color consistency adversarial network (ACGAN). The algorithm takes reference color style images and the images to be corrected, which cover the same area but were acquired at different times, as the training set, and trains a color consistency model with a cycle-consistent generative adversarial network that incorporates a U-shaped attention mechanism. In the prediction stage, this model converts the hue of the images to be corrected into that of the reference color style images; this stage relies on the reasoning ability of the trained deep learning model, so the corresponding reference color style images are no longer needed. To verify the effectiveness of the algorithm, we first compared it with traditional image processing algorithms and other cycle-consistent generative adversarial networks. The results show that the images after ACGAN color consistency processing are closer in hue to the reference color style images. We then carried out building semantic segmentation experiments on the images processed by the different color consistency algorithms, which showed that the proposed method is more conducive to improving the generalization ability of multi-temporal remote sensing image semantic segmentation models.
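The abstract names two architectural ingredients: a U-shaped (U-Net-style) generator whose skip connections are weighted by an attention mechanism, and a cycle-consistency constraint between the to-be-corrected and reference-style image domains. The paper's own code is not given here; the PyTorch sketch below is only a minimal illustration of how those two ingredients could fit together. All class names, layer sizes and the random placeholder tensors are assumptions for illustration, not the authors' ACGAN implementation; the adversarial discriminators and full loss terms are omitted.

```python
# Hypothetical sketch (not the authors' code): an attention-gated U-shaped generator
# plus a cycle-consistency term, approximating the two ideas named in the abstract.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Weights skip-connection features by an attention map derived from the decoder signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.ReLU(inplace=True),
                                 nn.Conv2d(inter_ch, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, skip, gate):
        att = self.psi(self.w_skip(skip) + self.w_gate(gate))  # attention map in [0, 1]
        return skip * att                                       # suppress irrelevant skip features

class TinyAttentionUNet(nn.Module):
    """A deliberately small U-shaped generator with one attention-gated skip connection."""
    def __init__(self, ch=3, base=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                                  nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.att = AttentionGate(skip_ch=base, gate_ch=base, inter_ch=base)
        self.out = nn.Conv2d(base * 2, ch, 3, padding=1)

    def forward(self, x):
        s = self.enc(x)            # encoder features (skip connection)
        g = self.up(self.down(s))  # decoder signal at the same resolution
        s = self.att(s, g)         # attention-weighted skip features
        return torch.tanh(self.out(torch.cat([s, g], dim=1)))

# One simplified cycle-consistency step: G maps "to-correct" -> reference style, F maps back.
G, F = TinyAttentionUNet(), TinyAttentionUNet()
l1 = nn.L1Loss()
x = torch.rand(1, 3, 64, 64) * 2 - 1   # placeholder tile to be color-corrected
y = torch.rand(1, 3, 64, 64) * 2 - 1   # placeholder reference-style tile of the same area
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)   # adversarial terms of the full model omitted
cycle_loss.backward()
```

In a full CycleGAN-style setup, G would additionally be trained against a discriminator on the reference-style domain (and F against one on the source domain), which is consistent with the abstract's claim that, at prediction time, only the trained generator is applied and no reference color style image is needed.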

Key words: multi-temporal remote sensing imagery, color consistency, generative adversarial networks, semantic segmentation, attention mechanism
