Acta Geodaetica et Cartographica Sinica ›› 2025, Vol. 54 ›› Issue (7): 1230-1242. doi: 10.11947/j.AGCS.2025.20240485

• Photogrammetry and Remote Sensing •

DRformer: a progressive coupled multiscale CNN and condensed attention Transformer method for hyperspectral image super-resolution

Qing CHENG, Boxuan WANG, Hongyan ZHANG

  1. School of Computer Science, China University of Geosciences, Wuhan 430074, China
  • Received: 2024-12-03 Revised: 2025-07-01 Online: 2025-08-18 Published: 2025-08-18
  • Contact: Hongyan ZHANG E-mail: qingcheng@whu.edu.cn; zhanghongyan@cug.edu.cn
  • About author:CHENG Qing (1987—), female, PhD, researcher, PhD supervisor, majors in remote sensing information processing and applications. E-mail: qingcheng@whu.edu.cn
  • Supported by:
    The National Key Research and Development Program of China (2022YFB3903605); The National Natural Science Foundation of China (42171383); Natural Science Foundation of Wuhan (2024040801020278)

Abstract:

Hyperspectral image super-resolution aims to enhance the spatial detail and quality of low-resolution hyperspectral images for better use in applications such as environmental monitoring. In recent years, deep convolutional neural networks have made significant progress on single hyperspectral image super-resolution. However, balancing the learning of multi-scale local spatial features against global detail features remains a challenge. This paper presents DRformer, a fusion network that integrates convolutional neural networks and the Transformer architecture through a progressive upsampling strategy. The network first employs a multi-scale adaptive weighted spectral attention module to extract local features and selectively emphasize spectral information, followed by an initial upsampling. A CADR module based on the Transformer architecture is then incorporated after a second upsampling to process global image features and enhance effective information. To verify the effectiveness and robustness of the network, experiments were conducted on the Chikusei and Houston2013 datasets. The results show that DRformer outperforms existing deep learning methods, including GDRRN, SSPSR, EUNet, and MSDformer, in super-resolution performance. Ablation experiments further validate the contribution of each module.
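As a rough illustration of the spectral-attention idea described above (not the authors' code), the following NumPy sketch reweights the bands of a hyperspectral cube by a softmax over per-band global-average responses. The function name, the single-scale pooling, and the softmax weighting are simplifying assumptions for illustration; the paper's module is multi-scale and learned.

```python
import numpy as np

def spectral_attention(cube, temperature=1.0):
    """Hypothetical, simplified spectral attention for a cube of
    shape (bands, H, W): each band is scaled by an adaptive weight
    derived from its global-average response."""
    # Global average pooling per band -> vector of shape (bands,)
    band_stats = cube.reshape(cube.shape[0], -1).mean(axis=1)
    # Numerically stable softmax turns responses into band weights
    logits = band_stats / temperature
    logits -= logits.max()
    weights = np.exp(logits)
    weights /= weights.sum()
    # Rescale so the weights average to 1, keeping overall energy comparable
    weights *= cube.shape[0]
    # Broadcast the per-band weights over the spatial dimensions
    return cube * weights[:, None, None], weights

# Usage: emphasize informative bands of a small random cube
cube = np.random.rand(8, 4, 4)
out, w = spectral_attention(cube)
```

In the full network, such reweighted features would feed the CNN branch before the first upsampling stage; the Transformer-based CADR module then operates on the upsampled result.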

Key words: hyperspectral image, super-resolution, Transformer, attention mechanism
