Acta Geodaetica et Cartographica Sinica, 2025, Vol. 54, Issue (11): 2009-2025. doi: 10.11947/j.AGCS.2025.20250199

• Photogrammetry and Remote Sensing •

Remote sensing image scene classification method integrating spatial and semantic information of transferred features

Xi GONG1,2, Zhanlong CHEN3,4,5, Hengqiang ZHENG1, Sheng HU6, Hongyan ZHANG3

  1. School of Computer and Artificial Intelligence, Hubei University of Education, Wuhan 430205, China
    2. Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430078, China
    3. Department of Computer Science, China University of Geosciences, Wuhan 430074, China
    4. Key Laboratory of Geological Survey and Evaluation of Ministry of Education, China University of Geosciences, Wuhan 430074, China
    5. Engineering Research Center of Natural Resource Information Management and Digital Twin Engineering Software, Ministry of Education, Wuhan 430074, China
    6. Beidou Research Institute, South China Normal University, Foshan 528225, China
  • Received: 2025-05-09; Revised: 2025-09-27; Published: 2025-12-15
  • Contact: Sheng HU. E-mail: gongxi@hue.edu.cn; husheng@m.scnu.edu.cn
  • About author: GONG Xi (1992—), female, PhD, lecturer, specializing in remote sensing and spatial data analysis. E-mail: gongxi@hue.edu.cn
  • Supported by:
    The National Key Research and Development Program of China (2022YFB3903605); The National Natural Science Foundation of China (42301495); MOE (Ministry of Education in China) Project of Humanities and Social Sciences (24YJC880047); Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing (KLIGIP-2022-A02)

Abstract:

To address scene confusion and low classification accuracy caused by the complex spatial distribution of ground objects in remote sensing (RS) scenes, a novel classification method is proposed that integrates spatial and semantic information from transferred RS scene features. Exploiting the complementary ability of transferred features from different levels of a deep convolutional neural network to represent local detail and global semantic information, a deep spatial co-occurrence matrix is constructed to quantify the spatial co-occurrence patterns of local features, and this matrix is then fused with high-level semantic features. The resulting spatial-semantic joint feature jointly represents the spatial and semantic content of a scene, thereby enhancing the recognition of complex RS scenes. Experiments on several RS scene classification datasets demonstrate that the proposed method effectively discriminates complex and easily confused scenes, showing advantages in spatial information representation and classification performance.
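As a rough illustration of the idea described in the abstract (not the paper's exact formulation), the minimal sketch below quantizes an intermediate convolutional feature map by its dominant channel, counts spatial co-occurrences of the resulting codes into a normalized matrix, and concatenates that matrix with a global semantic vector to form a joint descriptor. The channel-argmax quantization, the neighborhood offsets, and the feature shapes are assumptions made for illustration only.

```python
import numpy as np

def deep_spatial_cooccurrence(local_feats: np.ndarray,
                              offsets=((0, 1), (1, 0))) -> np.ndarray:
    """Co-occurrence matrix over quantized local CNN features.

    local_feats: (C, H, W) feature map from an intermediate conv layer.
    Each spatial position is coded by its strongest channel (an assumed,
    simplified quantization); the matrix counts how often code i appears
    next to code j for the given neighborhood offsets.
    """
    C, H, W = local_feats.shape
    codes = local_feats.argmax(axis=0)            # (H, W) codeword map
    cooc = np.zeros((C, C), dtype=np.float64)
    for dy, dx in offsets:                        # right and down neighbors
        a = codes[:H - dy, :W - dx]
        b = codes[dy:, dx:]
        np.add.at(cooc, (a.ravel(), b.ravel()), 1.0)
    cooc /= max(cooc.sum(), 1.0)                  # normalize to a distribution
    return cooc

def spatial_semantic_feature(local_feats: np.ndarray,
                             semantic_feats: np.ndarray) -> np.ndarray:
    """Concatenate the flattened co-occurrence matrix with the global
    semantic vector to form the joint spatial-semantic descriptor."""
    cooc = deep_spatial_cooccurrence(local_feats).ravel()
    return np.concatenate([cooc, semantic_feats])

# Placeholder inputs standing in for transferred CNN features.
local = np.random.rand(64, 14, 14)      # e.g. an intermediate conv feature map
semantic = np.random.rand(512)          # e.g. a high-level (penultimate) feature
joint = spatial_semantic_feature(local, semantic)
print(joint.shape)                      # (64*64 + 512,) -> fed to a classifier
```

In this sketch the joint descriptor would be passed to a conventional classifier (e.g. SVM or softmax); the actual fusion and classification details follow the paper, which this example does not reproduce.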

Key words: remote sensing image, scene classification, transferred features, deep spatial co-occurrence matrix, spatial-semantic information fusion
