Acta Geodaetica et Cartographica Sinica ›› 2022, Vol. 51 ›› Issue (6): 873-884.doi: 10.11947/j.AGCS.2022.20220106

• Academician Forum •

Deep learning interpretability analysis methods in image interpretation

GONG Jianya1,2, HUAN Linxi1, ZHENG Xianwei1   

  1. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China;
    2. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
  • Received:2022-02-18 Revised:2022-04-17 Published:2022-07-02
  • Supported by:
    The National Natural Science Foundation of China (Nos. 42090010;42071370)

Abstract: The rapid development of deep learning has greatly improved the performance of various computer vision tasks. However, the "black box" nature of deep learning network models makes it difficult for users to understand their decision-making mechanisms, which is not conducive to model structure optimization or security enhancement, and also greatly increases training costs. Focusing on the task of intelligent image interpretation, this paper comprehensively reviews and compares research progress on deep learning interpretability. First, we group current interpretability analysis methods into six categories: activation maximization, surrogate models, attribution methods, perturbation-based methods, class activation map-based methods, and example-based methods, and review the principles, focus, advantages, and disadvantages of existing work in each category. Second, we introduce eight evaluation metrics that measure the reliability of the explanations produced by these interpretability analysis methods, and survey the currently available open-source libraries for deep learning interpretability analysis. Using these libraries, we verify the applicability of current deep learning interpretability analysis methods to the interpretation of remote sensing images. The experimental results show that current interpretability methods are applicable to the analysis of remote sensing interpretation, but have certain limitations. Finally, we summarize the open challenges of applying existing interpretability algorithms to remote sensing data analysis, and discuss prospects for designing interpretability analysis methods tailored to remote sensing images.
We hope this review can promote research on interpretability methods for remote sensing image interpretation, so as to provide reliable theoretical support and algorithm design guidance for applying deep learning technology to remote sensing image interpretation tasks.
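To make one of the six reviewed categories concrete, the following is a minimal sketch of a perturbation-based interpretability method (occlusion analysis): an occluding patch is slid over the input, and the drop in the model's output score at each position is recorded as an attribution heatmap. The `toy_model` stand-in and all parameter names here are illustrative assumptions, not part of the paper; real analyses would occlude inputs to a trained network (e.g. a remote sensing scene classifier).

```python
import numpy as np

def occlusion_map(image, model, patch=4, baseline=0.0):
    """Perturbation-based attribution: occlude each patch-sized region of
    the image with a baseline value and record how much the model's score
    drops. Large drops mark regions the model relies on."""
    h, w = image.shape
    base_score = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Score drop caused by hiding this region.
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Toy stand-in for a trained classifier: all class "evidence" lives in
# the top-left 4x4 corner of the image.
toy_model = lambda img: float(img[:4, :4].sum())

img = np.ones((8, 8))
heat = occlusion_map(img, toy_model, patch=4)
print(heat)  # only the top-left cell shows a score drop
```

The same sliding-occlusion idea underlies the perturbation modules of open-source interpretability libraries; on remote sensing imagery, the choice of patch size and baseline value strongly affects the resulting explanation.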

Key words: artificial intelligence, deep learning, remote sensing interpretation, interpretability, review
