[1] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA, USA: IEEE, 2015: 3431-3440.
[2] 李道纪, 郭海涛, 卢俊, 等. 遥感影像地物分类多注意力融和U型网络法[J]. 测绘学报, 2020, 49(8): 1051-1064. DOI: 10.11947/j.AGCS.2020.20190407.
LI Daoji, GUO Haitao, LU Jun, et al. A remote sensing image classification procedure based on multilevel attention fusion U-Net[J]. Acta Geodaetica et Cartographica Sinica, 2020, 49(8): 1051-1064. DOI: 10.11947/j.AGCS.2020.20190407.
[3] HUANG Bo, ZHAO Bei, SONG Yimeng. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery[J]. Remote Sensing of Environment, 2018, 214: 73-86.
[4] 党宇, 张继贤, 邓喀中, 等. 基于深度学习AlexNet的遥感影像地表覆盖分类评价研究[J]. 地球信息科学学报, 2017, 19(11): 1530-1537.
DANG Yu, ZHANG Jixian, DENG Kazhong, et al. Study on the evaluation of land cover classification using remote sensing images based on AlexNet[J]. Journal of Geo-Information Science, 2017, 19(11): 1530-1537.
[5] 杨军, 于茜子. 结合空洞卷积的FuseNet变体网络高分辨率遥感影像语义分割[J]. 武汉大学学报(信息科学版), 2022, 47(7): 1071-1080.
YANG Jun, YU Xizi. Semantic segmentation of high-resolution remote sensing images based on improved FuseNet combined with atrous convolution[J]. Geomatics and Information Science of Wuhan University, 2022, 47(7): 1071-1080.
[6] HAZIRBAS C, MA L, DOMOKOS C, et al. FuseNet: incorporating depth into semantic segmentation via fusion-based CNN architecture[C]//Proceedings of 2016 Asian Conference on Computer Vision. Cham, Switzerland: Springer, 2016: 213-228.
[7] AUDEBERT N, SAUX B L, LEFÈVRE S. Semantic segmentation of earth observation data using multimodal and multi-scale deep networks[C]//Proceedings of 2016 Asian Conference on Computer Vision. Cham, Switzerland: Springer, 2016: 180-196.
[8] ZUO Zongcheng, ZHANG Wen, ZHANG Dongying. A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields[J]. Journal of Geodesy and Geoinformation Science, 2020, 3(3): 39-49.
[9] INTERDONATO R, IENCO D, GAETANO R, et al. DuPLO: a DUal view point deep learning architecture for time series classification[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 149: 91-104.
[10] HU Ting, HUANG Xin, LI Jiayi, et al. A semi-supervised approach towards land cover mapping with Sentinel-2 dense time-series imagery[C]//Proceedings of 2019 IEEE International Geoscience and Remote Sensing Symposium. Yokohama, Japan: IEEE, 2019: 2423-2426.
[11] BREIMAN L. Random forests[J]. Machine Learning, 2001, 45(1): 5-32.
[12] 冯文卿, 眭海刚, 涂继辉, 等. 高分辨率遥感影像的随机森林变化检测方法[J]. 测绘学报, 2017, 46(11): 1880-1890. DOI: 10.11947/j.AGCS.2017.20170074.
FENG Wenqing, SUI Haigang, TU Jihui, et al. Change detection method for high resolution remote sensing images using random forest[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(11): 1880-1890. DOI: 10.11947/j.AGCS.2017.20170074.
[13] HASTIE T, ROSSET S, ZHU J, et al. Multi-class AdaBoost[J]. Statistics and Its Interface, 2009, 2(3): 349-360.
[14] CAI T T, ZHOU W X. Matrix completion via max-norm constrained optimization[J]. Electronic Journal of Statistics, 2016, 10(1): 1493-1525.
[15] SUN He, REN Jinchang, ZHAO Huimin, et al. Superpixel based feature specific sparse representation for spectral-spatial classification of hyperspectral images[J]. Remote Sensing, 2019, 11(5): 536.
[16] HONG Danfeng, YOKOYA N, XIA Guisong, et al. X-ModalNet: a semi-supervised deep cross-modal network for classification of remote sensing data[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 167: 12-23.
[17] ZHANG B, ZHANG Y, LI Y, et al. Semi-supervised semantic segmentation network via learning consistency for remote sensing land-cover classification[J]. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, 2: 609-615.
[18] 耿艳磊, 陶超, 沈靖, 等. 高分辨率遥感影像语义分割的半监督全卷积网络法[J]. 测绘学报, 2020, 49(4): 499-508. DOI: 10.11947/j.AGCS.2020.20190044.
GENG Yanlei, TAO Chao, SHEN Jing, et al. High-resolution remote sensing image semantic segmentation based on semi-supervised full convolution network method[J]. Acta Geodaetica et Cartographica Sinica, 2020, 49(4): 499-508. DOI: 10.11947/j.AGCS.2020.20190044.
[19] OUALI Y, HUDELOT C, TAMI M. Semi-supervised semantic segmentation with cross-consistency training[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 12671-12681.
[20] ZHOU Jingchun, HAO Mingliang, ZHANG Dehuan, et al. Fusion PSPnet image segmentation based method for multi-focus image fusion[J]. IEEE Photonics Journal, 2019, 11(6): 1-12.
[21] BOGUSZEWSKI A, BATORSKI D, ZIEMBA-JANKOWSKA N, et al. LandCover.ai: dataset for automatic mapping of buildings, woodlands, water and roads from aerial imagery[C]//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Nashville, TN, USA: IEEE, 2021: 1102-1110.
[22] GONG Han, COSKER D. Interactive removal and ground truth for difficult shadow scenes[J]. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 2016, 33(9): 1798-1811.
[23] CHEN L C, ZHU Yukun, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer, 2018: 833-851.
[24] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer, 2015: 234-241.
[25] 赵斌, 王春平, 付强, 等. 基于深度注意力机制的多尺度红外行人检测[J]. 光学学报, 2020, 40(5): 47-58.
ZHAO Bin, WANG Chunping, FU Qiang, et al. Multi-scale infrared pedestrian detection based on deep attention mechanism[J]. Acta Optica Sinica, 2020, 40(5): 47-58.
[26] BLAGA B C Z, NEDEVSCHI S. A critical evaluation of aerial datasets for semantic segmentation[C]//Proceedings of the 16th IEEE International Conference on Intelligent Computer Communication and Processing (ICCP). Cluj-Napoca, Romania: IEEE, 2020: 3-5.