
Table of Contents

    06 January 2025, Volume 53 Issue 12
    Intelligent Image Processing
    Road extraction networks fusing multiscale and edge features
    Genyun SUN, Chao SUN, Aizhu ZHANG
    2024, 53(12):  2233-2243.  doi:10.11947/j.AGCS.2024.20230291

    Extracting roads from remote sensing images is of great significance to urban development. However, factors such as the variable scale of roads and frequent occlusion lead to problems such as missed road detections and incomplete edges. To address these problems, this paper proposes MeD-Net, a road extraction network for remote sensing images that integrates multi-scale features and focuses on edge details. MeD-Net consists of two parts: road segmentation and edge extraction. The road segmentation network uses a multi-scale deep feature processing (MDFP) module to extract multi-scale features that account for both global and local information, and applies group normalization after convolution to optimize model training. The edge extraction network uses a detail-guided fusion algorithm to enhance the detail information of deep edge features and an attention mechanism for feature fusion. To verify the algorithm's performance, experiments are conducted on the Massachusetts road dataset and a GF-2 road dataset covering the Qingdao area. The experiments show that MeD-Net achieves the highest accuracy on both datasets in terms of intersection-over-union (IoU) and F1 score, and is able to extract roads at different scales while preserving road edges more completely.
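
    The MDFP module is not specified in detail here; the following is a minimal PyTorch sketch of the general idea the abstract describes (parallel multi-scale convolutions with group normalization after each convolution). The branch dilations, group count and channel sizes are chosen for illustration only, not taken from the paper.

        import torch
        import torch.nn as nn

        class MultiScaleBlock(nn.Module):
            """Illustrative multi-scale feature block: parallel dilated 3x3
            convolutions, each followed by group normalization, then a 1x1
            fusion convolution. A sketch of the idea, not the authors' MDFP."""
            def __init__(self, in_ch=64, out_ch=64, dilations=(1, 2, 4), groups=8):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                        nn.GroupNorm(groups, out_ch),
                        nn.ReLU(inplace=True),
                    )
                    for d in dilations
                ])
                self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

            def forward(self, x):
                return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

        # quick shape check on a dummy feature map
        y = MultiScaleBlock()(torch.randn(1, 64, 128, 128))   # -> (1, 64, 128, 128)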

    A high-resolution remote sensing image change detection method integrating dense connections and self-attention mechanisms
    Shiyan PANG, Jingjing HAO, Zhiqi ZUO, Jingjing LAN, Xiangyun HU
    2024, 53(12):  2244-2253.  doi:10.11947/j.AGCS.2024.20230454

    Remote sensing image change detection is an important task in remote sensing image analysis, widely used in urban dynamic monitoring, geographic information updating, natural disaster monitoring, illegal building investigation, military target strike effect analysis, and land and resources surveys. As a pixel-level prediction task, current change detection methods face two prominent problems: first, self-attention between arbitrary pixel pairs is computationally inefficient, and the long-range context information in remote sensing images is insufficiently utilized; second, current methods focus on extracting deep change features while ignoring shallow information that contains high-resolution, fine-grained features. To address the first problem, a Transformer is used to perform context modeling on the extracted bitemporal image features to improve the quality of the deepest change features. To keep the Transformer efficient, the proposed method converts the images into sparse tokens, significantly reducing the number of tokens. For the second problem, the proposed method uses dense skip connections to retain high resolution in shallow change features. Three publicly available datasets were used for experiments. Extensive experiments show that the IoU metric of the proposed method reaches 85.44%, 84.15% and 94.61% on the three datasets, respectively, outperforming the state-of-the-art change detection methods used for comparison.
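
    As a rough illustration of the sparse-token idea described above (reducing the quadratic cost of self-attention by keeping only a subset of spatial positions as tokens), the snippet below selects the k most salient positions by feature norm. The saliency criterion and the value of k are assumptions for the sketch, not the paper's design.

        import torch

        def sparse_tokens(feat: torch.Tensor, k: int = 256) -> torch.Tensor:
            """Keep the k most salient spatial positions (by feature L2 norm)
            of a (B, C, H, W) change-feature map as Transformer tokens."""
            b, c, h, w = feat.shape
            tokens = feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
            scores = tokens.norm(dim=-1)                  # saliency per position
            idx = scores.topk(min(k, h * w), dim=1).indices
            return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, c))

        tok = sparse_tokens(torch.randn(2, 32, 64, 64))   # -> (2, 256, 32)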

    Geodesy and Navigation
    Phase clock/bias estimation for GNSS all-frequency undifferenced ambiguity resolution
    Jianghui GENG, Jihang LIN, Qiyuan ZHANG, Qiang WEN, Jing ZENG, Biao JIN
    2024, 53(12):  2254-2267.  doi:10.11947/j.AGCS.2024.20230060

    Undifferenced ambiguity resolution is a crucial technology for GNSS precise point positioning (PPP). The conventional PPP method typically uses ionosphere-free combination observations on specific legacy frequencies (e.g., GPS L1/L2). The satellite clock and phase bias products required for ambiguity resolution (AR) are therefore bound to these observation models and signal frequencies, which significantly limits the user's choice. To meet users' demand for high-precision positioning with a free choice of signal frequencies, this paper proposes an estimation method for “all-frequency phase clock/bias” products, which applies undifferenced integer ambiguity constraints in the network processing to estimate phase clock and observable-specific signal bias (OSB) products. While remaining consistent with the datum of IGS (International GNSS Service) legacy clock and bias products, these products enable PPP ambiguity resolution for arbitrary observation models and frequency choices. Both a 31-day test on 197 IGS MGEX (multi-GNSS experiment) stations and a kinematic test on car-borne data demonstrate that the GPS/Galileo/BDS all-frequency phase clock/bias products proposed in this paper maintain relatively consistent ambiguity resolution efficiency and static/kinematic positioning accuracy over any frequency combination in PPP-AR. In particular, we emphasize that the undifferenced integer ambiguity constraint applied in the network processing plays a vital role in guaranteeing the strict coupling of the satellite phase clock/bias products, which is essential for all-frequency undifferenced ambiguity resolution. The all-frequency phase clock/bias products from Wuhan University (ftp://igs.gnsswhu.cn/pub/whu/phasebias/) have been released as a rapid routine service since 2023.

    BDS-3/GNSS satellite ultra-rapid clock offset estimation model aided by onboard clock state solutions
    Chao HU, Qianxin WANG
    2024, 53(12):  2268-2281.  doi:10.11947/j.AGCS.2024.20240112

    GNSS orbits and clock offsets are prerequisites for high-performance positioning, navigation and timing (PNT) services. To overcome well-known problems in clock offset estimation within multi-GNSS ultra-rapid orbit determination, such as observation quality and model configuration, an improved ultra-rapid clock offset estimation model is proposed that takes BDS-3/GNSS satellite clock states into account. Firstly, the observation equation for clock offset estimation is constructed by introducing clock offset velocity and acceleration terms, based on fixed ultra-rapid orbits and station positions. Secondly, time-differenced carrier phase (TDCP) and singular value decomposition algorithms are used to build the quality control and epoch-transition models for clock offset estimation. Thirdly, by introducing a clock parameter state transition equation and accounting for the frequency stability of BDS-3/GNSS onboard clocks, single-epoch clock offset estimates are obtained. With the proposed BDS-3/GNSS ultra-rapid clock offset estimation method, an accuracy improvement of at least 46.9% is achieved, and the instantaneous clock offset within an epoch interval can be derived directly from the clock state values. In addition, compared with the traditionally issued BDS-3/GNSS ultra-rapid clock offsets, four-system kinematic PPP positioning accuracy and convergence are improved by 1.7%, 6.0% and 31.2%, and by 44.9%, 33.3% and 38.9%, in the E, N and U directions, respectively. The accuracy of the predicted clock offset is also improved, reducing four-system static PPP positioning residuals and convergence time by 6.3%, 13.5% and 11.3%, and by 14.5%, 1.6% and 12.4%, in the E, N and U directions, respectively. Therefore, the quality of BDS-3/GNSS satellite ultra-rapid clock offsets is significantly improved by the proposed estimation method, which can also improve multi-GNSS real-time and near-real-time services.
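
    The clock velocity and acceleration terms mentioned above correspond to a polynomial clock state model; a generic form of its state transition (an illustration of the standard model, not necessarily the authors' exact formulation) is

        \begin{bmatrix} \delta t \\ \dot{\delta t} \\ \ddot{\delta t} \end{bmatrix}_{k+1}
        =
        \begin{bmatrix} 1 & \Delta t & \tfrac{1}{2}\Delta t^{2} \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix}
        \begin{bmatrix} \delta t \\ \dot{\delta t} \\ \ddot{\delta t} \end{bmatrix}_{k}
        + \mathbf{w}_{k},

    where \delta t, \dot{\delta t} and \ddot{\delta t} are the clock offset, drift and drift rate, \Delta t is the epoch interval, and the process noise \mathbf{w}_{k} is tuned to the onboard clock's frequency stability.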

    A refined non-uniform discretization GNSS water vapor tomography method considering water vapor distributions
    Wenyuan ZHANG, Mingxin QI, Shubi ZHANG
    2024, 53(12):  2282-2294.  doi:10.11947/j.AGCS.2024.20220534

    The GNSS water vapor tomography technique has become a crucial tool for retrieving atmospheric water vapor distributions with high spatiotemporal resolution, owing to its high precision and all-weather availability. Existing GNSS water vapor tomography methods divide the three-dimensional (3D) tomography area with a uniform discretization scheme. However, because of the spatial heterogeneity of atmospheric water vapor, such a scheme does not follow the actual vertical distribution of atmospheric water vapor. Based on the decrease of atmospheric water vapor with altitude, an improved non-uniform discretization GNSS water vapor tomography method that considers water vapor distributions is proposed. The method analyzes the vertical decrease of water vapor content and constructs a vertically non-uniform stratification scheme based on the change rate of precipitable water vapor. Furthermore, a horizontally non-uniform discretization scheme is set up for different altitude layers, forming a non-uniform discretization framework in which voxel resolution decreases from the surface to the top of the tomography area. Experiments are conducted using actual GNSS measurements, radiosonde data and ERA5 reanalysis for the Hong Kong region in July 2017. Taking radiosonde water vapor profiles as reference, the root mean square errors (RMSE) of the tomography results obtained with the non-uniform discretization approach are reduced by 21.8%, 20.9% and 20.5% relative to three traditional schemes, respectively. Compared with ERA5 data, the RMSE values of the proposed method's tomography results are reduced by 15.4%, 11.4% and 12.6%, respectively. Additionally, in the near-surface tomographic region below 2 km, the accuracy of the proposed method is significantly superior to that of the traditional method, which suggests that the proposed method can provide higher-accuracy, higher-resolution near-surface 3D atmospheric water vapor products for rainfall forecasting.
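
    A minimal sketch of vertically non-uniform stratification driven by the decrease of water vapor with altitude: layer boundaries are placed so that each layer holds an equal share of an exponentially decaying water vapor column. The scale height, layer count and tomography top are assumed values for illustration, not the paper's settings.

        import numpy as np

        def nonuniform_layers(h_top=10.0, n_layers=10, scale_height=2.0):
            """Return layer boundaries (km) that are thin near the surface,
            where water vapor is dense, and thick aloft, assuming an
            exponential water vapor profile with the given scale height."""
            total = 1.0 - np.exp(-h_top / scale_height)
            fractions = np.linspace(0.0, total, n_layers + 1)
            return -scale_height * np.log(1.0 - fractions)

        print(np.round(nonuniform_layers(), 2))  # boundaries from 0 km up to 10 km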

    Ground-based GNSS-IR ice period detection considering residual signal-to-noise ratio characteristics
    Minfeng SONG, Xiufeng HE, Xiaolei WANG
    2024, 53(12):  2295-2304.  doi:10.11947/j.AGCS.2024.20230338

    GNSS interferometric reflectometry (GNSS-IR) is a promising technique for retrieving land and ocean surface parameters due to its cost-effectiveness and high sampling resolution. Despite this potential, the application of GNSS-IR to ice detection during freezing periods has been largely unexplored, with existing methods hindered by surface property and signal variation effects. This paper addresses these challenges by examining the differences between signals reflected from ice and from water through modeling and simulation based on dielectric constants and surface roughness. We introduce a novel ice detection method using a power factor parameter derived from the envelope integration of the residual signal-to-noise ratio (SNR). Validation experiments using data from the Shuangwangcheng Reservoir Dam GNSS station show that the proposed method is sensitive to surface dielectric properties, roughness, frequency, and ice thickness. The power factor method demonstrates effectiveness and robustness across BDS and GPS data for all frequency bands, offering a reliable approach for ice detection that enhances the monitoring capability of GNSS reflectometry.
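
    A minimal sketch of the “envelope integration of residual SNR” idea: detrend an SNR arc, take the envelope of the residual oscillation with a Hilbert transform, and integrate it over sin(elevation). The detrending order and the dB conversion are assumptions for the sketch, not the paper's exact processing.

        import numpy as np
        from scipy.signal import hilbert

        def power_factor(sin_elev, snr_db, poly_order=2):
            """Integrate the envelope of the residual SNR oscillation over
            sin(elevation) to obtain a single power-factor value per arc."""
            snr_lin = 10.0 ** (np.asarray(snr_db) / 20.0)               # dB -> linear
            trend = np.polyval(np.polyfit(sin_elev, snr_lin, poly_order), sin_elev)
            residual = snr_lin - trend                                  # multipath part
            envelope = np.abs(hilbert(residual))                        # amplitude envelope
            return float(np.sum(0.5 * (envelope[1:] + envelope[:-1]) * np.diff(sin_elev)))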

    Analysis of orbit determination for inclined geosynchronous SAR using spaceborne GNSS data
    Peng YANG, Zhenxing WANG, Xiaobin TIAN, Shigui ZHENG, Yong HUANG
    2024, 53(12):  2305-2315.  doi:10.11947/j.AGCS.2024.20230176

    With the widespread application of high-orbit satellites, the demand for their orbital accuracy is increasing. Based on spaceborne GNSS (global navigation satellite system) data, we simulate an inclined geosynchronous SAR satellite and analyze its orbit determination accuracy. The results show that a high-sensitivity onboard receiver can track about 20 GPS and BDS satellites, with PDOP (position dilution of precision) values ranging from 1 to 2. Using the pseudorange/carrier phase measurements of the spaceborne GNSS data, the orbit determination accuracy of the inclined geosynchronous SAR satellite can be better than 1 m, and the fluctuation of the high-order Legendre error is less than 10 mm. The perturbation of solar radiation pressure has a significant impact on the orbit: a model error on the order of 10⁻⁸ m·s⁻² can reduce the orbit determination accuracy by one order of magnitude.

    Photogrammetry and Remote Sensing
    An algorithm for building extraction from airborne LiDAR data under adaptive local spatial-spectral consistency
    Liying WANG, Kangli ZHANG, Xinao LI, Ze YOU, Yong FENG
    2024, 53(12):  2349-2360.  doi:10.11947/j.AGCS.2024.20230517

    Existing studies use the global statistical characteristics of the laser reflection intensity of buildings to aid building extraction from airborne LiDAR data, but this approach cannot comprehensively and accurately extract buildings with different spectra in large-scale, complex urban scenes. Therefore, a building extraction method for airborne LiDAR data based on adaptive local spatial-spectral consistency is developed. The proposed method first converts the raw airborne LiDAR data into a 3D image. Then, seeds are selected according to building characteristics of elevation jumps and nearly straight edges. Subsequently, the connected components that are spatially and spectrally consistent with the seeds are labeled as building roofs, where spectral consistency is defined by the statistical intensity properties of each individual building. Finally, building facades are extracted by combining the spatial constraint of the extracted roofs with local intensity consistency constraints. By adapting to the local spectrum of each individual building, this method solves the problem of accurately extracting buildings that do not conform to global spectral statistics, improves the value of point cloud spectral information, and thus broadens its application scenarios. Three airborne LiDAR datasets of urban scenes with different complexities, provided by the International Society for Photogrammetry and Remote Sensing, are used to test the feasibility and effectiveness of the proposed method. The experimental results show that the proposed method extracts buildings well in scenes of different complexities. The average completeness, accuracy and quality of the building extraction results are 99.0%, 98.0% and 96.8%, respectively, clearly better than traditional building extraction methods that rely on global spectral statistics.
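
    The following is a minimal sketch of region growing under local spatial-spectral consistency as described above: starting from a seed, neighbours join a roof segment only if their height is close to the region and their intensity falls within the region's own intensity statistics. The thresholds, intensity floor and 4-neighbourhood are assumptions for the sketch, not the paper's parameters.

        import numpy as np
        from collections import deque

        def grow_roof(intensity, height, seed, dz=0.2, k=2.0, min_sigma=5.0):
            """Grow a roof segment from a seed pixel on gridded LiDAR data,
            using height continuity plus a local intensity consistency test
            (min_sigma is an assumed floor on the intensity spread)."""
            mask = np.zeros(intensity.shape, dtype=bool)
            mask[seed] = True
            vals = [float(intensity[seed])]
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1] and not mask[nr, nc]:
                        mu = np.mean(vals)
                        sigma = max(np.std(vals), min_sigma)
                        if (abs(height[nr, nc] - height[r, c]) < dz
                                and abs(intensity[nr, nc] - mu) < k * sigma):
                            mask[nr, nc] = True
                            vals.append(float(intensity[nr, nc]))
                            queue.append((nr, nc))
            return mask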

    Cropping intensity extraction combining optical and SAR time series in cloudy and rainy areas of southern China
    Qihao CHEN, Guangchao LI, Wenjing CAO, Xiuguo LIU
    2024, 53(12):  2361-2374.  doi:10.11947/j.AGCS.2024.20230497

    Timely and accurate acquisition of the spatiotemporal distribution of cropping intensity holds significant reference value for adjusting agricultural production layouts and making grain production decisions. Current research on cropping intensity extraction relies primarily on optical data and phenological knowledge. However, in the cloudy and rainy regions of southern China, critical phenological parameters for multi-season cropland are often missing, vegetation with phenological characteristics similar to crops is difficult to exclude, and salt-and-pepper noise is obvious in pixel-level results. This paper introduces a method for extracting cropping intensity from time-series optical and SAR data by integrating optical phenological parameters, SAR temporal features, and superpixel optimization. Initially, optical NDVI and LSWI temporal curves are used to obtain the number and duration of growth periods. Subsequently, SAR temporal features are employed to identify early-season signals of transplanting and irrigation. Finally, spatial contextual information is used for superpixel optimization of the cropping intensity results. The effectiveness of the proposed method is validated using time-series Sentinel-1/2 data for Honghu city in 2020—2021, yielding an overall accuracy of 92.02% and a Kappa coefficient of 0.84. The results indicate that incorporating growth period duration effectively mitigates the influence of mixed vegetation, SAR temporal features accurately classify double-season rice, and superpixel optimization enhances the accuracy and completeness of the cropping intensity results. The method accurately captures cropping intensity distribution in cloudy and rainy regions with complex cropping patterns.
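
    As a rough illustration of deriving the number of growth periods from an optical vegetation index curve (the first step described above), the snippet below counts sufficiently high, well-separated peaks in a smoothed NDVI time series; the thresholds are assumptions, not the paper's calibrated values.

        import numpy as np
        from scipy.signal import find_peaks

        def count_growth_periods(ndvi, min_peak=0.5, min_separation=6):
            """Count growth cycles as prominent NDVI peaks; with ~10-day
            composites, min_separation=6 keeps peaks about two months apart."""
            peaks, _ = find_peaks(np.asarray(ndvi), height=min_peak, distance=min_separation)
            return len(peaks)

        # e.g. a double-cropping pixel with two NDVI maxima in one season
        print(count_growth_periods(
            [0.2, 0.3, 0.6, 0.7, 0.4, 0.2, 0.2, 0.3, 0.3, 0.65, 0.75, 0.5, 0.3, 0.2]))  # -> 2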

    Flood change detection method using optimized similarity measurement function with temporal-spatial-polarized SAR information
    Jinqi ZHAO, Yuxuan LI, Zirong LIU, Qing AN, Shiyu SONG, Yufen NIU
    2024, 53(12):  2375-2390.  doi:10.11947/j.AGCS.2024.20230355

    Thanks to its all-weather, day-and-night observation capability, synthetic aperture radar (SAR) enables flood monitoring in harsh environments. Current flood change detection methods are easily affected by changes in other ground objects and are inadequately designed for SAR data characteristics. To solve these problems, a novel change detection method using temporal characteristics and flood distribution characteristics is proposed. The proposed method integrates multi-temporal and multi-polarized information to construct temporal-spatial-polarized SAR data. An improved K-means clustering approach for the constructed data reduces the accumulated errors of clustering each epoch separately. In addition, considering the distribution characteristics of the temporal-spatial-polarized SAR data, cross entropy is used to optimize the similarity measurement function so as to accurately distinguish water body changes caused by flooding. Finally, multi-temporal fully polarimetric Radarsat-2 data from Wuhan and dual-polarimetric Sentinel-1 data from Huangmei County in Huanggang are used to validate the effectiveness of the proposed method. The false alarm rate (FA), total error rate (TE), overall accuracy (OA) and Kappa of our method in the Wuhan experiments are 5.06%, 5.66%, 94.34%, 0.69 and 1.61%, 2.61%, 97.39%, 0.65, respectively, which highlights the advantages of the proposed method. In Huangmei County, the TE, OA and Kappa of the experimental results are the best among the compared methods, at 1.67%, 98.33% and 0.73. Our method effectively mitigates the effect of changes in other land features on the detection of water body changes, offers a swift response capability, and suppresses the influence of urban changes and mountain shadows in flood detection.
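
    A minimal sketch of a cross-entropy-based similarity measure between the local intensity distributions of two SAR acquisitions (larger values suggest stronger change); the histogram binning is an assumption for the sketch, not the paper's optimized measurement function.

        import numpy as np

        def cross_entropy_change(patch_t1, patch_t2, bins=32, eps=1e-12):
            """Compare two co-registered SAR patches via the cross entropy of
            their normalized intensity histograms (shared bin edges)."""
            lo = min(patch_t1.min(), patch_t2.min())
            hi = max(patch_t1.max(), patch_t2.max())
            p, _ = np.histogram(patch_t1, bins=bins, range=(lo, hi))
            q, _ = np.histogram(patch_t2, bins=bins, range=(lo, hi))
            p = p / (p.sum() + eps)
            q = q / (q.sum() + eps)
            return float(-np.sum(p * np.log(q + eps)))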

    Deep learning based multi-view dense matching with joint depth and surface normal estimation
    Jin LIU, Shunping JI
    2024, 53(12):  2391-2403.  doi:10.11947/j.AGCS.2024.20230579

    In recent years, deep learning-based multi-view stereo matching methods have demonstrated significant potential in 3D reconstruction tasks. However, they still exhibit limitations in recovering fine geometric details of scenes. In some traditional multi-view stereo matching methods, surface normal often serves as a crucial geometric constraint to assist in finer depth inference. Nevertheless, the surface normal information, which encapsulates the geometric information of the scene, has not been fully utilized in modern learning-based methods. This paper introduces a deep learning-based joint depth and surface normal estimation method for multi-view dense matching and 3D scene reconstruction task. The proposed method employs a multi-stage pyramid structure to simultaneously infer depth and surface normal from multi-view images and promote their joint optimization. It consists of a feature extraction module, a normal-assisted depth estimation module, a depth-assisted normal estimation module, and a depth-normal joint optimization module. Specifically, the depth estimation module constructs a geometry-aware cost volume by integrating surface normal information for fine depth estimation. The normal estimation module utilizes depth constraints to build a local cost volume for inferring fine-grained normal maps. The joint optimization module further enhances the geometric consistency between depth and normal estimation. Experimental results on the WHU-OMVS dataset demonstrate that the proposed method performs exceptionally well in both depth and surface normal estimation, outperforming existing methods. Furthermore, the 3D reconstruction results on two different datasets indicate that the proposed method effectively recovers the geometric structures of both local high-curvature areas and global planar regions, contributing to well-structured and high-quality 3D scene models.
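
    The coupling between depth and surface normals described above relies on a simple geometric relation; the sketch below derives per-pixel normals from depth gradients by treating the depth map as a height field (a simplification used for illustration, not the paper's network modules).

        import numpy as np

        def normals_from_depth(depth):
            """Derive unit surface normals from a depth map's spatial gradients;
            such normals can in turn constrain neighbouring depth estimates."""
            dzdy, dzdx = np.gradient(depth)                       # rows (y), cols (x)
            n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
            return n / np.linalg.norm(n, axis=2, keepdims=True)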

    Hyperspectral remote sensing image scene classification method based on deep manifold distillation network
    Quanyi ZHAO, Fujian ZHENG, Bo XIA, Zhengying LI, Hong HUANG
    2024, 53(12):  2404-2415.  doi:10.11947/j.AGCS.2024.20230373

    Most current scene classification tasks are based on high-resolution remote sensing images, and the lack of spectral information limits their discrimination ability. Hyperspectral remote sensing images, by contrast, integrate spatial and spectral information, which gives them unique advantages for scene classification. To address the complex land cover distribution and the high dimensionality and redundancy of hyperspectral images, this paper proposes a hyperspectral scene classification manifold distillation network (HSCMDNet) to improve the performance of hyperspectral remote sensing scene classification. To handle the complex land cover distribution, HSCMDNet employs a Swin Transformer as the teacher network to capture long-range dependencies in hyperspectral images and the relationships between different bands. A manifold distillation loss is then designed between the middle layers of the teacher network and the ResNet-18 student network. By matching the middle-layer output features of the student and teacher models in a manifold space, the knowledge of the teacher model is transferred effectively to the lightweight student model, alleviating the high computational complexity caused by high-dimensional hyperspectral data. On the Orbita hyperspectral image scene classification dataset (OHID-SC) and the natural scene classification dataset with Tiangong-2 remotely sensed imagery (NaSC-TG2), the best classification accuracies of the proposed HSCMDNet reach 93.60% and 94.55%, respectively.
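
    As a rough illustration of matching mid-layer features of a student and a teacher in a relational, manifold-style sense, the loss below compares the pairwise cosine-similarity structure of the two feature batches rather than the raw features; this is a generic sketch, not the authors' manifold distillation loss.

        import torch
        import torch.nn.functional as F

        def manifold_distill_loss(f_student, f_teacher):
            """Match the batch-wise cosine-similarity matrices of student and
            teacher mid-layer features, so the student inherits the teacher's
            feature-space geometry even when the feature dimensions differ."""
            s = F.normalize(f_student.flatten(1), dim=1)
            t = F.normalize(f_teacher.flatten(1), dim=1)
            return F.mse_loss(s @ s.t(), t @ t.t())

        loss = manifold_distill_loss(torch.randn(8, 256), torch.randn(8, 1024))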

    Summary of PhD Thesis
    Rainstorm spatio-temporal process prediction via prior gated convolutional network
    Jie LIU
    2024, 53(12):  2416-2416.  doi:10.11947/j.AGCS.2024.20230407
    Study on error analysis and fusion-based improvement of Chinese Fengyun satellite precipitation retrieval
    Hao WU
    2024, 53(12):  2417-2417.  doi:10.11947/j.AGCS.2024.20230409
    Research on deep learning for artificial object detection in aerial imagery
    Nan MO
    2024, 53(12):  2418-2418.  doi:10.11947/j.AGCS.2024.20230423
    Algorithm research on BDS augmentation positioning based on undifferenced corrections
    Jun LI
    2024, 53(12):  2419-2419.  doi:10.11947/j.AGCS.2024.20230424
    Multi-feature-based hierarchical detection of transmission towers in high-resolution SAR images
    Jianan LI
    2024, 53(12):  2420-2420.  doi:10.11947/j.AGCS.2024.20230450
    Research on key techniques and applications of space object cataloging and orbit determination based on radar data
    Lei LIU
    2024, 53(12):  2421-2421.  doi:10.11947/j.AGCS.2024.20230452