
Table of Contents

    10 October 2025, Volume 54 Issue 9
    Review
    Satellite gravity technology oriented towards data-scenario-model driven approach: developments, challenges and outlook
    Jiancheng LI, Yunlong WU, Yibing YAO, Zhicai LUO
    2025, 54(9):  1537-1560.  doi:10.11947/j.AGCS.2025.20250274

    Satellite gravimetry, as a major breakthrough in modern geodesy, has demonstrated strong capabilities in capturing mass variations in the Earth's surface and subsurface layers. It has been widely applied in critical fields such as geodetic surveying, hydrological cycle monitoring, glacier mass balance, sea level change, and tectonic deformation. This study systematically reviews the evolution of gravity satellite missions from CHAMP and GRACE to GRACE-FO and Chinese gravity satellite programs, with a particular focus on next-generation satellite gravimetry missions and emerging trends in quantum-based gravity satellite concepts. Based on this, the study comprehensively summarizes the data processing pipeline from Level-0 to Level-3, key inversion methodologies, and science product development. Application cases are presented across hydrology, cryosphere, oceanography, seismology, and geoid refinement. Furthermore, major challenges in China's current gravimetry application system are identified, including data quality limitations, multi-source signal separation, lack of interpretability in AI-based models, and barriers to interdisciplinary integration. Finally, the study calls for synergistic innovation driven by “data-scenario-model” integration to support multi-satellite networks and high-precision modeling in service of national strategic needs and global sustainable development.

    Geodesy and Navigation
    Theoretical foundation of gravity field and improvement of classical concepts for geodetic height datum unified in the terrestrial reference system
    Chuanyin ZHANG, Tao JIANG, Baogui KE
    2025, 54(9):  1561-1571.  doi:10.11947/j.AGCS.2025.20250102

The current theories of the geodetic height datum were mainly established during the era of traditional terrestrial geodesy and have difficulty adapting to the rapid development of Earth gravity field modeling and satellite geodesy. This paper strictly follows the principles of geometric and physical geodesy and the requirements of uniqueness and precise measurability for geodetic elements and concepts. By investigating the theoretical foundations and implementation principles needed to unify the elements of physical geodesy into the terrestrial reference system, it concisely and clearly derives the theoretical and logical relationships among the height datum, the terrestrial reference system, and the gravity field, and then re-examines several classical concepts of the height datum. The paper presents the following main results and their specific geodetic evidence. ①It is demonstrated that, whether the orthometric height, normal height, or geopotential number system is used, the starting datum surface for heights is the geoid if its deformation is ignored, and it is pointed out that the analytical orthometric height is better suited to the purpose of the height datum than other types of orthometric height. ②The gravity-field theoretical foundation for a geodetic height datum unified in the terrestrial reference system is improved, and the geodetic datum conditions and technical implementation principles for GNSS-based replacement of leveling are derived. ③A theoretical method for positioning the Earth's center of mass and figure pole based on space geometric and physical geodesy is derived; it relies neither on geophysical assumptions or geodynamic conventions nor on the theory of Earth rotation and its dynamics, but realizes the positioning and orientation of the terrestrial reference system scientifically from geodetic theory alone.
④It is demonstrated that surfaces of equal orthometric height are parallel to the geoid and that the normal gravity field can be fully determined with only three parameters, which effectively resolves the difficulty of coordinating the geoid defined by the Gaussian convention with the gravimetric geoid.

    A high-degree gravitational potential and gradient calculation method without singularities
    Zhen LI, Zhenghang HE, Chuang SHI
    2025, 54(9):  1572-1582.  doi:10.11947/j.AGCS.2025.20250181

    The spherical harmonic model of the Earth's gravitational field holds significant application value in areas such as precise orbit determination for very low Earth orbit (VLEO) satellites and high-precision inertial navigation systems (INS). The Cunningham recurrence algorithm is a Cartesian coordinate-based spherical harmonic recurrence method capable of singularity-free computation of the gravitational potential, acceleration, and gradients to any degree, globally. It is primarily applied to gravitational calculations in dynamic satellite orbit determination. However, as the degree of gravitational field models continues to increase, the recurrence relations in this algorithm suffer from numerical overflow due to factorial terms. This study introduces a novel scaling factor to optimize the recurrence relations, thereby controlling the growth of the factorial terms within the recurrence functions and mitigating the numerical overflow problem. The improved algorithm, implemented in Cartesian coordinates using double-precision floating-point arithmetic, enables computation up to degree 1000 without numerical overflow. Compared with existing mainstream spherical harmonic recurrence methods, the computational efficiency of single gravitational potential and acceleration evaluations is increased by 16.8% and 8.0%, respectively.
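    The overflow mechanism can be seen in a minimal sketch (this shows the standard normalization idea, not the paper's Cunningham-based scaling factor): unnormalized sectoral Legendre terms grow like (2n-1)!! and overflow double precision near degree 150, while a normalized recursion stays bounded well past degree 1000.

```python
import math

def normalized_legendre_sectorals(nmax, theta):
    # Fully normalized sectoral Legendre values Pbar_nn(cos(theta)) via the
    # stable product recursion (Pbar_11 = sqrt(3)*sin(theta); for n > 1,
    # Pbar_nn = sin(theta)*sqrt((2n+1)/(2n))*Pbar_(n-1)(n-1)). The
    # normalization absorbs the factorial growth: the unnormalized
    # P_nn ~ (2n-1)!! * sin(theta)^n overflows double precision near degree
    # 150, while these values stay bounded to degree 1000 and beyond.
    s = math.sin(theta)
    p = [1.0]
    for n in range(1, nmax + 1):
        f = 3.0 if n == 1 else (2 * n + 1) / (2 * n)
        p.append(p[-1] * s * math.sqrt(f))
    return p
```

    The paper's scaling factor plays an analogous role inside the Cartesian recurrence functions, rescaling terms so factorial growth never reaches the floating-point overflow threshold.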

    High accuracy vertical gradient of gravity anomaly model determined from SWOT/KaRIn altimetry data during scientific phase
    Shaoshuai YA, Xin LIU, Ruichen ZHOU, Zhen LI, Shaofeng BIAN, Jinyun GUO
    2025, 54(9):  1583-1595.  doi:10.11947/j.AGCS.2025.20240520

    Satellite altimetry is a crucial technique for determining the marine vertical gradient of gravity anomaly. One-dimensional altimetry data suffer from large sampling intervals, sparse across-track coverage, and low precision. In comparison, the surface water and ocean topography (SWOT) altimetry satellite provides two-dimensional, wide-swath ocean information, achieving higher spatial resolution and accuracy in sea surface height measurements. Therefore, this paper presents a vertical gradient of gravity anomaly (SWOT_VGGA) model based on SWOT sea surface height data from cycles 1 to 20. The remove-restore technique was employed as the processing strategy, with XGM2019e_2159 selected as the reference gravity field model. The deflection of the vertical was computed by combining along-track, cross-track, and diagonal-track altimetry data through least squares collocation. Based on the deflection of the vertical and the reference gravity field, the SWOT_VGGA model was obtained. This study focused on the Philippine Sea, with the vertical gravity gradient anomaly from SIO V32.1 employed as the reference. The results show agreement at the 8.25 E level between the SWOT_VGGA and SIO V32.1 models, confirming the reliability of the SWOT-derived vertical gradient of gravity anomaly from one year of SWOT altimetry data. Additionally, model consistency was assessed across varying water depths, distances, and seafloor slope angles. Furthermore, the multi-cycle model exhibited better consistency than the single-cycle model; specifically, the discrepancy between its 1~10 and 11~20 cycle subsets was only 1.81 E. This demonstrates that SWOT data quality is stable across cycles, making the data suitable for the inversion of high-precision marine gravity information.

    Kalman filter-based satellite clock bias prediction algorithm with frequency difference estimation correction
    Cong SHEN, Guocheng WANG, Lintao LIU, Huiwen HU, Zhiwu CAI
    2025, 54(9):  1596-1607.  doi:10.11947/j.AGCS.2025.20250055

    Satellite clock bias prediction is of great significance for time synchronization, real-time positioning, and autonomous navigation, and its accuracy directly affects the service quality of navigation systems. The traditional Kalman filter model (KFM) is widely used in clock bias prediction because its minimum-variance estimation yields optimal estimates of the time difference, frequency difference, and frequency drift. However, KFM does not explicitly model the periodic terms in the clock bias, resulting in periodic fluctuations in the estimated time difference, frequency difference, and frequency drift. This periodic estimation bias increases the prediction error of KFM and is further amplified over time. To address this issue, this paper proposes an improved Kalman filter model (IKFM) based on frequency difference estimation correction. The model first identifies the periodic terms in the frequency difference estimates through spectral analysis and fits their parameters using least squares. Then, the periodic fluctuations are subtracted from the estimates to eliminate their interference with the state estimation. Finally, the time difference is extrapolated from the corrected frequency differences. Experimental results based on GPS clock bias data show that, compared with KFM, IKFM reduces the error of 1~24 h predictions by up to 32.14%, and that, compared with the gray model, the quadratic polynomial model, and the spectral analysis model, IKFM achieves the best accuracy and stability for all prediction durations. By effectively suppressing periodic term interference, IKFM provides a reliable solution for high-precision satellite clock bias prediction, especially for spaceborne atomic clocks with significant periodic fluctuations.
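    The correction step can be sketched on synthetic data (all values here are hypothetical; this illustrates only the spectral identification and least-squares subtraction, not the full IKFM):

```python
import numpy as np

def correct_periodic(freq_est, dt):
    # Remove the dominant periodic term from a frequency-difference series:
    # detrend (the drift belongs to the filter state, not the periodic fit),
    # locate the strongest spectral line, fit its sine/cosine amplitudes by
    # least squares, and subtract the fitted periodic component.
    n = len(freq_est)
    t = np.arange(n) * dt
    trend = np.polyval(np.polyfit(t, freq_est, 1), t)
    resid = freq_est - trend
    spec = np.abs(np.fft.rfft(resid))
    k = 1 + int(np.argmax(spec[1:]))   # skip the DC bin
    f0 = k / (n * dt)                  # dominant frequency (Hz)
    A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return freq_est - A @ coef

# Toy series: linear drift plus a 12 h periodic term, sampled every 30 s.
dt = 30.0
t = np.arange(2880) * dt
truth = 1e-13 * t
observed = truth + 5e-11 * np.sin(2 * np.pi * t / 43200.0)
corrected = correct_periodic(observed, dt)
```

    On this toy series, the corrected frequency differences track the drift-only truth far more closely than the raw estimates, which is the property the IKFM exploits before extrapolating the time difference.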

    A strategy for selecting quasi-stable points with a high breakdown point by integrating robust S-transform with K-means clustering
    Zhonghe LIU, Zongchun LI, Hua HE, Yinggang GUO, Wenbin ZHAO
    2025, 54(9):  1608-1619.  doi:10.11947/j.AGCS.2025.20240323

    The reasonable selection of quasi-stable points is a key procedure in the stability analysis of deformation monitoring networks. When deformation points are numerous, or even make up more than half of the network, existing methods lack sufficient robustness in selecting quasi-stable points, leading to unreasonable selection results. To improve the correctness of quasi-stable point selection, a high breakdown point strategy that integrates the robust S-transform model with the K-means clustering algorithm is proposed. First, the residuals of control points are estimated with the robust S-transform model from random subsets of homologous points drawn from two-epoch deformation monitoring networks. Then, based on these residuals, the K-means clustering method divides the points into quasi-stable, micro-deformation, and macro-deformation classes, and the feasibility of the robust S-transform model is assessed via the centroid separation between the quasi-stable and macro-deformation clusters. If the robust S-transform model is valid, the points exhibiting minimal residuals are identified as quasi-stable, and candidate quasi-stable points are selected from those identified as quasi-stable with high frequency across the subsets. Finally, reliable point transformation residuals are obtained by applying the robust S-transform model to the candidate quasi-stable points, and these residuals are used to determine the quasi-stable points via the K-means clustering algorithm. Through simulation experiments and a case analysis, comparisons were made with the traditional similarity transformation model, the iteratively weighted similarity transformation model, the similarity transformation model combined with the RANSAC algorithm, and the squared Msplit similarity transformation model.
The results show that when deformation exists in the network, the proposed method achieves the highest correctness in identifying deformation points, and the estimated displacements of control points closely match the actual situation. Even when deformation points outnumber stable points, the proposed method maintains its robustness, demonstrating a high breakdown point.
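    The clustering step can be sketched with a minimal 1-D k-means on absolute residuals (residual values here are hypothetical; the actual strategy couples this clustering with the robust S-transform and frequency-based candidate selection):

```python
import numpy as np

def classify_points(residuals, iters=50):
    # 1-D k-means (k = 3) on absolute transformation residuals. Initial
    # centroids are spread over the data range (min / median / max); the
    # cluster with the smallest centroid is taken as the quasi-stable class,
    # the others as micro- and macro-deformation classes.
    r = np.abs(np.asarray(residuals, dtype=float))
    c = np.array([r.min(), np.median(r), r.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(r[:, None] - c[None, :]), axis=1)
        for j in range(3):
            if np.any(labels == j):
                c[j] = r[labels == j].mean()
    return labels, int(np.argmin(c))

# Hypothetical residuals (mm): three stable points, two micro-deformation,
# two macro-deformation points.
residuals = [0.1, 0.2, 0.15, 5.0, 5.2, 20.0, 21.0]
labels, stable = classify_points(residuals)
```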

    An improved Butterworth gravity downward continuation method driven by entropy-PSO dual optimization
    Han WANG, Yun XIAO, Huaikui GUAN, Weixuan SUN
    2025, 54(9):  1620-1632.  doi:10.11947/j.AGCS.2025.20250137

    In response to the challenges of high-frequency noise amplification and solution instability in gravity downward continuation, this paper proposes an intelligent continuation method that integrates iterative Butterworth filtering with particle swarm optimization (PSO). The method introduces an improved Butterworth function to reconstruct the downward continuation operator and employs an iterative compensation mechanism to correct spectral residuals. A dual-parameter collaborative PSO scheme based on information entropy theory is designed to achieve adaptive optimization of the filter's cutoff frequency and order. Furthermore, a fitness-feedback-driven dynamic adjustment strategy for the inertia weight is proposed to balance global exploration capability against the efficiency of the local search for precise solutions. Experimental results demonstrate that, compared with the hybrid-domain iterative Tikhonov regularization method and the improved derivative iteration method, the proposed method maintains good stability in continuation accuracy under varying noise levels, providing an effective solution for processing marine and airborne gravity data.
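    A minimal wavenumber-domain sketch of Butterworth-stabilized downward continuation (with a fixed cutoff and order in place of the entropy-PSO optimized parameters, and without the iterative compensation step):

```python
import numpy as np

def upward_continue(g, dx, h):
    # Exact upward continuation of a 1-D profile by height h: high
    # wavenumbers are attenuated by exp(-k*h).
    k = 2 * np.pi * np.abs(np.fft.fftfreq(len(g), d=dx))
    return np.real(np.fft.ifft(np.fft.fft(g) * np.exp(-k * h)))

def downward_continue(g, dx, h, kc, order):
    # Downward continuation exp(+k*h) stabilized by a Butterworth low-pass
    # 1 / (1 + (k/kc)^(2*order)): low wavenumbers are restored almost
    # exactly, while the noise-amplifying high wavenumbers are suppressed.
    k = 2 * np.pi * np.abs(np.fft.fftfreq(len(g), d=dx))
    op = np.exp(k * h) / (1.0 + (k / kc) ** (2 * order))
    return np.real(np.fft.ifft(np.fft.fft(g) * op))
```

    The cutoff kc and order trade resolution against noise amplification, which is exactly the pair of parameters the paper tunes adaptively with entropy-guided PSO.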

    Photogrammetry and Remote Sensing
    An intelligent 3D reconstruction framework via deep learning based multi-view image matching
    Shunping JI, Jin LIU, Jian GAO, Jianya GONG
    2025, 54(9):  1633-1646.  doi:10.11947/j.AGCS.2025.20230306

    Real-scene 3D reconstruction of the ground surface from high-resolution stereo or multi-view images is a key research topic in photogrammetry and computer vision, with dense image matching as its core technology. At present, mainstream 3D reconstruction algorithms are still based on manually designed methods. Although deep learning-based dense matching algorithms have shown excellent performance in recent years, they have not yet been deployed in 3D reconstruction projects, and there are few reports, either domestically or internationally, on 3D reconstruction frameworks or software built on deep learning or other intelligent methods. To promote the application of modern artificial intelligence methods in large-scale 3D surface reconstruction, this article proposes a general intelligent framework for real-scene 3D reconstruction called Deep3D, whose core component is a deep learning dense matching network. The framework covers the complete pipeline of aerial triangulation, optimal view selection, deep learning-based dense matching, depth map fusion, and 3D surface model reconstruction, targeting urban-scale real-scene 3D surface reconstruction from multi-view remote sensing images. It unifies the processing of aerial and satellite images by incorporating both the perspective model and the rational polynomial coefficient model into the network, and the processing of binocular, multi-view, and oblique images through adaptive multi-view alignment and aggregation strategies. This paper compares the Deep3D framework with software and open-source solutions on two sets of oblique aerial images, and confirms that the proposed framework performs essentially on par with, or slightly better than, the software, and far better than existing open-source frameworks. The performance of different methods on satellite multi-view images is also discussed.
This study provides an outlook and reference for the application of deep learning methods in real-scene 3D reconstruction projects.

    Singular value decomposition normalization prediction method for non-steady landslide displacement
    Wei QU, Rongtang XU, Jiuyuan LI, Xingyou TANG, Peinan CHEN
    2025, 54(9):  1647-1663.  doi:10.11947/j.AGCS.2025.20240463

    The reasonable establishment of a high-precision landslide displacement prediction model has important reference value for landslide disaster prevention and early warning. Current data-driven landslide displacement prediction models depend strongly on the amount of training data and handle poorly the distribution-drift characteristics of non-stationary landslide displacement monitoring data. To address this, this study develops a simple normalization method based on singular value decomposition. The method normalizes the landslide displacement monitoring data segment by segment and then combines the statistical characteristics of the extrapolation model for the inverse normalization step. It thereby effectively resolves the distribution-drift problem of non-stationary landslide displacement data without relying on large-scale training data, significantly improving the ability of prediction models to forecast non-stationary landslide displacement. Tests with measured data from the Heifangtai landslide in Gansu, a typical landslide area in China, show that, compared with the traditional z-score normalization and with no normalization, the method significantly improves the prediction accuracy of multiple model classes, including the multi-layer perceptron (MLP), long short-term memory (LSTM), gated recurrent unit (GRU), and temporal convolutional network (TCN), with average improvements in root mean square error (RMSE) and mean absolute error (MAE) exceeding 50%. The method also markedly improves the stability of the model training process and effectively predicts sudden changes in landslide displacement, giving it high practical value for popularization and application.
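    The distribution-drift idea can be illustrated with a plain segment-wise z-score sketch (the paper's SVD-based scheme and its inverse-normalization step differ in detail; the segment length here is an arbitrary choice for illustration):

```python
import numpy as np

def segment_normalize(x, seg_len):
    # Segment-wise z-score: each segment is normalized with its own mean and
    # standard deviation, so a series whose distribution drifts over time is
    # still mapped onto a comparable scale. The (mean, std) pairs are kept
    # for later inverse-normalization of model outputs.
    out = np.empty(len(x), dtype=float)
    params = []
    for i in range(0, len(x), seg_len):
        seg = np.asarray(x[i:i + seg_len], dtype=float)
        m = seg.mean()
        s = seg.std() if seg.std() > 0 else 1.0
        out[i:i + seg_len] = (seg - m) / s
        params.append((m, s))
    return out, params
```

    A single global z-score would leave the two regimes of a drifting series on very different scales; per-segment statistics remove that shift, which is the effect the proposed method achieves without large-scale retraining.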

    Earth surface anomaly detection based on lightweight large vision model features in remotely sensed imagery
    Kai YAN, Jianming XU, Qiao WANG
    2025, 54(9):  1664-1676.  doi:10.11947/j.AGCS.2025.20250092

    Earth surface anomaly detection (ESAD) has become increasingly vital as intensifying global change and urbanization lead to more frequent and severe disasters, pollution, and illegal development. Although remote sensing enables wide-range and periodic ESAD, efficiency remains a concern due to lengthy data transmission, dissemination, and processing procedures. Existing methods are primarily task-specific, relying on expert knowledge and human involvement, which hinders generalized and automated deployment. In this context, we introduce a novel method aimed at inherently timely on-orbit detection. The approach is characterized not only by its generalizability and automation capabilities, but also by lightweight parameters that facilitate ground-satellite data transmission for updates. It comprises three main processes: ①employing large vision models as feature extractors to enhance algorithm universality, with automatic feature compression achieved through Gaussian mixture models and the Bayesian information criterion, generating lightweight prior knowledge suitable for ground-satellite transmission and on-orbit storage; ②utilizing an efficient dictionary lookup method for rapid inference of surface anomaly scores, making it applicable to satellite-based environments with limited computational resources; ③extracting surface anomaly boundaries from the anomaly scores using prompt words and deep segmentation models, which generates accurate, threshold-insensitive anomaly boundaries, reducing human involvement and promoting automation. Experiments demonstrate that the proposed method outperforms traditional approaches, offering better stability and generalization. In the experimental cases, the average compression rate of the prior knowledge base was approximately 100 times, significantly improving on-orbit storage and ground-satellite data transmission updates.
Furthermore, the method largely mitigates the low automation and poor noise resistance associated with fixed thresholds for anomaly boundary extraction, achieving automated object-level surface anomaly extraction. Overall, the proposed ESAD method offers small data storage, low computational requirements, and high detection accuracy, highlighting its potential as a generalized, on-orbit, real-time ESAD approach for operational deployment.

    Robust multi-sensor fusion-based odometry method of LiDAR, millimeter-wave radar and IMU in degraded scenes
    Weitong WU, Chi CHEN, Bisheng YANG, Xiufeng HE
    2025, 54(9):  1677-1686.  doi:10.11947/j.AGCS.2025.20240497

    Multi-sensor fusion-based simultaneous localization and mapping (SLAM) is crucial for robust localization and accurate mapping of unmanned systems in degraded environments. In complex environments such as underground spaces and indoor settings, achieving robust SLAM with LiDAR alone is challenging because of perception limitations caused by insufficient geometric feature constraints and the presence of smoke and dust. Furthermore, existing asynchronous multi-sensor measurement update strategies based on filtering frameworks often compromise system accuracy. To address these challenges, this paper proposes a robust fusion odometry method for degraded scenarios that integrates LiDAR, millimeter-wave radar, and inertial sensors. The method is built on an iterated error-state Kalman filter framework that fuses multi-source data: integrated measurements from the inertial measurement unit, LiDAR point-to-plane matching observations, and velocity estimates from the millimeter-wave radar. To mitigate degradation in LiDAR localization, the radar velocity measurement is used to strengthen the forward-direction constraint, while truncated singular value decomposition reduces the impact of degraded data on system updates, thereby improving the accuracy of asynchronous sensor fusion. The method was validated on multiple degraded scenarios, specifically tunnel and fog-affected corridor environments, using their respective datasets. Results indicated that the FAST-LIO2 method experienced significant drift and nearly failed in degraded areas. Compared with FAST-LIO2, the millimeter-wave radar-inertial odometry method, and a direct-fusion variant of the proposed method, the proposed method demonstrated superior robustness and accuracy.
Notably, on the corridor data, the ratio of closure error to trajectory length for the proposed method was 0.9%, an order of magnitude better than the direct-fusion variant and 80% better than the millimeter-wave radar-inertial odometry approach. Additionally, on a highway tunnel dataset of approximately 1 km, the root mean square error of the trajectory for the proposed method was 4.57 m, a 4.4% improvement over FAST-LIO2.
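    The truncated-SVD safeguard can be sketched as follows (the tolerance is an assumed value, and the sketch acts on a generic linearized system rather than the filter's actual update equations):

```python
import numpy as np

def tsvd_solve(H, b, rel_tol=1e-3):
    # Truncated-SVD least squares: singular directions with s_i below
    # rel_tol * s_max are treated as degenerate (poorly constrained by the
    # current observations) and dropped from the solution instead of being
    # inverted, which would amplify noise along those directions.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

    In a tunnel, the along-track direction is exactly such a weakly constrained direction for LiDAR matching; dropping it from the LiDAR update and letting the radar velocity constrain it instead is the division of labor the method describes.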

    Quantitative analysis method for the time lag effect of rainfall-reservoir water level-landslide deformation
    Feifei TANG, Junzhe ZHOU, Changhan WANG, Jianyun WANG, Yutao ZHOU, Yafei HAO
    2025, 54(9):  1687-1696.  doi:10.11947/j.AGCS.2025.20250147

    Aiming at the challenge of accurately and quantitatively determining the time lag effect of reservoir bank landslide deformation under the combined influence of rainfall and reservoir water level fluctuations, this paper proposes a quantitative analysis method (MIC-SPA) integrating the maximum information coefficient (MIC) and set pair analysis (SPA). The MIC method is utilized to objectively quantify the contribution weights of rainfall and reservoir water level to landslide deformation. Simultaneously, based on SPA, the “identity-discrepancy-contrary” connection degree between rainfall-reservoir water level trends and landslide deformation is evaluated. By incorporating the weight coefficients and connection degree coefficients into a linear regression formula, a lagged regression equation is established, thereby achieving quantitative analysis of the time lag effect. The proposed method is applied to a case study of a landslide in the Three Gorges Reservoir area. Results show that the contribution weights of reservoir water level and rainfall to landslide deformation are 0.537 and 0.463, respectively. Furthermore, analysis reveals that during rapid reservoir drawdown (>0.6 m/d), the lag periods for deformation at the landslide's front, middle, and rear sections are 5~6 days, 2~3 days, and 1 day, respectively, achieving precise zonal time lag analysis. Validation through comparison with the acceleration deformation phases identified by the modified tangent angle method demonstrates that the lag durations derived from this method align with the acceleration dates of deformation. The proposed approach provides scientific guidance for landslide hazard prediction.
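    Quantifying a lag can be illustrated with a simple lagged-correlation scan (a deliberately simpler stand-in for the MIC/SPA machinery described above, run on synthetic series):

```python
import numpy as np

def best_lag(driver, response, max_lag):
    # Scan candidate lags and return the one maximizing the absolute
    # Pearson correlation between the lagged driver series (e.g. rainfall
    # or reservoir level) and the deformation response.
    best, best_r = 0, -1.0
    for lag in range(max_lag + 1):
        r = abs(np.corrcoef(driver[:len(driver) - lag],
                            response[lag:])[0, 1])
        if r > best_r:
            best, best_r = lag, r
    return best
```

    MIC replaces the linear correlation here with a nonlinear dependence measure, and SPA adds the "identity-discrepancy-contrary" trend comparison, but the underlying question is the same: at which shift does the driver best explain the deformation?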

    Cartography and Geoinformation
    An automatic river classification and selection method supported by random forest and graph neural network
    Fubing ZHANG, Qun SUN, Qing XU, Jingzhen MA, Wenjun HUANG, Ruoxu CHEN
    2025, 54(9):  1697-1711.  doi:10.11947/j.AGCS.2025.20240385

    Nowadays, with the widespread use of deep learning in cartography, using graph neural networks (GNNs) to solve the generalization problems of unstructured vector map data has become a research hotspot. Existing methods have two main shortcomings: when applying machine learning to classify rivers, they start mainly from local structures and neglect the local correlation of adjacent river segments, and in river network selection they consider only the topological relationships of river segments. To address these issues, an automatic river network classification and selection method supported by random forests and GNNs is proposed. First, incorporating knowledge of river network classification and local standardization of features, the random forest algorithm is used to classify the river network automatically. Then, a dual-branch GNN selection model is constructed by integrating topological connections and spatial proximity relationships between river segments, and the selection classification of river segments is achieved through supervised learning. Finally, the river network selection results are obtained by adopting a connectivity preservation strategy that considers hierarchical levels. The experimental results show that, compared with Min-Max and Z-Score standardization, the automatic classification accuracy of the river network is improved by 11.42 and 12.39 percentage points, respectively, with a better classification effect. Compared with existing GNN-based river network selection methods, the selection accuracy is improved by 2.2 percentage points, and the results are closer to the labeled data.

    Moving video-based detection for roadside illegally parking vehicles
    Kang TANG, Yu SUN, Xiaoyang ZHONG, Jialiang GAO, Chongcheng CHEN
    2025, 54(9):  1712-1726.  doi:10.11947/j.AGCS.2025.20250038

    Roadside parking zones play a significant role in alleviating urban parking pressure. However, with the continuous growth in urban motor vehicle ownership, the supply-demand gap for roadside parking spaces continues to widen, leading to severe illegally parking that significantly impacts traffic efficiency and safety. Existing illegally parking monitoring systems based on fixed-point cameras or sensors suffer from high costs and limited coverage. To address this, this paper proposes a solution for detecting suspected illegally parked vehicles in roadside parking zones using mobile cameras. The solution is developed using embedded devices combined with an improved object detection algorithm (achieving a 3.3% increase in mAP@50). Trained on a custom-built dataset, it enables real-time detection of suspected illegally parked vehicles and effectively supports large-area monitoring tasks for suspected illegal parking. Comparative analysis with simultaneous drone-tracked orthophoto imagery from the same road sections showed that the solution achieves an average precision of 0.87, a recall of 0.88, and a detection speed of 53.96 frames per second (fps), fully validating the feasibility and effectiveness of the proposed method.