Quantum positioning, navigation and timing (quantum PNT) technology is an interdisciplinary field integrating quantum physics, quantum sensing, self-contained navigation, and quantum timing. Quantum PNT sensors represent a crucial development direction for autonomous PNT terminals, characterized by covertness, continuity, and robustness. This paper gives the definition, concept, and connotation of quantum PNT and discusses its relationship with existing PNT systems, including satellite-based PNT, the comprehensive PNT system, resilient PNT, and intelligent PNT. The significance of a quantum PNT system is described. The current development status and open problems of quantum PNT are reviewed, and its research content and key technologies are analyzed. The development directions of quantum PNT are divided into a supply side and an application side. On the supply side, the key directions are to tackle issues such as the integrated quantum PNT principle and uncertainty control. On the application side, effort should concentrate on the integration of quantum sensors with comprehensive PNT terminals, especially the development of chip-scale quantum PNT sensors and the miniaturized integration of multi-principle PNT terminals. The paper aims to provide a new approach for secure, trusted, and autonomous PNT services.
The ionospheric photometer (IPM) carried on the Fengyun-3D satellite is China's first space-based payload capable of detecting the ionosphere in the far-ultraviolet band. In-depth accuracy verification and quality evaluation of the nighttime total electron content (TEC) retrieved from its observations is urgently needed. This study focuses on the availability of China's first ionospheric photometer TEC product (IPM-TEC). Using global ionospheric map TEC (GIM-TEC) and TEC derived from continuously operating reference station networks (CORS-TEC) in Europe and China as references, a two-year quantitative evaluation and comparative analysis of IPM-TEC was conducted at both global and regional scales by computing the systematic bias and data noise. The results show that, on the global scale during ionospherically quiet periods, IPM-TEC and GIM-TEC agree well in the mid-to-low-latitude region (within 40°N/S), with an overall systematic bias of less than 2 TECU (total electron content units) and data noise of less than 0.3 TECU, indicating good data availability in this region. At the regional scale, the systematic biases over Europe and China in the mid-to-low-latitude region during quiet periods are within 2 TECU, further verifying the high validity of IPM data there. Overall, the results indicate that China's first ionospheric photometer provides highly available data during ionospherically quiet periods in mid-to-low-latitude regions, offering an important technical reference for subsequent multi-source ionospheric fusion modeling and error correction based on ionospheric photometer observations.
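As a minimal sketch of the two evaluation statistics used above (systematic bias as the mean difference and data noise as the spread of the differences, assuming collocated IPM-TEC and reference TEC samples; array names and the synthetic data are illustrative, not the paper's processing chain):

```python
import numpy as np

def bias_and_noise(ipm_tec, ref_tec):
    """Systematic bias and data noise between IPM-TEC and a reference
    TEC series (e.g. GIM-TEC or CORS-TEC), both in TECU.

    Bias is the mean of the differences; noise is the standard
    deviation of the differences about that bias.
    """
    diff = np.asarray(ipm_tec, float) - np.asarray(ref_tec, float)
    bias = np.nanmean(diff)            # systematic deviation, TECU
    noise = np.nanstd(diff, ddof=1)    # data noise, TECU
    return bias, noise

# Illustrative use with synthetic collocated samples
rng = np.random.default_rng(0)
ref = 10.0 + 5.0 * rng.random(1000)
ipm = ref + 1.5 + 0.25 * rng.standard_normal(1000)
print(bias_and_noise(ipm, ref))        # approximately (1.5, 0.25)
```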
Existing research on GNSS water vapor tomography focuses primarily on improving the utilization of satellite observations, with limited study of the optimization of the satellite signals themselves. As a result, the tomography observation equations are linearly approximated over the same groups of grids, most elements of the column vectors of the coefficient matrix are zero, and the water vapor tomography model becomes severely ill-conditioned. To address this, this paper proposes an adaptive optimization method for GNSS satellite signals in water vapor tomography, aiming to reduce the number of zero elements in the design matrix and the ill-conditioning of the tomography model. The method determines the horizontal grid division of the tomography region according to the principle of maximum grid coverage, and develops an adaptive signal selection scheme combining elevation and azimuth angle thresholds, thereby mitigating the linear-approximation problem in the observation equations of the water vapor tomography model. Data from 12 GNSS stations and 1 radiosonde station in Hong Kong from May 2 to 7, 2013 were used in the experiments. Compared with existing methods, the proposed approach maintains grid coverage while using fewer satellite signals, alleviating the ill-conditioning of the design matrix caused by near-parallel satellite signals. Taking radiosonde data as the truth, the proposed method performs better: the average RMS, MAE, and bias of the retrieved water vapor density profiles are 1.03, 0.80, and 0.13 g/m³, respectively, versus 1.25, 0.97, and 0.10 g/m³ for the traditional method, an RMS improvement of 20.78%. The proposed selection method also solves the model more efficiently than traditional methods, improving computation efficiency by 9.51% on average.
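A simplified sketch of elevation/azimuth-based signal thinning in this spirit (the paper's adaptive thresholds are driven by grid coverage; the fixed cutoff, sector width, and the one-ray-per-sector rule here are placeholder assumptions):

```python
import numpy as np

def select_signals(elev_deg, azim_deg, elev_min=10.0, azim_bin=30.0):
    """Thin satellite rays by an elevation cutoff and azimuth binning.

    Within each azimuth sector, keep only the ray with the highest
    elevation, so that near-parallel rays (which yield nearly identical
    rows in the tomography design matrix) are removed.
    """
    elev = np.asarray(elev_deg, float)
    azim = np.asarray(azim_deg, float) % 360.0
    keep = np.zeros(elev.size, dtype=bool)
    mask = elev >= elev_min                      # elevation threshold
    sectors = (azim // azim_bin).astype(int)     # azimuth sectors
    for s in np.unique(sectors[mask]):
        idx = np.where(mask & (sectors == s))[0]
        keep[idx[np.argmax(elev[idx])]] = True   # best ray per sector
    return keep
```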
To address the heavy computational burden of the least squares method and the difficulty of constructing and solving the normal equations when building a degree-10 800 spherical harmonic model of the Earth's topography, we exploit the orthogonality of trigonometric functions to construct a block-diagonal least squares adjustment model for determining topographic spherical harmonic coefficients, derive an FFT expression for the free terms of the normal equations, and establish an FFT block-diagonal least squares adjustment model. We also derive an FFT harmonic analysis method for computing the spherical harmonic coefficients of topography. The X-number method is introduced for evaluating fully normalized associated Legendre functions (fnALFs), enabling coefficients to be computed with fixed order m and varying degree n and thereby reducing memory usage. Simulation experiments verify that the FFT block-diagonal least squares method is more accurate than the FFT harmonic analysis method. Finally, we constructed a degree-10 800 spherical harmonic model named LS_10800.shc from the Earth2014_TBI global topography data using the block-diagonal least squares method. Experiments show that the LS_10800.shc model achieves a global accuracy of 9.31 m, outperforming the 10.15 m of the Earth2014.TBI2014.degree10800.bshc model. In China and its surrounding areas, the accuracy of LS_10800.shc reaches 18.79 m, versus 20.69 m for Earth2014.TBI2014.degree10800.bshc. Against 2000 GPS points, LS_10800.shc achieves an accuracy of 32.85 m, better than the 34.38 m of Earth2014.TBI2014.degree10800.bshc.
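To illustrate the FFT step underlying both the harmonic analysis and the free terms of the normal equations, here is a minimal sketch that extracts the per-order cosine/sine Fourier coefficients of one latitude band of a regular longitude grid (assuming N equally spaced longitudes per parallel; combining these with fnALFs in latitude, as the paper does, is omitted):

```python
import numpy as np

def fourier_coeffs_per_parallel(grid_row):
    """Per-order coefficients a_m, b_m of one latitude band via FFT.

    For H(lambda_j) sampled at lambda_j = 2*pi*j/N,
        a_m = (2/N) * sum_j H_j cos(m*lambda_j),
        b_m = (2/N) * sum_j H_j sin(m*lambda_j),
    which the FFT evaluates in O(N log N) instead of O(N^2).
    """
    h = np.asarray(grid_row, float)
    n = h.size
    F = np.fft.rfft(h)              # Re F_m = sum cos, Im F_m = -sum sin
    a = 2.0 * F.real / n
    b = -2.0 * F.imag / n
    a[0] = F.real[0] / n            # the mean term carries no factor 2
    return a, b
```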
Earth polar motion (PM), a pivotal geodynamic parameter for deep space exploration and precise satellite orbit determination, requires high-precision prediction models and remains a research focus in space geodesy. To address the accumulation of prediction errors caused by inconsistencies between training and application scenarios, as well as the effect of signal noise, in long short-term memory (LSTM) neural networks, we propose a short-term PM prediction method with a cascaded LSTM architecture based on singular spectrum analysis (SSA) denoising. The method first employs SSA to remove high-frequency noise from the polar motion time series. It then accounts for the evolving characteristics of different future prediction horizons and constructs a cascaded LSTM framework in which multiple sub-models are connected sequentially for progressive information transfer. Experiments on the EOP 20 C04 dataset spanning 1984 to 2024 demonstrate significant improvements: for 1- to 10-day short-term prediction, the proposed method achieves mean absolute errors (MAE) of 1.70 mas and 0.93 mas in the X and Y polar motion components, respectively, improvements of 42.8% and 48.0% over recursive LSTM baselines and of 11.0% and 28.5% over existing SSA-recursive LSTM hybrid benchmarks. Notably, the cascaded architecture performs best for 6- to 10-day forecasts, validating its effectiveness in mitigating error propagation and enhancing mid-to-long-term forecast stability. Applying the predictions to the transformation between the celestial and terrestrial coordinate systems for satellite orbits significantly improves the accuracy of the coordinate conversion.
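A minimal sketch of the SSA denoising step (embed, truncate the SVD, reconstruct by anti-diagonal averaging); the window length and number of retained components are illustrative choices, not the paper's settings:

```python
import numpy as np

def ssa_denoise(x, window=365, n_components=8):
    """Singular spectrum analysis denoising of a 1-D time series.

    Embeds the series into a Hankel trajectory matrix, keeps the
    leading SVD components, and reconstructs the smoothed series by
    averaging over anti-diagonals (Hankelization).
    """
    x = np.asarray(x, float)
    n, L = x.size, window
    K = n - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for k in range(K):                                    # Hankel averaging
        rec[k:k + L] += Xr[:, k]
        cnt[k:k + L] += 1
    return rec / cnt
```

The denoised series would then be fed to the first LSTM sub-model, with each subsequent sub-model in the cascade receiving the previous one's output for its own horizon.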
With the widespread application of GNSS in safety-critical fields such as aviation and maritime navigation, receiver autonomous integrity monitoring (RAIM) is crucial for ensuring navigation reliability. To address the limitations of existing RAIM algorithms, namely insufficient detection capability and low computational efficiency when multiple satellites fail simultaneously, this paper proposes a RAIM algorithm for multiple gross error identification based on density-based spatial clustering of applications with noise (DBSCAN). The algorithm first constructs observation samples via parity checks, computes inter-sample distances to highlight anomalies, and then employs DBSCAN to adaptively identify and isolate multiple gross errors according to the data density distribution. Simulations and real-world experiments demonstrate that: ① in simulated shipborne scenarios with 50 m and 100 m pseudorange gross errors on three satellites, the proposed algorithm improves positioning accuracy by approximately 82.8% and 92.1%, and computational efficiency by about 96.2% and 96.1%, respectively, compared with the traditional least squares residuals (LSR) method; ② in simulated high-dynamic airborne scenarios, the detection rate for gross errors from 5 m to 100 m increases from 52.9% to 100%, while the positioning error remains stable; ③ using real data from an IGS station, the algorithm reduces the horizontal and three-dimensional errors from 8.61 m and 9.94 m (with LSR RAIM) to 0.77 m and 1.08 m; ④ in urban vehicular field tests, the algorithm achieves positioning accuracy comparable to the random sample consensus (RANSAC) RAIM algorithm, with a computational efficiency improvement exceeding 94.7%. The proposed algorithm significantly enhances multiple gross error identification while maintaining high computational efficiency, providing an effective solution for high-reliability navigation and positioning in complex environments.
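A hedged sketch of the clustering stage, assuming per-satellite feature vectors have already been derived from the parity/residual vector (the feature construction, `eps`, and `min_samples` here are placeholders; the paper's distance definition differs):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def flag_gross_errors(sat_features, eps=3.0, min_samples=4):
    """Identify outlier satellites from per-satellite feature vectors.

    DBSCAN marks low-density samples as noise (label -1); those
    indices are treated as satellites carrying gross errors and can
    be excluded from the position solution.
    """
    X = np.asarray(sat_features, float)
    if X.ndim == 1:                      # scalar feature per satellite
        X = X[:, None]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return np.where(labels == -1)[0]     # indices of suspected satellites
```

Because DBSCAN needs no preset number of faults, several simultaneous gross errors fall out as noise points in one pass, which is the source of the efficiency gain over combinatorial exclusion schemes such as RANSAC RAIM.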
Spaceborne iGNSS-R technology shows promising potential for observing sea surface change at high spatiotemporal resolution, yet related research and performance evaluation remain insufficient. This paper first systematically analyzes the waveform characteristics of multi-constellation, multi-frequency signals under different modulation schemes, investigates the correlation between waveform quality and environmental parameters, and quantifies signal-to-noise ratio (SNR) variations across signals. The DER and HALF waveform retracking methods are then employed to extract specular reflection delays, and a double delay differential altimetry model computes sea surface height (SSH), enabling a comprehensive evaluation of the altimetric and ranging performance of multi-constellation, multi-frequency signals. A preliminary analysis of the global average altimetric performance of the COATS mission is also conducted. The results indicate that BDS B1 and GPS L1/L5 signals exhibit superior SNR and normalized SNR; wind speed suppresses SNR most strongly in the 0~10 m/s range, while an increasing incidence angle degrades the SNR of the high-frequency signals (L1/B1/E1) and of GPS L5. GPS L1 (STD 1.46 m), BDS B1 (1.38 m), and Galileo E1 (1.33 m) show significantly better precision than the corresponding low-frequency signals (L5: 1.84 m, B2: 1.84 m, E5: 1.74 m). The SNR-precision model derived from ranging residual conversion shows that ranging accuracy improves markedly with SNR in the low-to-medium SNR region, while gradually approaching a performance bottleneck (1.4~1.5 m) at high SNR. Gridded inversion results achieve a 99.93% correlation coefficient with the DTU 21 model (RMSE 1.156 m). The revisit count-accuracy curve indicates an asymptotic accuracy of 0.60 m beyond 2500 revisits.
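As an illustration of half-power retracking in general (the paper's HALF method may define the retracking epoch differently; this generic leading-edge version is an assumption), the specular delay can be located where the waveform first crosses half its peak power:

```python
import numpy as np

def half_power_delay(waveform, delays):
    """Retrack the specular-point delay of a delay waveform.

    Finds the leading-edge epoch where power first reaches half the
    peak, refined by linear interpolation between adjacent samples.
    """
    w = np.asarray(waveform, float)
    t = np.asarray(delays, float)
    p = int(np.argmax(w))
    half = 0.5 * w[p]
    below = np.nonzero(w[:p] <= half)[0]
    if below.size == 0:                 # leading edge not resolved
        return t[0]
    i = below[-1]                       # last sample below half power
    frac = (half - w[i]) / (w[i + 1] - w[i])
    return t[i] + frac * (t[i + 1] - t[i])
```

The retracked delays from direct and reflected channels would then enter the double delay differential model to yield SSH.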
Current methods fail to fully account for the uneven distribution and large precision differences of crowdsourced bathymetric data, leading to low-quality digital depth models (DDM). To address this, a method for constructing a DDM of a strait channel is proposed that considers the distribution and precision differences of crowdsourced bathymetric data. First, the mechanism by which the uneven distribution and large precision differences of the original data affect grid-node interpolation is analyzed. Then, considering that uneven data distribution can lead to significant differences in the number of reference points across directions, a dynamic adjustment mechanism for the number of reference points in eight directions is designed; it accounts for the anisotropy of the data distribution and avoids the poor robustness caused by the “directional tilt” of traditional methods. Finally, building on the inverse distance weighted (IDW) interpolation function, the effects of uneven data distribution and large precision differences are incorporated into the function: data precision, distribution, and direction factors are introduced to reconcile the contributions of different crowdsourced bathymetric points to the interpolated grid-node depth and thereby improve interpolation accuracy. The experimental results show that the integrated optimized IDW method proposed here outperforms conventional IDW and ordinary Kriging interpolation in the overall accuracy of DDM construction, adaptability to different seabed topographies, and robustness, effectively accommodating the characteristics of multi-source depth data and topographic variation. Furthermore, an effectiveness analysis of the individual weighting factors validates that the proposed method characterizes the spatial features and quality differences of multi-source depth data more comprehensively, enhancing the accuracy and stability of DDM construction.
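A minimal sketch of precision-weighted IDW in this spirit (only the precision factor is shown; the paper's distribution and direction factors and the eight-direction reference point adjustment are omitted, and the quadratic distance power is an assumption):

```python
import numpy as np

def weighted_idw(pt, ref_xy, ref_z, precision, power=2.0):
    """Inverse-distance interpolation with a per-point precision factor.

    Weight = precision_i / distance_i**power, so soundings with higher
    precision (e.g. lower reported depth variance) contribute more to
    the interpolated grid-node depth.
    """
    ref_xy = np.asarray(ref_xy, float)
    d = np.hypot(ref_xy[:, 0] - pt[0], ref_xy[:, 1] - pt[1])
    d = np.maximum(d, 1e-9)                    # guard against exact hits
    w = np.asarray(precision, float) / d**power
    return float(np.sum(w * np.asarray(ref_z, float)) / np.sum(w))
```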
Water body segmentation in synthetic aperture radar (SAR) images is widely applied in critical fields such as disaster response, water resource management, and environmental monitoring, and is of significant practical importance. To address the low segmentation accuracy of water bodies in SAR images with complex backgrounds, a dual-encoder adaptive feature fusion network (DEAFFNet) is proposed. First, the model combines a lightweight residual network and a Swin Transformer in a dual-encoder architecture that jointly extracts local detail and global context, mitigating the insufficient representation capability in complex backgrounds. Second, a feature fusion module based on cross-attention and adaptive weight learning is designed: cross-attention mediates the interaction between local and global information, and adaptive weight learning performs the hybrid feature fusion, enhancing the model's perception of water body structure. Then, a multi-scale convolutional pooling module is integrated into the decoder to reinforce multi-scale contextual information, combined with a lightweight content-aware upsampling method that alleviates the feature distortion caused by upsampling. Finally, a composite loss function combining focal loss and active contour loss strengthens the constraints on class balance and water body boundaries. Water body segmentation experiments on the ALOS PALSAR and Sen-1 SAR datasets demonstrate that DEAFFNet outperforms existing methods on multiple evaluation metrics, achieving more accurate water body segmentation.
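A sketch of the focal loss half of the composite loss (the active contour term is omitted; the `gamma` and `alpha` values are standard defaults, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Binary focal loss for water/non-water segmentation.

    Down-weights easy pixels by (1 - p_t)**gamma so training focuses
    on hard pixels and on the water/background class imbalance.
    """
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)          # prob of true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```

In a composite loss, this term would simply be summed with a weighted active contour term that penalizes boundary length and regional inhomogeneity.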
To address the challenges of complex data sources and insufficient model generalization in small-sample optical remote sensing anomaly detection, this paper proposes a diffusion feature-constrained anomaly detection method for small samples. By introducing the noise-space modeling capability of the diffusion model, the method stabilizes and strengthens feature learning and detects anomalies in small-sample scenarios from the degree of deviation of the reconstruction error. Using the Wuhan University AID dataset, the proposed method was compared against a convolutional autoencoder baseline. The results show that the proposed method reduces the mean spatial entropy from 3.65 to 3.51 and the mean spectral entropy from 5.77 to 5.62, a clear improvement in the quantitative indicators, while the reconstructions are visually more complete and less noisy. The anomaly detection experiment simulates image anomalies commonly defined in national standards, such as blurred textures and striping noise. In small-sample scenarios where anomalous samples make up 1.5% to 2.5% of the training set, subjective visual evaluation and quantitative analysis indicate that negative samples are well separable for most land cover types. This work verifies the effectiveness of diffusion feature constraints in small-sample anomaly detection and offers ideas for optical remote sensing quality assessment.
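One simple way to score the "degree of deviation" of a reconstruction error, sketched here under the assumption that the training errors come from mostly normal samples (the paper's exact deviation measure may differ; the threshold of 3 is a placeholder):

```python
import numpy as np

def anomaly_score(errors, train_errors):
    """Deviation of per-image reconstruction error from the training
    error distribution, as a z-like score.

    Samples whose score exceeds a chosen threshold (e.g. 3) would be
    flagged as anomalous.
    """
    mu = np.mean(train_errors)
    sigma = np.std(train_errors) + 1e-12      # avoid division by zero
    return (np.asarray(errors, float) - mu) / sigma
```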
To address the issues of building polygon simplification in map generalization, such as reliance on manual rules, low automation, and difficulty reusing existing simplification results, this paper proposes a building polygon simplification model based on the Transformer mechanism. The model first maps building polygons into a grid space at a given scale, representing each polygon's coordinate string as a grid sequence. This yields token sequences before and after simplification, from which paired building polygon simplification samples are constructed. Using the Transformer architecture, the model learns the dependencies between point sequences through masked self-attention and generates the simplified polygon point by point. During training, the model uses structured sample data and a cross-entropy loss that ignores specified indices to improve simplification quality. The experiments comprise a main experiment and a generalization validation. The main experiment, based on the Los Angeles 1∶2000 building dataset, encodes polygons with three grid sizes (0.2, 0.3, and 0.5 mm) to simplify to target scales of 1∶5000 and 1∶10 000. The results indicate that the model performs best with a 0.3 mm grid, achieving over 92.0% consistency with manual annotations on the validation set. A generalization experiment on building polygons from parts of Beijing further verified the model's transferability. In a comparison with an LSTM model of similar parameter scale, the LSTM failed to converge and produced no usable results. This study confirms the potential of the Transformer for spatial geometric sequence tasks, demonstrates that existing simplification samples can be reused effectively, and offers a new, engineering-practical pathway for intelligent building polygon simplification.
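A minimal sketch of the grid tokenization step, assuming one integer token per grid cell (the paper's vocabulary layout, special tokens, and the index ignored by the cross-entropy loss are not specified here, so this encoding is illustrative):

```python
import numpy as np

def polygon_to_tokens(coords, origin, cell, cols):
    """Map polygon vertex coordinates to grid-cell token IDs.

    Each vertex is snapped to a cell of size `cell` (map units at the
    target scale, e.g. 0.3 mm) and encoded as row * cols + col, giving
    the token sequence consumed by the Transformer.
    """
    xy = np.asarray(coords, float) - np.asarray(origin, float)
    col = np.floor(xy[:, 0] / cell).astype(int)
    row = np.floor(xy[:, 1] / cell).astype(int)
    return (row * cols + col).tolist()
```

Decoding reverses the mapping, placing each generated token at its cell center to reconstruct the simplified polygon.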
Spatial similarity relations are a fundamental theory for the multi-scale representation of spatial data. The evolution of basic spatial data from 2D to 3D presents new challenges for research on spatial similarity relations. This study proposes the concept of 3D spatial similarity relations and a visual perception-based quantitative calculation method for them that addresses the limitation of scale effects. First, building on spatial similarity relations in multi-scale maps, we define 3D spatial similarity relations and summarize the challenges in extending them from 2D to 3D. Then, by jointly considering the attributes of 3D spatial data, the rendering mechanisms of 3D models, and the principles of human visual perception, we analyze in depth the visual perception processes, mechanisms, and characteristics of 3D spatial similarity relations. On this basis, a reverse calculation strategy informed by visual perception is introduced, which converts complex computations in 3D space into equivalent operations in 2D screen space, enabling the quantitative measurement of 3D spatial similarity relations. Experimental results and practical applications demonstrate that the proposed method enables scale-adaptive quantitative calculation of 3D spatial similarity relations and provides valuable support for level-of-detail (LOD) construction and the continuous visualization of 3D building scenes.
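The core of the reverse calculation strategy is moving measurement into screen space; the following sketch shows the standard projection step that would precede any 2D similarity measure (the model-view-projection matrix and viewport convention follow the usual rendering pipeline, and the paper's specific screen-space measures are not reproduced here):

```python
import numpy as np

def project_to_screen(points, mvp, width, height):
    """Project (n, 3) 3-D points to 2-D pixel coordinates with a 4x4
    model-view-projection matrix, mirroring the rendering pipeline so
    that similarity measures can be taken in screen space.
    """
    pts = np.asarray(points, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(mvp, float).T
    ndc = hom[:, :2] / hom[:, 3:4]               # perspective divide
    x = (ndc[:, 0] * 0.5 + 0.5) * width          # NDC -> pixels
    y = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height # flip y for screen rows
    return np.column_stack([x, y])
```

Similarity between two 3D representations can then be computed on the projected footprints (for example by area overlap), which is what makes the calculation scale-adaptive: the projection already encodes viewing distance.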
Underground station facilities are tightly coupled through physical connections, functional dependencies, and information interactions. During flood events, such coupling can produce cascading failures, in which damage to a critical facility triggers systemic risk. Existing flood vulnerability assessment methods often treat facilities as isolated units, ignoring coupling effects and risk transmission mechanisms and making it difficult to characterize damage propagation paths accurately. This paper proposes a flood vulnerability cascading analysis method for underground station facilities that integrates knowledge graphs (KGs) and large language models (LLMs). First, a three-domain “object-behavior-state” knowledge graph is constructed to represent facility relationships. Second, a cellular automaton model is developed to simulate flood evolution coupled with interactions among facility components. Third, flood vulnerability assessment and cascading effect inference are performed by constraining the LLM with the knowledge graph. Finally, a large underground station in Daxing District, Beijing, together with the DeepSeek-R1 series of models, is used for the experimental analysis. The results show that the proposed method effectively identifies the spatial and functional changes of facilities under flood scenarios and reveals risk propagation paths, with robust and highly interpretable reasoning. Compared with expert-defined benchmark cascade paths, the method achieves higher accuracy and logical consistency in node matching rate and sequence matching accuracy. The findings provide theoretical support and a technical reference for formulating emergency strategies and enhancing the system resilience of underground stations.
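A toy sketch of one cellular automaton step for flood spreading on a grid (mass-conserving head-difference flow to 4-neighbours; the flow `rate`, the cap, and the omission of facility-state coupling are all simplifying assumptions relative to the paper's model):

```python
import numpy as np

def flood_step(depth, elevation, rate=0.2):
    """One cellular automaton step of flood spreading.

    Water flows from each cell to 4-neighbours with a lower water
    surface (elevation + depth); a fraction `rate` of the head
    difference moves per step, capped so a cell cannot export more
    water than it holds.
    """
    surf = elevation + depth
    new = depth.copy()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(surf, (-dr, -dc), axis=(0, 1))    # surface at (i+dr, j+dc)
        flow = np.clip(rate * (surf - nb), 0.0, depth / 4.0)
        # zero flows that would wrap around the grid edges
        if dr == 1:  flow[-1, :] = 0.0
        if dr == -1: flow[0, :] = 0.0
        if dc == 1:  flow[:, -1] = 0.0
        if dc == -1: flow[:, 0] = 0.0
        new -= flow                                    # water leaving each cell
        new += np.roll(flow, (dr, dc), axis=(0, 1))    # water arriving
    return new
```

In the full method, the per-cell water depths would update facility state nodes in the knowledge graph, which in turn constrain the LLM's cascade inference.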
Geographic entities are important data products of 3D realistic geospatial scenes. Because the spatial representation of same-name polygonal entities differs in scale across datasets from different scenarios, geographic entity matching is needed to support data fusion and updating. Given that existing polygonal entity matching methods still have room to reduce manual dependence and to model the multiple matching relationships (1∶1, 1∶M, M∶N) in a refined, differentiated way, this paper proposes a self-supervised matching method for polygonal geographic entities based on a three-branch attention network. First, the method computes the similarity of four feature types: size, distance, shape, and direction. For each feature, the standard deviation of the number of matched entity pairs under different thresholds is computed; a loss function built from this standard deviation is used to train the model and obtain decision thresholds, and entity pairs whose similarities meet the decision thresholds are converted into pseudo-labels. Second, a three-branch matching network is built to handle the 1∶1, 1∶M, and M∶N matching relationships respectively; it integrates the attention mechanism and gradient-weighted class activation mapping (Grad-CAM) to adaptively weight each feature. Finally, two datasets from Huangshan city, Anhui province (construction land and engineering project land) are used to verify the pseudo-labels, the feature weighting, and the three-branch network framework. The experiments show that, compared with existing methods, the proposed method requires no manual annotation, adaptively handles the multiple matching relationships (1∶1, 1∶M, M∶N), and achieves a precision (P) of 94.98%, recall (R) of 94.22%, and F1 score of 94.60%. Its effectiveness is verified, and it can provide strong support for the fusion and updating of polygonal geographic entity data.
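A minimal sketch of the pseudo-labeling step, assuming the per-feature decision thresholds have already been learned (how the thresholds are obtained from the standard-deviation loss is the paper's contribution and is not reproduced here):

```python
import numpy as np

def pseudo_labels(sims, thresholds):
    """Convert feature similarities into pseudo-labels.

    `sims` is an (n_pairs, 4) array of size, distance, shape, and
    direction similarities in [0, 1]; a candidate pair becomes a
    positive pseudo-label only if every similarity meets its learned
    decision threshold.
    """
    sims = np.asarray(sims, float)
    thr = np.asarray(thresholds, float)
    return np.all(sims >= thr, axis=1).astype(int)
```

These pseudo-labels then supervise the three-branch attention network without any manual annotation.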