X-ray pulsar navigation (XNAV) is defined in this paper as deriving the three-dimensional position of a spacecraft from pulse arrival time observations. When a priori knowledge of the spacecraft orbit is used, XNAV essentially becomes a process of orbit improvement. Three or four pulsars can be observed either simultaneously or sequentially in a proper order, and the observation instrument can be either a grazing-incidence reflective telescope or a normal refractive one; the latter may have a shorter focal length and would be suitable for miniaturized, lightweight XNAV equipment. The pulsar ephemeris is an indispensable supporting condition; it is provided today by the ground-based radio telescope network and will be generated in the future by the spacecraft itself. The accuracy of XNAV is currently about 5 km; it is expected to reach 1 km in the near future and may achieve the 100 m level in the long term. XNAV is currently usable in deep space and may be extended to near-Earth space in the future.
The digital twin system of geospatial information is an important support system for geospatial information services and an important foundation for the development of an intelligent society. Compared with other industrial digital twin systems, it has more particular requirements in terms of accuracy, systematization and reliability. This paper describes the basic rules for establishing the geospatial digital twin system, from perception, description and mapping to statistics, prediction and deduction. It is emphasized that the perception of geographic entities should be accurate, the space-time reference should be consistent, the attribute description should be correct, the historical information should be dependable, the mapping relationship should be complete, the statistical trends should be systematic, the variation prediction should be rigorous, and the auxiliary decision making should be scientific. The related research topics of geospatial digital twins are classified, and the problems requiring attention are listed. Finally, the relationship between the geospatial digital twin and spatio-temporal intelligence is discussed, and the basic process and key technologies for constructing a geospatial digital intelligence system are pointed out.
Integer ambiguity resolution for the reference station network is the basis of high-precision network RTK positioning. However, as the spacing of the reference stations increases, the spatial residuals of atmospheric errors make it difficult to resolve the reference station ambiguities; the ionospheric delay error, with its complex temporal and spatial variation, is especially harmful to the resolution performance. Atmospheric parameters are commonly estimated as random-walk processes during reference station ambiguity resolution. Based on an analysis of how the ionospheric power spectral density (IPSD) affects the ambiguity resolution performance of the reference stations, this paper studies the time-varying characteristics of ionospheric observations at different differencing intervals. By exploiting the different trends of noise and ionosphere with the differencing interval, the ionospheric observation noise is suppressed to determine the IPSD, and the stochastic model of the ionospheric parameters in the ambiguity estimation is thereby optimized, instead of using empirical values or empirical models that ignore atmospheric variation, so as to improve the fixing efficiency of the long-range reference station network. The experimental results show that the IPSD estimated in real time from 1 s sampling interval data improves the float solution accuracy of the reference station integer ambiguities and also reduces the integer ambiguity search space. Compared with an empirical ionospheric power spectral density, the proposed method shortens the convergence time by 21% in five reference station networks with baseline lengths of more than 100 km, and the ambiguity resolution success rate is improved accordingly.
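The variance-versus-interval idea behind real-time IPSD estimation can be sketched in a few lines (an illustrative stand-in, not the paper's implementation; the function names, lag set, and linear-fit model are assumptions). For a random-walk ionospheric delay observed with white noise, the variance of differences over an interval tau grows as q·tau plus a constant noise floor, so a linear fit over several lags separates the PSD q from the observation noise:

```python
# Hypothetical sketch: estimate the ionospheric power spectral density (PSD)
# from a 1 s ionospheric delay series. For a random-walk process the variance
# of a difference over interval tau grows as q*tau, while white observation
# noise adds a constant offset; fitting variance against tau separates them.
def diff_variance(series, lag):
    # Sample variance of differences taken `lag` samples apart.
    d = [series[i + lag] - series[i] for i in range(len(series) - lag)]
    m = sum(d) / len(d)
    return sum((x - m) ** 2 for x in d) / len(d)

def estimate_psd(series, dt=1.0, lags=(1, 2, 4, 8, 16)):
    # Least-squares fit of var(tau) = q*tau + c over the chosen lags;
    # q approximates the random-walk PSD, c the noise floor.
    taus = [lag * dt for lag in lags]
    vars_ = [diff_variance(series, lag) for lag in lags]
    n = len(taus)
    sx, sy = sum(taus), sum(vars_)
    sxx = sum(t * t for t in taus)
    sxy = sum(t * v for t, v in zip(taus, vars_))
    q = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - q * sx) / n
    return q, c
```

In a network RTK filter, the estimated q would then scale the random-walk process noise of the ionospheric parameters as q·Δt, rather than relying on an empirical value.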
The influence of the third frequency on the positioning model must be considered in multi-frequency high-precision positioning. The current traditional inter-frequency clock bias (IFCB) estimation method is not well suited to real-time applications, and its reliability and accuracy are limited by the number of stations in the reference network, so the study of IFCB parameters is crucial. In this paper, we focus on the impact of the IFCB on multi-GNSS multi-frequency PPP: we propose a multi-frequency PPP algorithm that constrains the time-varying characteristics of the IFCB parameters, propose an algorithm for extracting the power spectral density (PSD) from station-derived IFCB observations, and comprehensively analyze the time-varying characteristics of the IFCB and the impact of different IFCB models on the performance of undifferenced and uncombined PPP. The experimental results show that extracting the IFCB power spectral density from station IFCB observations is feasible and efficient. Compared with ignoring the IFCB, the random-walk IFCB estimation constrained by the PSD yields the largest improvement in convergence time, 46.51%, while the iGMAS and CNES products yield average improvements of 43.54% and 34.50%; the 3D positioning accuracy is improved by 41.68%, 32.24% and 24.64%, respectively. Constraining the IFCB parameters with a PSD that reflects their time-varying characteristics can truly capture the IFCB variations. Therefore, in multi-GNSS multi-frequency PPP processing, a stochastic model in which the IFCB parameters are constrained by their time-varying properties can speed up convergence and improve positioning accuracy; it outperforms the product correction method and is more conducive to real-time multi-frequency PPP applications.
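A PSD-constrained random-walk parameter of this kind can be illustrated with a minimal scalar Kalman filter (an illustrative sketch only; the paper's full PPP filter estimates many more states, and the names here are assumptions). The state is unchanged in prediction while its variance grows by the PSD q scaled by the interval dt:

```python
# Minimal sketch: an IFCB-like state modelled as a random walk in a scalar
# Kalman filter, with process noise set from the estimated PSD q.
def predict(x, P, q, dt):
    # Random walk: state unchanged, variance grows by q*dt.
    return x, P + q * dt

def update(x, P, z, r):
    # Scalar measurement update with observation variance r.
    k = P / (P + r)
    return x + k * (z - x), (1.0 - k) * P
```

A small q tightly constrains the parameter between epochs (fast convergence for a slowly varying IFCB), while a large q lets it absorb rapid variations; this is the trade-off the PSD estimate resolves.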
Solar activity is currently at its peak, leading to frequent geomagnetic storms; the resulting variations in thermospheric density are crucial for orbit prediction and for the flight safety of low-Earth-orbit satellites. The high-precision accelerometers carried by gravity satellites can effectively detect variations in thermospheric density at satellite altitude. Based on data from the GRACE-FO (gravity recovery and climate experiment follow-on) accelerometer, we invert the thermospheric density during four geomagnetic storm events near the vernal equinox from 2019 to 2023. We quantitatively analyze, for the first time, the impact of geomagnetic storms of different intensities on thermospheric density at satellite altitude and on satellite orbits, and we investigate the temporal and spatial characteristics of the density variations using the method of empirical orthogonal functions. The results are summarized as follows: ① The GRACE-FO accelerometer shows a significant response to moderate and severe geomagnetic storms, with the X-axis accelerometer data, the thermospheric density, and the satellite orbit decay rate all exhibiting “peak” phenomena. ② In the temporal dimension, variations in thermospheric density are closely correlated with the Dst index. In the spatial dimension, thermospheric density variations are generally larger in the Southern Hemisphere than in the Northern Hemisphere and become more pronounced with increasing latitude. ③ Thermospheric density variations in the sunlit region are more strongly affected by geomagnetic storms, with atmospheric temperature being one of the key factors influencing them.
Therefore, this study provides valuable insights into the spatio-temporal variations in thermospheric density at satellite altitude and into satellite orbit decay during geomagnetic storms of varying intensities, particularly around the equinox.
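The empirical orthogonal function (EOF) analysis mentioned above can be illustrated with a minimal pure-Python stand-in for an SVD (the field layout and names are assumptions): the leading spatial mode is obtained by power iteration on the spatial covariance of the density anomalies, and projecting the anomalies onto it gives the principal component time series.

```python
# Minimal sketch: leading EOF of a space-time anomaly field via power
# iteration on the spatial covariance matrix (illustrative stand-in for
# a full SVD-based EOF decomposition).
def leading_eof(field, iters=200):
    # field: list of time samples, each a list over spatial bins.
    nt, ns = len(field), len(field[0])
    means = [sum(row[j] for row in field) / nt for j in range(ns)]
    anom = [[row[j] - means[j] for j in range(ns)] for row in field]
    # Spatial covariance C = A^T A / nt of the anomalies.
    cov = [[sum(anom[t][i] * anom[t][j] for t in range(nt)) / nt
            for j in range(ns)] for i in range(ns)]
    v = [1.0] * ns
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(ns)) for i in range(ns)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Principal component: projection of each time sample onto the mode.
    pcs = [sum(anom[t][j] * v[j] for j in range(ns)) for t in range(nt)]
    return v, pcs
```

In the storm analysis, the spatial pattern v would show, e.g., the hemispheric asymmetry of the density response, while the principal component series would track the storm-time evolution against the Dst index.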
Three-dimensional sub-bottom profiling (3D SBP) compensates for the limited horizontal spatial coverage of two-dimensional sub-bottom profiling and makes it possible to obtain high-quality 3D information of the sub-bottom. At present, however, interference from scattered echoes in the sub-bottom is severe and the spatial changes of horizons are complex; traditional methods can only achieve two-dimensional horizon picking and have difficulty expressing the three-dimensional spatial distribution of the sub-bottom. Therefore, a three-dimensional reflector structure enhancement (RSE3D) algorithm based on plate-like and non-vertical structural features is proposed to improve horizon picking. Firstly, a volume data generation method that considers the spatial distribution of the sampling points is presented to generate 3D sub-bottom profile data. Then, the RSE3D algorithm is developed by approximating the formation structure as a plate structure and taking the non-vertical structural features into account; it can effectively suppress echo interference such as scattering and highlight the effective information. Finally, on the basis of the enhanced results, horizon picking is carried out using a threshold method. Experimental verification and analysis show that the index values of horizon picking are better than 80%, demonstrating the superiority of the proposed algorithm.
In coastal regions, particularly within 3 km of the coast, accurate estimation of sea surface height (SSH) has remained a significant challenge for satellite altimetry. This study investigates the fully-focused SAR (FFSAR) altimetry technique and proposes an SSH estimation algorithm based on contamination rejection and range compensation. Tide gauge data are introduced as independent observations to validate the altimetry performance of FFSAR in severely interfered coastal areas under different along-track sampling rates (20, 80, 200, and 600 Hz). The experimental results show that: ① In the highly contaminated region within 3 km of the coast, increasing the FFSAR sampling rate from 20 Hz to 200 Hz significantly improves data availability and accuracy. ② The standard deviations of the altimetry results obtained with the proposed algorithm against the validation data are reduced from 0.36 (TGWD), 0.31 (EEMSHAVEN), 0.68 (DENHELDER) and 0.17 m (IJMON) to 0.22, 0.22, 0.48 and 0.14 m, respectively, an improvement of over 20%. ③ At the 200 Hz sampling rate, the altimetry results from the proposed algorithm show better consistency with the validation data than the MWaPP and PP-OCOG algorithms.
The GNSS-acoustic (GNSS-A) positioning technique requires in-situ sound speed profile (SSP) measurements, which raises its cost and limits its real-time applicability. To tackle this issue, an SSP inversion model based on a single-exponential empirical temperature profile (SETP) was developed recently. This contribution proposes a novel SSP inversion model based on a double-exponential temperature profile (DETP) to improve the inversion precision. In addition, the proposed inversion model is augmented with prior constraints constructed from marine environment products. Using the Japanese long-term seafloor geodetic observations, the superiority of the proposed inversion model is thoroughly validated, with the inversion precision evaluated against in-situ SSPs. The root mean square error (RMSE) of the DETP-based SSP inversion over the whole water column is 5.54 m/s, better than the 6.92 m/s of the SETP-based model, particularly in the shallow and middle water layers. For water columns not exceeding 300 m depth, the mean bias, standard deviation (STD), and RMSE of the DETP-based SSP inversion are 1.76, 6.36, and 6.59 m/s, respectively, while those of the SETP-based inversion are 2.03, 7.94, and 8.19 m/s. For water columns from 300 m to 500 m, the mean bias, STD, and RMSE of the DETP-based inversion are 0.07, 3.18, and 3.18 m/s, respectively, while those of the SETP-based inversion are -2.76, 3.75, and 4.65 m/s. Moreover, seafloor geodetic positioning based on the proposed DETP is more accurate than that based on the SETP: the positioning mean bias and STD in the horizontal direction are better than 0.2 mm and 2 mm, respectively, while those in the vertical direction are better than 3 mm and 2 cm.
These results indicate that the proposed DETP-based SSP significantly improves the SSP inversion precision and supports centimeter-level positioning precision.
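The structure of a double-exponential temperature profile, and its conversion to sound speed, can be sketched as follows (illustrative only: the parameter names are assumptions, and the paper's actual DETP parameterization and inversion may differ; the sound speed conversion shown is the Mackenzie empirical formula, one common choice, not necessarily the one used in the paper):

```python
import math

def detp_temperature(z, t_deep, a1, h1, a2, h2):
    # Double-exponential temperature profile (parameter names illustrative):
    # a deep-water asymptote plus two exponential decay terms with scale
    # depths h1 and h2 (e.g. thermocline and near-surface layers).
    return t_deep + a1 * math.exp(-z / h1) + a2 * math.exp(-z / h2)

def mackenzie_sound_speed(t, s, z):
    # Mackenzie (1981) empirical sound speed (m/s): temperature t (deg C),
    # salinity s (PSU), depth z (m).
    return (1448.96 + 4.591 * t - 5.304e-2 * t ** 2 + 2.374e-4 * t ** 3
            + 1.340 * (s - 35.0) + 1.630e-2 * z + 1.675e-7 * z ** 2
            - 1.025e-2 * t * (s - 35.0) - 7.139e-13 * t * z ** 3)
```

The extra exponential term is what lets the DETP track both the sharp near-surface gradient and the slower thermocline decay, which the single-exponential SETP has to compromise between.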
Three-dimensional (3D) maps are crucial for construction safety early warning and long-term safety maintenance of tunnels. However, generating an accurate 3D point cloud map in tunnels characterized by sparse textures, rough structures, and dynamic interference remains challenging. This paper proposes a 3D tunnel mapping method that generates point cloud maps of extremely long and noisy scenes. First, a registration residual compensation model is proposed to eliminate the registration errors caused by rough surface structures: the K-means clustering method is used to identify non-planar surface structures, and compensation is carried out based on local region residuals. Then, a spatial constraint strategy based on view-field maximization is proposed to eliminate point cloud errors caused by absolute measurement deviations. To verify the performance of the proposed method, we conducted experiments during the secondary lining and pipeline laying stages in both drill-and-blast and shield tunnels. The results indicate that the proposed method outperforms FAST-LIO2, Faster-LIO, and LiLi-OM in both trajectory estimation and map accuracy. Additionally, ablation experiments were conducted to elucidate the contributions of the different models to 3D tunnel mapping.
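The K-means step used to separate planar from non-planar surface structures can be illustrated with a minimal one-dimensional clustering of registration residuals (an assumption-laden sketch, not the paper's implementation, which clusters richer surface features):

```python
# Minimal 1-D K-means sketch: split points into clusters by their
# registration residual magnitude, e.g. small (planar) vs large
# (non-planar) residuals.
def kmeans_1d(values, k=2, iters=50):
    vals = sorted(values)
    # Initialize centers at evenly spaced quantiles of the sorted data.
    centers = [vals[(2 * i + 1) * len(vals) // (2 * k)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        new_centers = []
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            new_centers.append(sum(members) / len(members)
                               if members else centers[j])
        if new_centers == centers:
            break
        centers = new_centers
    return centers, labels
```

Points falling in the high-residual cluster would then be the candidates for local-region residual compensation.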
In landslide susceptibility evaluation, the prevalent issue of sample category imbalance often skews evaluation outcomes in favor of the majority class, undermining the accuracy of landslide forecasts. Sample optimization is a pivotal means of mitigating these biases. Traditional sample optimization approaches concentrate on the characteristic disparities between positive and negative samples in the feature space, overlooking both the geographical disparities among samples and the intricate nonlinear relations between characteristic factors and landslide occurrence; such oversight may yield a biased and oversimplified representation of sample characteristics. Addressing this gap, the present study introduces a landslide susceptibility evaluation methodology that integrates sample optimization with joint consideration of spatial and feature dynamics. The method first employs an undersampling strategy based on geographical environment similarity criteria that incorporate spatial correlation; it then introduces a nonlinear synthetic oversampling technique to augment sample diversity; finally, a multi-grained cascade forest model is applied to predict landslide susceptibility. Taking Yibin city as the empirical case study, the efficacy of the proposed method is rigorously validated through statistical metrics, covering both model precision verification and susceptibility zoning analysis. Comparative evaluation against nine established methodologies shows that the proposed framework consistently achieves superior predictive accuracy across varied scenarios of positive sample deficiency, offering susceptibility zones with a higher degree of concordance with the real-world spatial distribution of landslide incidents.
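The synthetic oversampling idea can be illustrated with a simplified SMOTE-style sketch (linear interpolation only; the paper's technique is a nonlinear variant, and all names here are assumptions):

```python
import random

def synthetic_oversample(minority, n_new, seed=0):
    # Simplified SMOTE-style sketch: each synthetic sample is a random
    # interpolation between a minority sample and its nearest minority
    # neighbour in feature space.
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for _ in range(n_new):
        p = rng.choice(minority)
        nn = min((q for q in minority if q is not p),
                 key=lambda q: dist2(p, q))
        lam = rng.random()  # interpolation factor in [0, 1)
        out.append([x + lam * (y - x) for x, y in zip(p, nn)])
    return out
```

A nonlinear variant would replace the straight-line interpolation with a curve or kernel mapping so that synthetic samples better respect non-convex class boundaries, which is the limitation the paper's technique targets.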
Effective evaluation of the geometric positioning accuracy of satellite laser altimetry products is the basis and prerequisite for their subsequent applications. After the satellite enters orbit, the laser optical axis pointing and the platform stability change over time, leading to deviations in the ground positioning of the laser spot. Aimed at the working mode of the laser altimetry system on the Gou Mang satellite, this paper first determines the spot position on the laser footprint camera and the optical axis pointing, and proposes an ellipse fitting method based on the maximum gradient of spot energy (MG-EFM) to analyze the stability of the laser emission pointing after the satellite is in orbit. Then, the ground position of the laser footprint is calibrated using high-precision terrain data to analyze the geometric positioning accuracy of the laser. The experimental results show that: ① The MG-EFM method extracts the center of the laser spot with an accuracy better than 0.1 pixel, and can be used to monitor the stability of the laser optical axis pointing and to effectively identify short-term drift errors of the optical axis. ② The laser geometric positioning of the Gou Mang satellite over the Northeast China Tiger and Leopard National Park area is relatively stable. However, as the satellite measurement environment and observation time vary, the laser spot positioning exhibits time-varying error distribution characteristics; in particular, there is an inconsistency of about 10 m between the ground positions of laser spots observed during the day and at night. To meet the operational application requirements of satellite laser altimetry products, ground control data should be added to enhance the geometric stability of the products.
Flows in urban drainage networks are critical indicators of their operational efficiency and safety, and accurately forecasting them is crucial for risk mitigation, performance enhancement, and layout planning of the networks. Traditional flow forecasting methods typically overlook the complex multidimensional spatial dependencies between pipeline flows. This paper proposes a multi-view spatio-temporal graph convolutional network model that considers both the spatial proximity and the attribute similarity of network nodes. It constructs a nearest-neighbor-based graph and a flow-similarity-based graph, uses spatio-temporal graph convolutional networks to uncover the intrinsic dependencies, and applies an attention mechanism to merge features from multiple views for enhanced flow prediction. Experiments with historical flow data from an urban drainage network confirm the superior predictive capability of our model, with ablation studies validating the contributions of the different views.
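The two graph views can be sketched as adjacency constructions (illustrative; the actual graph definitions, the value of k, and the similarity threshold are assumptions):

```python
def knn_graph(coords, k):
    # Spatial-proximity view: symmetric adjacency connecting each node
    # to its k nearest neighbours by Euclidean distance.
    n = len(coords)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(coords[i], coords[j])))
        for j in order[:k]:
            adj[i][j] = adj[j][i] = 1.0
    return adj

def pearson(x, y):
    # Pearson correlation of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def similarity_graph(flow_series, thresh=0.8):
    # Attribute-similarity view: connect nodes whose historical flow
    # series are strongly correlated.
    n = len(flow_series)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if pearson(flow_series[i], flow_series[j]) >= thresh:
                adj[i][j] = adj[j][i] = 1.0
    return adj
```

Each view's adjacency would feed a separate spatio-temporal graph convolution branch, with the attention mechanism weighting the two branches' features.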
Vector polygon clipping is one of the important and commonly used basic functions in the field of GIS. The explosive growth of geospatial data has placed higher demands on the computational efficiency of traditional clipping algorithms, which are increasingly computation-intensive and data-intensive. In response, this article optimizes the data structure of the Vatti polygon clipping algorithm and applies GPU multi-core parallel technology to the construction of ordered scan beams. When the block size is 128 and the number of clipped polygon vertices reaches 3 276 800, the speedup of the Vatti algorithm under the intersection operator reaches 2.55 times. On this basis, the article further exploits computing resources through CPU multi-threaded parallel technology and proposes an optimization method called vertex segmentation hybrid parallel (VSHP), which divides the dataset by the number of vertices and achieves hybrid parallel acceleration. For a large-scale dataset, the main thread reads the data and divides it according to the number of vertices and threads, and then hands the partitions to the worker threads for calculation. During this process, the GPU is called in turn to filter by minimum bounding rectangle and to accelerate the construction of ordered scan beam segments. Finally, when all tasks are complete, the main thread merges and outputs the results. Experiments show that ideal acceleration is achieved with 4 threads, and a maximum speedup of 19.15 times is achieved with 16 threads under a dynamic scheduling strategy.
This article attempts to achieve low-energy desktop supercomputing through hybrid CPU-GPU parallel computing, aiming to provide a reference for high-performance acceleration of traditional vector polygon clipping algorithms on low-cost consumer hardware platforms and to reduce the cost of large-scale applications of geographic information data.
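The vertex-count-based division behind VSHP can be illustrated with a greedy load-balancing sketch (an assumption for illustration; the paper's exact partitioning rule may differ):

```python
def partition_by_vertices(polygons, n_threads):
    # Greedy sketch of a VSHP-style split: assign each polygon (largest
    # first) to the thread with the smallest accumulated vertex count,
    # so per-thread workloads stay balanced.
    buckets = [[] for _ in range(n_threads)]
    loads = [0] * n_threads
    for poly in sorted(polygons, key=len, reverse=True):
        i = loads.index(min(loads))
        buckets[i].append(poly)
        loads[i] += len(poly)
    return buckets
```

Balancing by vertex count rather than polygon count matters because scan-beam construction cost scales with the number of vertices, not the number of polygons.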
Vector geographic data can be shared and used only after their geometric positional accuracy has been reduced by decryption methods, yet none of the existing decryption methods can quantitatively analyze their own security or the utility of the decrypted data. This paper is the first to apply differential privacy to the decryption of vector geographic data, innovatively proposing a differential privacy-based method for vector geographic data decryption (DP-VGS) that combines an existing nonlinear transformation decryption model with differential privacy. Firstly, through the division and aggregation of sensitive regions and the allocation of the decryption security budget, regions with high sensitivity are made more secure after decryption. Secondly, a decryption model noise protection method based on function perturbation and the TrunLap mechanism (FM-TL) is designed to improve the utility of the decrypted data. Theoretical analysis demonstrates that DP-VGS satisfies differential privacy, meaning that the security and the error upper bound can be obtained from a given decryption security budget, and that DP-VGS is compatible with most existing decryption models. Experimental results on four real datasets show that DP-VGS achieves the goal of jointly optimizing the security and the availability of the decrypted data.
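The TrunLap-style noise can be illustrated with a bounded Laplace perturbation of vertex coordinates (a hedged sketch: the paper's exact mechanism, truncation rule, and budget allocation are more elaborate, and all names here are assumptions). Truncation gives the error upper bound that plain Laplace noise lacks:

```python
import math
import random

def trunlap_noise(scale, bound, rng):
    # Truncated-Laplace sketch: inverse-CDF sampling of Laplace(0, scale),
    # rejecting draws outside [-bound, bound]. A simple stand-in for a
    # TrunLap-style mechanism.
    while True:
        u = rng.random() - 0.5
        if abs(u) >= 0.5:
            continue  # guard against log(0)
        sgn = 1.0 if u >= 0 else -1.0
        x = -scale * sgn * math.log(1.0 - 2.0 * abs(u))
        if abs(x) <= bound:
            return x

def perturb_vertices(vertices, epsilon, sensitivity, bound, seed=0):
    # Perturb each coordinate with noise scaled to sensitivity/epsilon,
    # the standard Laplace-mechanism calibration.
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [(x + trunlap_noise(scale, bound, rng),
             y + trunlap_noise(scale, bound, rng)) for (x, y) in vertices]
```

A smaller epsilon (tighter security budget) yields a larger noise scale, which is how the region-wise budget allocation makes highly sensitive regions more secure after decryption.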
Multi-scale mesh river system matching is an important part of river system data integration, fusion and updating. Since existing mesh river system matching methods do not pre-identify the matching patterns and lack targeted matching strategies, this paper proposes a multi-scale mesh river system classification matching method based on graph neural networks. Firstly, the large-scale mesh river system is constructed as a graph structure, the matching patterns between it and the small-scale river system are labeled on the nodes, and the node features are computed. Then, a graph neural network is used to sample and aggregate the node features, establishing the mapping relationship between river segment features and matching patterns. Finally, according to the matching pattern category of each river segment, the corresponding matching strategy is adopted. The experimental results show that the proposed method effectively improves the matching accuracy of the mesh river system and has good theoretical and application value.
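The sample-and-aggregate step of such a graph neural network can be illustrated with a single GraphSAGE-style mean-aggregation layer (scalar weights for brevity; the actual network uses learned weight matrices, and all names here are assumptions):

```python
def sage_mean_layer(features, neighbors, w_self, w_neigh):
    # One GraphSAGE-style mean-aggregation step: each node's new feature
    # combines its own feature with the mean of its neighbours' features,
    # followed by a ReLU nonlinearity.
    out = []
    for i, f in enumerate(features):
        nb = neighbors[i]
        mean = [sum(features[j][k] for j in nb) / len(nb) if nb else 0.0
                for k in range(len(f))]
        h = [w_self * f[k] + w_neigh * mean[k] for k in range(len(f))]
        out.append([max(0.0, x) for x in h])
    return out
```

Stacking such layers lets each river segment node absorb the features of its connected segments, so the classifier sees the local drainage pattern rather than the segment in isolation.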
With the development of mixed reality (MR), a new type of map, the virtual-real fusion map, has emerged. Dynamic symbols are widely used in existing virtual-real fusion maps, but research on their usability for user cognition is lacking, and the adaptability of the symbol parameters needs to be verified. Taking emergency navigation as the application scenario, this study proposes a design method for virtual-real fusion dynamic symbols and uses eye-tracking technology to carry out visual cognition experiments, exploring appropriate strategies for dynamic symbol allocation at three levels: static versus dynamic modes; rotation, jumping and scaling change effects; and motion speed gradations. The experimental results show that dynamic symbols have cognitive effects similar to static symbols; among the three dynamic change modes of rotation, jumping and scaling, the scaling effect performs better in information processing and visual search comparison; and 2 to 4 speed gradations are most appropriate, since as the number of gradations increases, the difficulty of recognizing, processing and memorizing the symbols increases significantly.