
Table of Contents

    17 February 2025, Volume 54 Issue 1
    Geodesy and Navigation
    Positioning performance analysis and evaluation for standalone BDS receivers
    Chuang SHI, Chenlong DENG, Lei FAN, Fu ZHENG, Tao ZHANG, Yuan TIAN, Guifei JING, Jie MA
    2025, 54(1):  1-13.  doi:10.11947/j.AGCS.2025.20240127

    China's BeiDou navigation satellite system (BDS) has completed its global constellation and begun to provide positioning, navigation, and timing (PNT) services to global users. Following the early principle of multi-system compatibility and interoperability, all mainstream GNSS receivers on the current market support multi-system satellite signal reception. To improve the autonomy and security of BDS PNT services, government departments have issued guidance on accelerating the research, development, promotion, and application of domestically produced standalone BDS positioning terminals. Since a standalone BDS receiver can no longer rely on other systems' signals to aid signal acquisition, its hardware and positioning performance may change, so evaluating the navigation and positioning performance of domestic standalone BDS receivers is an urgent task. In this paper, the M300 Pro standalone BDS receiver is selected for a series of test and evaluation experiments. The hardware performance of the receiver, including time to first fix (TTFF), signal quality, and observation noise, is evaluated first. Then the positioning performance, including station coordinate estimation, single point positioning (SPP), precise point positioning (PPP), static baseline solutions, and real-time kinematic (RTK) positioning, is analyzed and discussed using the self-developed BDS precise data processing software platform GSTAR (geodetic spatio-temporal data analysis and research software). The experimental results show that the cold-start TTFF of the selected standalone BDS receiver is below 40 s, the average ratio of intact observation data exceeds 95%, and the standard deviations of pseudorange and carrier phase measurement noise are 0.0517 m and 0.0034 cycles, respectively, which is basically consistent with the hardware performance of multi-GNSS receivers in China and abroad. With the selected standalone BDS receiver, the single-day station coordinate solutions achieve a precision of 3.5 mm in the horizontal components and 9.9 mm in the up component; single-epoch pseudorange SPP achieves a precision of 2.208 m horizontally and 2.502 m vertically; kinematic PPP with ambiguity resolution (PPP-AR) is better than 3 cm horizontally and 5 cm in the up component, with a convergence time within 27 min; the single-day repeatability for baselines shorter than 20 km is better than 0.7 cm horizontally and 1.8 cm vertically; and short-baseline RTK positioning errors do not exceed 3 cm horizontally and 5 cm vertically. The domestically produced standalone BDS receiver thus already possesses the initial capability to independently provide reliable, high-precision positioning services.

    Analysis of heavy rainstorm in Beijing in 2023 based on GNSS observations
    Fei YANG, Yingying WANG, Zhicai LI, Boyao YU, Junli WU, Yunchang CAO, Shu ZHANG
    2025, 54(1):  14-25.  doi:10.11947/j.AGCS.2025.20230548

    At the end of July 2023, Beijing and its surrounding areas were severely impacted by an extreme rainstorm, the combined result of typhoons “Doksuri” and “Khanun” and geographic factors. Precipitable water vapor (PWV) is one of the key factors influencing rainfall, and exploring its relationship with rainfall at different stages of a rainstorm is of great significance for establishing rainstorm warning models. In this study, 34 GNSS stations, 34 meteorological stations, 1 radiosonde station, and ERA5 datasets in and around Beijing were utilized, and high-accuracy GNSS-PWV data from July 25 to August 1, 2023 were obtained using GAMIT 10.71. An improved interpolation algorithm was proposed to retrieve gridded PWV data with high spatiotemporal resolution. The accuracy of the GNSS-PWV was then evaluated from multiple perspectives using the radiosonde and ERA5 data as references. Finally, the relationship between PWV variation and extreme rainfall, and between the tropospheric delay gradient and the rainfall trend, was analyzed in both time and space in combination with rainfall data from the meteorological stations. Results showed that the correlation coefficient between GNSS-PWV and RS-PWV was up to 0.99, with a root mean square error (RMSE) of about 0.52 mm and a bias of about -0.52 mm. In the comparison with ERA5 data, the RMSE of GNSS-PWV is less than 6 mm with biases ranging from -4 to 1.5 mm, and the gridded PWV has an RMSE of about 4 mm and a bias of about 1 mm. The spatiotemporal analysis shows that PWV increases sharply before the rainstorm, keeps increasing during it, and does not dissipate immediately after it ends; this behavior is related to the joint influence of “Doksuri” and “Khanun”. In addition, the tropospheric delay gradient at each station points consistently northeastward, in agreement with the southwest-to-northeast transport of high PWV values and with the actual precipitation route.
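
    For readers unfamiliar with the retrieval step, GNSS-PWV is conventionally obtained by scaling the zenith wet delay (ZWD) by a dimensionless factor Π that depends on the weighted mean temperature Tm. The sketch below illustrates only this standard conversion, not the paper's improved interpolation algorithm; the constants are commonly used refractivity coefficients, and the function name is illustrative.

    ```python
    import numpy as np

    RHO_W = 1000.0   # density of liquid water, kg/m^3
    R_V = 461.5      # specific gas constant of water vapour, J/(kg*K)
    K2P = 16.52      # refractivity constant k2', K/hPa
    K3 = 3.776e5     # refractivity constant k3, K^2/hPa

    def pwv_from_zwd(zwd_m, tm_k):
        """Convert zenith wet delay (m) to precipitable water vapour (m)."""
        # The factor 1e8 combines the 1e6 refractivity scaling with the hPa->Pa conversion.
        pi = 1e8 / (RHO_W * R_V * (K3 / tm_k + K2P))
        return pi * zwd_m

    # e.g. ZWD = 0.25 m at Tm = 280 K gives roughly 40 mm of PWV
    print(1000.0 * pwv_from_zwd(0.25, 280.0))
    ```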

    GNSS/SINS integrated navigation method considering the geometric property of bias states
    Yarong LUO, Chi GUO, Wei OUYANG, Jingnan LIU
    2025, 54(1):  26-39.  doi:10.11947/j.AGCS.2025.20240232

    In current designs of the invariant extended Kalman filter (EKF), bias states are not included in the kinematic equations with geometric properties, because the kinematic system of a strapdown inertial navigation system (SINS) with biases no longer has the group-affine property. This article constructs a group and a group action that satisfy the equivariance property of the kinematic equations containing bias states, which naturally handles inertial-based integrated navigation systems containing gyroscope and accelerometer biases and theoretically reduces the linearization error of the navigation state error dynamics. Although invariant-EKF-based integrated navigation has received widespread attention, there has been little research worldwide on GNSS/SINS tightly coupled integrated navigation based on equivariant errors in the world frame. Therefore, this article proposes a GNSS/SINS tightly coupled integrated navigation system based on equivariant errors in the world frame. Unlike the currently popular invariant EKF, the equivariant error constructed in this paper is based on a symmetry that appropriately includes all states in the group structure. The experimental results show that the proposed filtering algorithm has better transient response under different large misalignment angles than the right-invariant EKF. At the same time, the robust model in the tightly coupled integrated navigation in the world frame effectively improves filtering robustness.
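
    For context, the group-affine property referenced above is standardly defined as follows (after Barrau and Bonnabel); appending constant bias states to the SINS state breaks this identity, which is what motivates the equivariant construction. A minimal statement in LaTeX:

    ```latex
    % A dynamics \dot{\chi} = f_t(\chi) on a matrix Lie group G is group affine iff
    \[
      f_t(\chi_1 \chi_2) = f_t(\chi_1)\,\chi_2 + \chi_1\, f_t(\chi_2)
                           - \chi_1\, f_t(\mathrm{Id})\,\chi_2
      \qquad \forall\, \chi_1, \chi_2 \in G,
    \]
    % in which case the invariant error evolves autonomously, giving the
    % log-linear error dynamics that the invariant EKF exploits.
    ```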

    Simulation and accuracy analysis of real-time underwater gravity measurement data
    Hongfa WAN, Shanshan LI, Xinxing LI, Haopeng FAN, Xuli TAN
    2025, 54(1):  40-51.  doi:10.11947/j.AGCS.2025.20230488

    Gravity-assisted inertial navigation is one of the important means of achieving long-term, autonomous, covert, and precise navigation for underwater vehicles. Acquiring real-time measurement data from underwater gravity sensors and compensating for measurement errors are key issues in applying gravity navigation in practical engineering. Owing to various limitations on underwater gravity measurement experiments, real-time underwater gravity measurements that conform to actual physical characteristics are lacking, leading to misconceptions about their characteristics and accuracy levels, which in turn affects the performance of gravity-assisted inertial navigation. The proposed simulation of real-time underwater gravity measurement data is based on the performance characteristics of existing marine gravimeters and inertial navigation components; it simulates and processes real-time underwater gravity observations, evaluates their accuracy, and reproduces in a laboratory environment the physical process of acquiring underwater gravity data in real time. In the experiments, the effects of different velocity errors, latitude errors, and azimuth angles on gravity measurement at different latitudes were analyzed, and a 24 h underwater dynamic gravity measurement process was simulated. The results show that, under the selected inertial component and gravimeter parameters, the measurement accuracy of the free-air gravity anomaly was 3.5 mGal. This provides effective support for further validating and optimizing the key technologies and algorithm models of gravity-assisted inertial navigation.
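
    One concrete way velocity, latitude, and azimuth errors enter dynamic gravimetry is through the Eötvös correction. The sketch below evaluates the standard spherical-Earth approximation of that term (a simplification; the paper's simulation model is more complete), so the sensitivity of the correction to each input can be probed directly:

    ```python
    import numpy as np

    OMEGA = 7.2921150e-5   # Earth rotation rate, rad/s
    R_E = 6371000.0        # mean Earth radius, m

    def eotvos_mgal(v_ms, lat_deg, az_deg):
        """Eotvos correction for a moving gravimeter, in mGal (1 mGal = 1e-5 m/s^2)."""
        lat = np.radians(lat_deg)
        az = np.radians(az_deg)   # azimuth measured from north
        a = 2.0 * OMEGA * v_ms * np.cos(lat) * np.sin(az) + v_ms**2 / R_E
        return a * 1e5

    # e.g. 5 m/s due east at 30 deg N is about 63 mGal, so even small
    # velocity or azimuth errors translate into mGal-level gravity errors
    print(eotvos_mgal(5.0, 30.0, 90.0))
    ```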

    Photogrammetry and Remote Sensing
    A block-wise polynomial distortion model for airborne composite large-format camera
    Zuxun ZHANG, Xinbo ZHAO, Yansong DUAN
    2025, 54(1):  52-63.  doi:10.11947/j.AGCS.2025.20240273

    The airborne composite large-format camera has gained widespread popularity in remote sensing and surveying due to its high resolution and wide coverage. However, various complex distortions arise during imaging owing to factors such as manufacturing processes and assembly precision, which degrade image quality and the accuracy of geometric processing. This paper proposes a block-wise polynomial distortion model tailored to the characteristics of mosaic aerial frame cameras. The core idea is to divide the imaging plane into multiple sub-image blocks, guided by the residual vector field from a Brown distortion model calibration, and to describe the distortion within each sub-block with a polynomial, thereby effectively correcting various complex distortions. Additionally, a cloud-controlled calibration scheme is designed to solve for the block-wise polynomial parameters. Taking the AFC-900 camera developed by the Beijing Institute of Aerospace Mechatronics as the research subject, cloud-controlled calibration was conducted in Zhaodong, followed by production verification in three survey areas: Zhaodong, Jiexiu, and Miluo. The results demonstrate that the proposed distortion model can correct the distortions of the AFC-900 camera to within 0.5 pixels, and the accuracy of the production outcomes meets the specifications for large-scale mapping at 1∶500 and 1∶2000.
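
    To make the two-stage idea concrete: a global Brown model is fitted first, and the systematic residuals remaining in each sub-block are then absorbed by a low-order 2-D polynomial. A minimal numpy sketch under assumed conventions (tangential-term sign conventions vary between texts, and the paper's block partitioning and polynomial order are not specified in the abstract):

    ```python
    import numpy as np

    def brown_distortion(x, y, k1, k2, k3, p1, p2):
        """Brown's model: radial + tangential distortion at (x, y) w.r.t. the principal point."""
        r2 = x * x + y * y
        radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return dx, dy

    def fit_block_polynomial(xs, ys, residuals, order=2):
        """Least-squares fit of a 2-D polynomial to the Brown-model residuals of one sub-block."""
        terms = [xs**i * ys**j for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.stack(terms, axis=1)
        coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
        return coef
    ```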

    Registration of aerial images and LiDAR point clouds based on distance field and plane constraints
    Yongjun ZHANG, Changjun ZHU, Siyuan ZOU, Xinyi LIU, Qingzhou MAO, Yi WAN
    2025, 54(1):  64-74.  doi:10.11947/j.AGCS.2025.20240122

    In the field of Earth observation, airborne optical images and airborne light detection and ranging (LiDAR) point clouds are the main data sources for acquiring geospatial information, and accurate geometric registration is the prerequisite for fusing the two. This paper proposes a modified registration method for airborne LiDAR point clouds and aerial images based on distance fields and plane constraints. The method has two stages: single-image registration based on a line distance field, and bundle block adjustment based on line and plane constraints. In single-image registration, line features are extracted from the aerial image and the airborne LiDAR point cloud, a distance field is constructed from the image line features, and the point cloud line features are projected onto the image plane. The global cost of the projected point cloud line features in the distance field is minimized by a progressive robust solution, registering each single image with the LiDAR point cloud. In the bundle block adjustment, key frames are selected based on the density of the line feature distribution; conjugate line features in the key frames are then matched to extract control points serving as horizontal and elevation constraints. Moreover, the distance between each image tie point and the nearest horizontal plane is used as an elevation constraint. The experimental results show that the registration accuracy of the proposed method is close to the average point spacing. The method achieves robust registration even with poor initial values, and its registration accuracy is significantly superior to the iterative closest point (ICP) registration method and a cross-modal matching registration strategy.
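
    The single-image stage can be prototyped with an off-the-shelf distance transform: the field stores, for every pixel, the distance to the nearest image line feature, so the alignment cost of the projected LiDAR lines reduces to a lookup. A sketch assuming a binary edge mask and already-projected pixel coordinates (the paper's progressive robust weighting is omitted):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def line_distance_field(edge_mask):
        """Per-pixel distance to the nearest image line feature (edge_mask: bool, True on lines)."""
        return distance_transform_edt(~edge_mask)

    def registration_cost(dist_field, proj_uv):
        """Mean field value at the projected LiDAR line points; approaches 0 at alignment."""
        u = np.clip(np.round(proj_uv[:, 0]).astype(int), 0, dist_field.shape[1] - 1)
        v = np.clip(np.round(proj_uv[:, 1]).astype(int), 0, dist_field.shape[0] - 1)
        return dist_field[v, u].mean()
    ```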

    Urban and rural road surface extraction network based on road topological correlation features
    Yanjun WANG, Xuchao TANG, Cheng WANG, Hengfan CAI
    2025, 54(1):  75-89.  doi:10.11947/j.AGCS.2025.20240124

    Deep learning methods have become the mainstream technology for classifying and extracting urban and rural road networks from remote sensing image data. However, existing methods suffer from issues such as occlusion by neighboring objects (such as vegetation and buildings), long model training times, and high computational complexity, and most of them focus only on independent targets such as the road surface, edge lines, or center lines, resulting in low accuracy of road extraction results. To fully utilize the constraints provided by the spatial topological relationships between road edges and road surfaces, this paper proposes a road surface extraction network, CAS-DeepNet, based on road topological correlation features. First, based on the DeepLabV3+ network architecture, the lightweight MobileNetV2 feature extraction network is improved, and an edge enhancement module based on residual connections is embedded to capture road edge information. Second, a CS-ASPP structure based on dense connections is designed to improve model performance. Then, a channel attention mechanism is introduced to effectively fuse the multiple branch channels in the image and improve the feature representation ability. Finally, road connectivity constraints are constructed from the topological association information of road edges to enhance the completeness of the extracted road network. Experimental results on datasets such as CHN6-CUG and DeepGlobe show that CAS-DeepNet outperforms popular methods such as U-Net++, DeepLabV3+, D-LinkNet, RoadNet, ACNet, and SDUNet in precision, recall, F1 score, and overall accuracy, significantly improving the accuracy and completeness of road network extraction results. The refined road surface extraction method based on road topological correlation features can provide basic support for natural resource surveying and monitoring, as well as geospatial environment perception and modeling.
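
    The channel attention step mentioned above can be illustrated with the generic squeeze-and-excitation form, in which globally pooled channel statistics produce per-channel weights for the fused branches; the abstract does not detail CAS-DeepNet's exact variant, so the PyTorch module below is only indicative:

    ```python
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel attention for re-weighting fused channels."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            hidden = max(channels // reduction, 1)
            self.fc = nn.Sequential(
                nn.Linear(channels, hidden), nn.ReLU(inplace=True),
                nn.Linear(hidden, channels), nn.Sigmoid())

        def forward(self, x):                   # x: (N, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))     # squeeze: global average pool -> (N, C)
            return x * w[:, :, None, None]      # excite: channel-wise re-weighting
    ```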

    A lightweight rotation-invariant network for LiDAR-based place recognition
    Zhenghua ZHANG, Guoliang CHEN
    2025, 54(1):  90-103.  doi:10.11947/j.AGCS.2025.20230302

    Accurate place recognition using LiDAR is critical for achieving global localization in domains such as autonomous driving and robot navigation. While state-of-the-art methods focus on enhancing feature encoding capability, they often neglect the essential requirement of maintaining rotation invariance in the place recognition process. These methods also suffer from challenges such as large model sizes and dependence on complex preprocessing procedures. To address these challenges, this paper introduces RIP-Net, a super-lightweight neural network designed for rotation-invariant place recognition of point clouds. First, RIP-Net gathers point clusters of local regions and constructs basic rotation-invariant features. Second, residual structures and an attention mechanism are employed to enhance the perception of local regions by incorporating multi-scale information. Finally, a generalized-mean pooling function aggregates the global feature, and place recognition is achieved based on feature distance. Experimental results on four large-scale point cloud datasets demonstrate that RIP-Net not only achieves rotation invariance but also outperforms existing methods on various accuracy metrics. Moreover, RIP-Net has only 0.3 million parameters, significantly fewer than existing methods. The experiments also demonstrate that RIP-Net can achieve accurate place recognition directly on large-scale raw point clouds without any data preprocessing. These findings underscore the practical value and promising applications of the proposed RIP-Net method.
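
    The aggregation step is standard generalized-mean (GeM) pooling, which interpolates between average pooling (p=1) and max pooling (p→∞); a minimal PyTorch version over per-point local features (the tensor layout is an assumption):

    ```python
    import torch

    def gem_pool(x, p=3.0, eps=1e-6):
        """Generalized-mean pooling: (mean(x^p))^(1/p) over the point dimension."""
        # x: (batch, channels, n_points) local features -> (batch, channels) global descriptor
        return x.clamp(min=eps).pow(p).mean(dim=-1).pow(1.0 / p)

    # Place recognition then ranks database descriptors by distance to the query,
    # e.g. torch.cdist(queries, database) for pairwise Euclidean feature distances.
    ```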

    Multi-branch network assessment and dynamic change analysis of wide-area landslide susceptibility
    Jichao LÜ, Rui ZHANG, Xu HE, Ruikai HONG, Age SHAMA, Guoxiang LIU
    2025, 54(1):  104-122.  doi:10.11947/j.AGCS.2025.20240014

    In response to the issue that convolutional neural networks (CNNs) in landslide susceptibility assessment overly focus on specific factors due to data channel stacking, this study proposes a multi-branch data fusion model for landslide susceptibility assessment. The model integrates multi-source remote sensing data features through a multi-branch structure and an adaptive weighting mechanism, and then leverages deep CNNs to fully extract the semantic information of the evaluation factors for accurate landslide susceptibility assessment. The southeastern Qinghai-Tibet Plateau was selected as a typical study area, and comparative analyses against random forest, a shallow CNN, and ResNet101 showed that the proposed multi-branch network offers superior performance in wide-area landslide susceptibility assessment, outperforming the existing models in accuracy, precision, recall, F1 score, area under the curve (AUC), and frequency ratio accuracy, with respective values of 0.88, 0.89, 0.92, 0.90, 0.92, and 0.97. Building on these results, this study further investigates the intrinsic relationship between fluctuations in environmental factors, such as vegetation and rainfall, and changes in the landslide susceptibility index by combining the landslide susceptibility assessment results of five consecutive years, and explores the temporal and spatial variations of the landslide susceptibility index using the coefficient of variation. The results indicate that over the past five years the Minjiang-Daduhe River Basin, Yalong River Basin, and Yarlung Tsangpo River Basin have experienced landslide risk that first increased and then decreased, influenced by the normalized difference vegetation index (NDVI) and local rainfall variations. In contrast, the Jinsha River Basin and Nujiang-Lancang River Basin have shown smaller fluctuations in vegetation and rainfall, with landslide susceptibility generally remaining at medium to high risk levels. The proposed model can serve as a reference for landslide risk assessments in similar regions.
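
    The temporal diagnostic used above, the coefficient of variation, is simply the per-pixel ratio of standard deviation to mean across the yearly susceptibility maps; a small numpy sketch (the array layout is assumed):

    ```python
    import numpy as np

    def susceptibility_cv(stack):
        """Per-pixel coefficient of variation (std/mean) across yearly susceptibility maps."""
        # stack: (n_years, H, W) landslide susceptibility index, e.g. five consecutive years
        mean = stack.mean(axis=0)
        std = stack.std(axis=0)
        return np.divide(std, mean, out=np.zeros_like(std), where=mean > 0)
    ```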

    Remote sensing scene retrieval method based on scene graph
    Jiayi TANG, Xiaochong TONG, Chunping QIU, Yaxian LEI, Yi LEI, Haoshuai SONG
    2025, 54(1):  123-135.  doi:10.11947/j.AGCS.2025.20230439

    Currently, most remote sensing scene retrieval is based on deep-feature similarity matching of remote sensing images, which makes it difficult to directly represent the relationships between scene entities and lacks a direct expression of spatial structure and semantics; it therefore cannot meet users' complex retrieval requirements for remote sensing scenes. This paper proposes a remote sensing scene retrieval method based on scene graphs, which uses a graph neural network to map the scene graph of each remote sensing scene to a graph-level feature vector; the matching results of these graph feature vectors then yield the remote sensing scene retrieval results. To train the graph neural network, a dataset of 2380 remote sensing scene-scene graph pairs was created, covering 24 entity types, 8 topological spatial relationship types, and 9 directional relationship types, providing a structured representation of spatial relationships in remote sensing scenes with complete topological and orientation information. The experiments show that scene-graph-based retrieval achieves high retrieval accuracy for entity categories, topological relationships, and orientation relationships. In particular, compared with several representative remote sensing scene retrieval methods, the retrieval accuracy indicators for topological and orientation relationships obtained by this method are greatly improved.
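
    Once the graph neural network has mapped each scene graph to a graph-level vector, retrieval reduces to nearest-neighbour search in the embedding space. The sketch below assumes cosine similarity as the matching metric, which the abstract does not specify:

    ```python
    import numpy as np

    def retrieve(query_vec, db_vecs, k=5):
        """Rank database scene-graph embeddings by cosine similarity to the query embedding."""
        q = query_vec / np.linalg.norm(query_vec)
        d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
        sim = d @ q                       # cosine similarity per database graph
        order = np.argsort(-sim)[:k]      # indices of the top-k matching scenes
        return order, sim[order]
    ```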

    A lightweight remote sensing image change detection network utilizing spatio-temporal difference enhancement and adaptive feature fusion
    Liangxiong GONG, Xinghua LI, Yuanming CHENG, Xingyou ZHAO, Renping XIE, Honggen WANG
    2025, 54(1):  136-153.  doi:10.11947/j.AGCS.2025.20240299

    To address the limitations of existing remote sensing image change detection methods, such as insufficient utilization of multi-temporal difference features and inadequate multi-scale feature fusion, a lightweight change detection network named SEAFNet is proposed, which integrates spatio-temporal difference enhancement with adaptive feature fusion. The paper designs a lightweight spatio-temporal difference enhancement module employing a dual-branch structure with semantic change perception and spatial change perception; this module combines a semantic adaptive enhancement mechanism and a mixed attention mechanism to enhance the spatial-spectral differences in the bi-temporal feature maps. To further refine the edges of change regions, feature maps at different scales are optimized through an edge refinement residual module. The bi-directional feature fusion pyramid structure is also improved by using learnable weight parameters to quantify the importance of features at different scales, achieving effective multi-scale feature fusion. Comparative experiments with ten mainstream change detection methods on the WHU-CD, LEVIR-CD, SYSU-CD, and SECOND datasets demonstrate that SEAFNet outperforms these methods in both qualitative and quantitative analyses, and in the balance between network complexity and accuracy.
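
    The improved bi-directional pyramid assigns each scale a learnable non-negative weight before summation, in the spirit of BiFPN's fast normalized fusion; a minimal PyTorch module (SEAFNet's exact formulation may differ):

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeightedFusion(nn.Module):
        """Learnable non-negative weights quantify each scale's importance before fusion."""
        def __init__(self, n_inputs):
            super().__init__()
            self.w = nn.Parameter(torch.ones(n_inputs))

        def forward(self, feats):              # feats: list of same-shape feature maps
            w = F.relu(self.w)                 # keep weights non-negative
            w = w / (w.sum() + 1e-4)           # fast normalized fusion
            return sum(wi * f for wi, f in zip(w, feats))
    ```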

    Self-supervised learning based urban functional zone classification integrating optical remote sensing images and OSM data
    Jialing LI, Ji QI, Weipeng LU, Chao TAO
    2025, 54(1):  154-164.  doi:10.11947/j.AGCS.2025.20240067

    Rapid and accurate classification of urban functional zones (UFZs) provides a scientific basis for urban planning and management and helps realize sustainable urban development. Although optical remote sensing images provide rich visual information, they cannot fully reflect social attributes and are prone to semantic ambiguity. Therefore, a growing number of studies jointly use data containing urban social attributes (e.g., OSM data) and optical remote sensing images to obtain complementary effects. This idea faces two main challenges. First, there are data structure differences between optical images and OSM data, and traditional fusion methods lack sufficient interaction and fusion in the feature extraction stage, making it difficult for a model to fully learn the complementary advantages of the two. Second, as more data modalities are used for model learning, more manually labeled data are required to train a stable model, which significantly increases the labor cost of applying UFZ classification models. In response to these problems, this paper proposes a self-supervised urban functional zone classification method that integrates optical remote sensing images and OSM data. On the one hand, the OSM data are unified with the optical images in spatial distribution and data structure, and feature extraction and interactive fusion are carried out in a unified multimodal fusion encoding architecture to learn generalized cross-modal representations. On the other hand, a self-supervised model is pre-trained on large-scale unlabeled data and then transferred to the specific UFZ classification task with a small amount of labeled data, reducing labor cost. UFZ classification experiments in three large-scale regions, Beijing, Los Angeles, and London, demonstrate the performance advantages of the proposed method over existing mainstream methods.
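
    One common recipe for the transfer step described above is to reuse the pre-trained multimodal encoder and train only a lightweight classification head on the small labelled set; the sketch below shows that generic recipe (whether the paper freezes the encoder or fine-tunes it end to end is not stated in the abstract, and the encoder interface is assumed):

    ```python
    import torch.nn as nn

    class UFZClassifier(nn.Module):
        """Transfer step: a pre-trained multimodal encoder plus a small labelled-data head."""
        def __init__(self, encoder, feat_dim, n_classes, freeze_encoder=True):
            super().__init__()
            self.encoder = encoder             # self-supervised image+OSM encoder (assumed)
            if freeze_encoder:
                for p in self.encoder.parameters():
                    p.requires_grad = False    # train only the head on the labelled set
            self.head = nn.Linear(feat_dim, n_classes)

        def forward(self, image, osm):
            return self.head(self.encoder(image, osm))
    ```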

    Cartography and Geoinformation
    Symbols of narrative maps: compositional structure and working mechanism
    Shiliang SU, Zichun LI, Qingyun DU, Qianqian LI, Mengjun KANG, Min WENG
    2025, 54(1):  165-181.  doi:10.11947/j.AGCS.2025.20240175

    Nowadays, maps, as important media for spatial practice, participate widely and deeply in social construction. Under such circumstances, narrative maps have become an academic frontier in contemporary cartography. However, owing to the significant differences in theoretical paradigms and representation mechanisms between narrative maps and “scientific” maps, the theoretical and methodological underpinnings of “scientific” map symbols do not work for narrative maps. To fill these gaps, this study first constructs, with reference to the basic theoretical principles of modern semiotics, the symbol system for narrative maps from three aspects: structure, semantics, and pragmatics. Specifically, we unfold the visual variables of the different types of symbols in narrative maps, analyze the semantic characteristics of the symbols, and explore their intertextual relationships. The working mechanism of the symbol system is then unraveled in two major respects. On the one hand, the grammar rules by which narrative map “texts” aggregate meanings are proposed with reference to structuralist symbol theory. On the other hand, guided by the “context” theory of social semiotics, the regulatory mechanism of context is clarified by highlighting the roles of intertextual context, situational context, and cultural context. This paper is believed to provide new theoretical insights into narrative cartography.

    A road intersection recognition method in crowdsourced trajectory data by fusing visual features and motion features
    Jianbo TANG, Zhiyuan HU, Ju PENG, Heyan XIA, Junjie DING, Yuyu ZHANG, Xiaoming MEI
    2025, 54(1):  182-193.  doi:10.11947/j.AGCS.2025.20240101

    With the rapid development of mobile positioning technology, crowdsourced vehicle trajectory data has become an important data source for constructing and updating road network maps in real time. Road intersections are the key nodes of a road network in path planning, and accurately identifying them in trajectory data is an important basis for constructing navigation road maps from crowdsourced trajectories. Existing intersection recognition methods based on crowdsourced trajectory data mainly fall into motion feature-based, visual feature-based, and deep learning-based methods. Because intersections vary in shape and size and the density of crowdsourced trajectory data is heterogeneous, a single strategy still struggles to extract intersections accurately and completely across different data scenarios (such as areas with sparse data or with densely distributed intersections), leading to missed or incorrect recognition. Therefore, based on the idea of combinatorial optimization, this paper proposes a road intersection recognition method for crowdsourced trajectory data that fuses visual features and motion features. The method first extracts vehicle motion features to recognize candidate road intersections, and then mimics the human visual cognitive process, fusing motion and visual features to recognize intersections in various complex scenes. Experimental results on trajectory datasets from Chengdu and Wuhan show that, compared with representative existing methods, the proposed method significantly improves the precision and recall of road intersection recognition.
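
    As one example of the motion features involved, sharp heading changes along a trajectory are a classic cue for intersection candidates. A minimal numpy sketch of such a turning-event detector (the threshold and data layout are illustrative; the paper's feature set and visual-fusion stage are richer than this):

    ```python
    import numpy as np

    def turning_points(track, angle_thresh_deg=45.0):
        """Flag trajectory points where the heading changes sharply, a motion cue for intersections."""
        # track: (n, 2) sequence of projected x, y positions for one vehicle
        v = np.diff(track, axis=0)
        heading = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
        turn = np.abs((np.diff(heading) + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
        return np.where(turn > angle_thresh_deg)[0] + 1            # indices into track
    ```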

    Crowdsourcing extraction method for refined lane-level road information by integrating public on-board images with GNSS trajectories
    Zhengyang CAO, Huazu ZHANG, Zilong ZHAO, Heng QI, Luliang TANG
    2025, 54(1):  194-205.  doi:10.11947/j.AGCS.2025.20240246

    High-precision and up-to-date road maps are crucial for autonomous driving and smart transportation. However, existing road extraction methods face challenges such as high data collection costs and long update cycles, failing to meet the high spatio-temporal resolution requirements of intelligent transportation systems. On-board CCD cameras and GNSS sensors, commonly equipped on public vehicles, provide low-cost and widespread crowdsourced spatio-temporal data, effectively compensating for the shortcomings of traditional mapping methods. In this paper, we propose a low-cost and efficient crowdsourced method for extracting fine-grained lane information by integrating public on-board images with GNSS trajectories. First, lane markings are detected in the on-board images using a cross-layer refinement network. Second, we propose an approach to identify the absolute positions of lane markings by transforming them from perspective space to real-world space. Finally, a fine-grained lane information extraction module integrating crowdsourced images with GNSS trajectories is designed, generating a high-precision, up-to-date, and semantically rich lane-level map. Experiments on a real-world dataset from Shanghai, China, show that the generated lane-level map achieves an accuracy of 87.43% within a 1 m range. These results indicate that the proposed method holds significant promise as a novel, low-cost, up-to-date, and wide-coverage crowdsourced mapping approach, offering a short-cycle and cost-effective solution for acquiring refined lane information.
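
    The perspective-to-ground step can be sketched with a planar homography: four image-to-ground correspondences (e.g., from camera calibration, under a flat-road assumption) define a transform that maps detected lane-marking pixels to metric coordinates in the vehicle frame, to which the GNSS pose of the vehicle is then added. The OpenCV snippet below uses placeholder correspondences; it is not the paper's exact procedure:

    ```python
    import numpy as np
    import cv2

    # Four image points of ground features and their vehicle-frame coordinates (metres);
    # the values here are placeholders standing in for a real calibration.
    img_pts = np.float32([[420, 700], [860, 700], [980, 950], [300, 950]])
    gnd_pts = np.float32([[-1.8, 20.0], [1.8, 20.0], [1.8, 5.0], [-1.8, 5.0]])

    H = cv2.getPerspectiveTransform(img_pts, gnd_pts)   # image plane -> ground plane
    lane_px = np.float32([[[640, 820]]])                # a detected lane-marking pixel
    xy_vehicle = cv2.perspectiveTransform(lane_px, H)   # metres in the vehicle frame
    # Composing with the vehicle's GNSS pose yields the absolute lane-marking position.
    ```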