A joint network of point cloud and multiple views for roadside objects recognition from mobile laser point clouds
FANG Lina, SHEN Guixi, YOU Zhilong, GUO Yingya, FU Huasheng, ZHAO Zhiyuan, CHEN Chongcheng
2021, 50(11): 1558-1573. doi: 10.11947/j.AGCS.2021.20210246
Abstract:
Accurately identifying roadside objects such as trees, cars, and traffic poles from mobile LiDAR point clouds is of great significance for applications such as intelligent transportation systems, navigation and location services, autonomous driving, and high-precision maps. In this paper, we propose a point-group-view network (PGVNet) that classifies roadside objects into trees, cars, traffic poles, and others by exploiting and fusing the high-level global features of multi-view images with the spatial geometric information of the point cloud. To reduce redundant information between similar views and highlight salient view features, PGVNet employs a hierarchical view-group-shape architecture, built on a pre-trained VGG backbone, which splits all views into groups according to their discriminative level; global significant features are then generated from the weighted group descriptors. Moreover, an attention-guided fusion network fuses the global features from the multi-view images with the local geometric features from the point cloud: the global view features are quantified and used as an attention mask to refine the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing the roadside objects. The proposed method was evaluated on five test datasets covering different urban scenes acquired by different mobile laser scanning systems. On these datasets, the four accuracy metrics precision, recall, quality, and F-score reach (99.19%, 94.27%, 93.58%, 96.63%) for trees, (94.20%, 97.56%, 92.02%, 95.68%) for cars, and (91.48%, 98.61%, 90.39%, 94.87%) for traffic poles. Experimental results and comparisons with state-of-the-art methods demonstrate that PGVNet can effectively identify roadside objects from mobile LiDAR point clouds and can provide data support for element construction and vectorization in high-precision map applications.
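The sketch below illustrates the attention-guided fusion idea described in the abstract: a global multi-view descriptor is converted into an attention mask that re-weights per-point local geometric features before the two streams are fused for classification. It is a minimal illustration only; the layer sizes, module structure, and pooling choice are assumptions and do not reproduce the authors' PGVNet implementation.

```python
# Hypothetical sketch of attention-guided fusion of multi-view and point-cloud
# features (dimensions and layers are assumptions, not the PGVNet code).
import torch
import torch.nn as nn

class AttentionGuidedFusion(nn.Module):
    def __init__(self, view_dim=512, point_dim=128, fused_dim=256):
        super().__init__()
        # project the global view descriptor to a per-channel attention mask
        self.mask_mlp = nn.Sequential(
            nn.Linear(view_dim, point_dim),
            nn.Sigmoid(),
        )
        # fuse the refined point-cloud code with the global view descriptor
        self.fuse_mlp = nn.Sequential(
            nn.Linear(point_dim + view_dim, fused_dim),
            nn.ReLU(),
        )

    def forward(self, point_feats, view_feat):
        # point_feats: (B, N, point_dim) local geometric features per point
        # view_feat:   (B, view_dim)     global descriptor from grouped views
        mask = self.mask_mlp(view_feat).unsqueeze(1)   # (B, 1, point_dim)
        refined = point_feats * mask                   # attention-refined local features
        pooled = refined.max(dim=1).values             # (B, point_dim) shape code
        fused = self.fuse_mlp(torch.cat([pooled, view_feat], dim=-1))
        return fused                                   # (B, fused_dim) feature for the classifier

# usage: fuse features for a batch of 4 objects with 1024 points each
fusion = AttentionGuidedFusion()
out = fusion(torch.randn(4, 1024, 128), torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])
```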