[1] BORJI A. Boosting bottom-up and top-down visual features for saliency estimation[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE, 2012: 438-445.
[2] NOTHDURFT H C. Salience of feature contrast[M]//Neurobiology of Attention. Amsterdam: Elsevier, 2005: 233-239.
[3] HENDERSON J M, HOLLINGWORTH A. High-level scene perception[J]. Annual Review of Psychology, 1999, 50(1): 243-271.
[4] CADUFF D, TIMPF S. On the assessment of landmark salience for human navigation[J]. Cognitive Processing, 2008, 9(4): 249-267.
[5] LIAO Hua, DONG Weihua, PENG Chen, et al. Exploring differences of visual attention in pedestrian navigation when using 2D maps and 3D geo-browsers[J]. Cartography and Geographic Information Science, 2017, 44(6): 474-490.
[6] LIAO Hua, DONG Weihua. An exploratory study investigating gender effects on using 3D maps for spatial orientation in wayfinding[J]. ISPRS International Journal of Geo-Information, 2017, 6(3): 60.
[7] OHM C, MÜLLER M, LUDWIG B, et al. Where is the landmark? Eye tracking studies in large-scale indoor environments[C]//Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research (in conjunction with GIScience 2014). Vienna, Austria: CEUR Workshop Proceedings, 2014: 47-51.
[8] LIAO Hua, DONG Weihua, HUANG Haosheng, et al. Inferring user tasks in pedestrian navigation from eye movement data in real-world environments[J]. International Journal of Geographical Information Science, 2019, 33(4): 739-763.
[9] WANG Shuihua, TIAN Yingli. Indoor signage detection based on saliency map and bipartite graph matching[C]//Proceedings of 2011 IEEE International Conference on Bioinformatics and Biomedicine Workshops (BIBMW). Atlanta, GA, USA: IEEE, 2011: 518-525.
[10] DAI Yuchao, ZHANG Jing, HE Mingyi, et al. Salient object detection from multi-spectral remote sensing images with deep residual network[J]. Journal of Geodesy and Geoinformation Science, 2019, 2(2): 101-110.
[11] BROUWER W H, WATERINK W, VAN WOLFFELAAR P C, et al. Divided attention in experienced young and older drivers: lane tracking and visual analysis in a dynamic driving simulator[J]. Human Factors, 1991, 33(5): 573-582.
[12] VIDAL R, RAVICHANDRAN A. Optical flow estimation & segmentation of multiple moving dynamic textures[C]//Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). San Diego, CA, USA: IEEE, 2005: 516-521.
[13] JENSEN S S. Driving patterns and emissions from different types of roads[J]. Science of the Total Environment, 1995, 169(1/2/3): 123-128.
[14] LIU Jingnan, ZHAN Jiao, GUO Chi, et al. Data logic structure and key technologies on intelligent high-precision map[J]. Journal of Geodesy and Geoinformation Science, 2020, 3(3): 1-17.
[15] DOSHI A, TRIVEDI M. Investigating the relationships between gaze patterns, dynamic vehicle surround analysis, and driver intentions[C]//Proceedings of 2009 IEEE Intelligent Vehicles Symposium. Xi'an, China: IEEE, 2009: 887-892.
[16] PALINKO O, KUN A L, SHYROKOV A, et al. Estimating cognitive load using remote eye tracking in a driving simulator[C]//Proceedings of 2010 Symposium on Eye-Tracking Research & Applications. Austin, TX, USA: ACM, 2010: 141-144.
[17] RECARTE M A, NUNES L M. Effects of verbal and spatial-imagery tasks on eye fixations while driving[J]. Journal of Experimental Psychology: Applied, 2000, 6(1): 31-43.
[18] PALAZZI A, ABATI D, CALDERARA S, et al. Predicting the driver's focus of attention: the DR(eye)VE project[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(7): 1720-1733.
[19] 吴晓峰. 基于驾驶员工作负荷的公路线形安全性评价[D]. 西安: 长安大学, 2009. WU Xiaofeng. Highway geometry safety evaluation based on driver workload[D]. Xi'an: Chang'an University, 2009.
[20] WERNEKE J, VOLLRATH M. What does the driver look at? The influence of intersection characteristics on attention allocation and driving behavior[J]. Accident Analysis & Prevention, 2012, 45: 610-619.
[21] DENG Tao, YANG Kaifu, LI Yongjie, et al. Where does the driver look? Top-down-based saliency detection in a traffic driving environment[J]. IEEE Transactions on Intelligent Transportation Systems, 2016, 17(7): 2051-2062.
[22] SHASHUA A, GDALYAHU Y, HAYUN G. Pedestrian detection for driving assistance systems: single-frame classification and system level performance[C]//Proceedings of 2004 IEEE Intelligent Vehicles Symposium. Parma, Italy: IEEE, 2004: 1-6.
[23] ZHONG Zhun, LEI Mingyi, CAO Donglin, et al. Class-specific object proposals re-ranking for object detection in automatic driving[J]. Neurocomputing, 2017, 242: 187-194.
[24] MILANÉS V, LLORCA D F, VILLAGRÁ J, et al. Intelligent automatic overtaking system using vision for vehicle detection[J]. Expert Systems With Applications, 2012, 39(3): 3362-3373.
[25] RISACK R, KLAUSMANN P, KRÜGER W, et al. Robust lane recognition embedded in a real-time driver assistance system[C]//Proceedings of 1998 IEEE International Conference on Intelligent Vehicles. Piscataway, NJ, USA: IEEE, 1998: 35-40.
[26] JEONG S G, KIM C S, LEE D Y, et al. Real-time lane detection for autonomous vehicle[C]//Proceedings of 2001 IEEE International Symposium on Industrial Electronics. Pusan, Korea (South): IEEE, 2001: 1466-1471.
[27] PAULO C F, CORREIA P L. Automatic detection and classification of traffic signs[C]//Proceedings of the 8th International Workshop on Image Analysis for Multimedia Interactive Services. Santorini, Greece: IEEE, 2007: 11.
[28] LOBO J M, JIMÉNEZ-VALVERDE A, REAL R. AUC: a misleading measure of the performance of predictive distribution models[J]. Global Ecology and Biogeography, 2008, 17(2): 145-151.
[29] DENG Tao, YAN Hongmei, LI Yongjie. Learning to boost bottom-up fixation prediction in driving environments via random forest[J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 19(9): 3059-3067.
[30] PETERS R J, IYER A, ITTI L, et al. Components of bottom-up gaze allocation in natural images[J]. Vision Research, 2005, 45(18): 2397-2416.
[31] HORREY W J, WICKENS C D, CONSALUS K P. Modeling drivers' visual attention allocation while interacting with in-vehicle technologies[J]. Journal of Experimental Psychology: Applied, 2006, 12(2): 67-78.
[32] TAWARI A, KANG B. A computational framework for driver's visual attention using a fully convolutional architecture[C]//Proceedings of 2017 IEEE Intelligent Vehicles Symposium (IV). Los Angeles, CA, USA: IEEE, 2017: 887-894.
[33] XIE Yuan, LIU Lifeng, LI Cuihua, et al. Unifying visual saliency with HOG feature learning for traffic sign detection[C]//Proceedings of 2009 IEEE Intelligent Vehicles Symposium. Xi'an, China: IEEE, 2009: 24-29.
[34] MOHANDOSS T, PAL S, MITRA P. Visual attention for behavioral cloning in autonomous driving[C]//Proceedings of the 11th International Conference on Machine Vision (ICMV 2018). Munich, Germany: SPIE, 2019: 361-371.
[35] HORREY W J, WICKENS C D, CONSALUS K P. Modeling drivers' visual attention allocation while interacting with in-vehicle technologies[J]. Journal of Experimental Psychology: Applied, 2006, 12(2): 67-78.
[36] EYRAUD R, ZIBETTI E, BACCINO T. Allocation of visual attention while driving with simulated augmented reality[J]. Transportation Research Part F: Traffic Psychology and Behaviour, 2015, 32: 46-55.
[37] 毛征宇, 刘中坚. 一种三次均匀B样条曲线的轨迹规划方法[J]. 中国机械工程, 2010, 21(21): 2569-2572, 2577. MAO Zhengyu, LIU Zhongjian. A trajectory planning method for cubic uniform B-spline curve[J]. China Mechanical Engineering, 2010, 21(21): 2569-2572, 2577.
[38] 李泳波. 基于RANSAC的道路消失点自适应检测算法[J]. 中国科技信息, 2017(13): 80-82, 13. LI Yongbo. An adaptive detection algorithm for road vanishing points based on RANSAC[J]. China Science and Technology Information, 2017(13): 80-82, 13.
[39] MOGHADAM P, STARZYK J A, WIJESOMA W S. Fast vanishing-point detection in unstructured environments[J]. IEEE Transactions on Image Processing, 2012, 21(1): 425-430.
[40] RASMUSSEN C. Grouping dominant orientations for ill-structured road following[C]//Proceedings of 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC, USA: IEEE, 2004: I.
[41] KONG Hui, AUDIBERT J Y, PONCE J. Vanishing point detection for road detection[C]//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL, USA: IEEE, 2009: 96-103.
[42] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.
[43] CORDTS M, OMRAN M, RAMOS S, et al. The cityscapes dataset for semantic urban scene understanding[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 3213-3223.
[44] MAULUD D, ABDULAZEEZ A M. A review on linear regression comprehensive in machine learning[J]. Journal of Applied Science and Technology Trends, 2020, 1(4): 140-147.