测绘学报 (Acta Geodaetica et Cartographica Sinica), 2022, Vol. 51, Issue (6): 811-828. DOI: 10.11947/j.AGCS.2022.20220152
PNT intelligence and intelligent PNT
LIU Jingnan1,2, LUO Yarong1, GUO Chi1,2, GAO Kefu1,2
Received: 2022-02-28
Revised: 2022-04-11
Online: 2022-06-20
Published: 2022-07-02
About the first author: LIU Jingnan (1943-), male, professor, academician of the Chinese Academy of Engineering; his main research interests are satellite geodesy methods and data processing, and satellite navigation methods, data processing and applications. E-mail: jnliu@whu.edu.cn
Abstract: Positioning, navigation and timing, abbreviated as PNT, are spatio-temporal position concepts closely tied to economic and social activities, formed through humans' long-term perception and cognition of the relationship between the universe and human survival. PNT is also the intelligence of perceiving and cognizing spatio-temporal position that matter, energy and information on Earth have evolved over hundreds of millions of years, called PNT intelligence or spatio-temporal intelligence. Intelligence itself is the sum of the behavioral abilities of seeking advantage and avoiding harm that living organisms have evolved, through inheritance and evolution over generations, in order to adapt to their environment and survive; it can be called natural intelligence. This paper analyzes the characteristics of natural intelligence, which rests on the material basis of living organisms, and on that basis discusses the characteristics of the PNT intelligence of living organisms; it then summarizes the latest achievements, trends and insights from research on the PNT intelligence of living organisms. Interactive intelligence and PNT intelligence, among others, played a key driving role in the evolution of living organisms from lower to higher forms, especially in the evolution of Homo sapiens and the formation of human civilization. With the deepening and cross-integration of research on natural intelligence, PNT intelligence and artificial intelligence, the development of PNT technology has also entered the stage of intelligent PNT. This paper focuses on the concept, connotation and development trends of intelligent PNT, and discusses the latest progress and influence of its hot directions of intelligentization from two aspects: PNT applications and the construction of spatio-temporal information infrastructure. On this basis, some thoughts on and prospects for the development of intelligent PNT are also presented.
LIU Jingnan, LUO Yarong, GUO Chi, GAO Kefu. PNT intelligence and intelligent PNT[J]. Acta Geodaetica et Cartographica Sinica, 2022, 51(6): 811-828.