Acta Geodaetica et Cartographica Sinica ›› 2020, Vol. 49 ›› Issue (12): 1630-1639. doi: 10.11947/j.AGCS.2020.20190516

• Cartography and Geoinformation •

Deep reinforcement learning based electric taxi service optimization

YE Haoyu1, TU Wei2,3,4,5, YE Hehui6, MAI Ke2,3,4, ZHAO Tianhong2,4, LI Qingquan1,2,3,4   

1. State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430079, China;
    2. Department of Urban Informatics, School of Architecture and Urban Planning, Shenzhen University, Shenzhen 518060, China;
    3. Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, Shenzhen 518060, China;
    4. Guangdong Key Laboratory of Urban Informatics, Shenzhen University, Shenzhen 518060, China;
    5. Key Laboratory for Geo-Environmental Monitoring of Great Bay Area, MNR, Shenzhen 518060, China;
    6. Software College, Minjiang University, Fuzhou 350108, China
  • Received: 2019-12-16; Revised: 2020-06-07; Published: 2020-12-25
  • Supported by:
    The National Key Research and Development Program of China (No. 2019YFB2103104); The Natural Science Foundation of Guangdong Province (No. 2019A1515011049); The Basic Research Projects of Shenzhen Technology Innovation Commission (No. JCYJ20170412105839839)

Abstract: With the promotion of electric vehicles, electric taxis have been put into demonstration operation. Compared with internal combustion engine vehicles, electric taxis spend more time recharging, which reduces drivers' willingness to adopt them. Reinforcement learning is well suited to the sequential decision-making process of taxi drivers. This paper presents a double deep Q-learning network (DDQN) model to simulate the operation of electric taxis. According to the real-time state of each taxi, the DDQN chooses the optimal action to execute. After training, a globally optimal electric taxi service strategy is obtained, which optimizes the taxi service. Using real-world taxi travel data, an experiment is conducted on Manhattan Island in New York City, USA. Results show that, compared with the baseline methods, DDQN reduces the waiting time for charging and the rejection rate by 70% and 53%, respectively, and increases taxi drivers' income by about 7%. Moreover, a parameter sensitivity analysis indicates that the charging rate and the number of vehicles have a greater impact on drivers' income than the battery capacity. When the charging rate reaches 120 kW, electric taxis achieve the best performance. The government should therefore build more fast charging stations to improve the revenue of electric taxis.

Key words: deep reinforcement learning, electric taxi, DDQN, taxi service strategies
