In recent years, civil aviation transportation volumes of passengers, cargo, and mail have kept increasing. The rising transportation pressure makes airport management a challenging task. In surface operations, aircraft spend the longest time taxiing through the taxiway system, so it is necessary to study the taxiway system in order to increase airport capacity and reduce the pressure on airport controllers. Considering security and efficiency requirements, each airport is designed with a specific taxiway structure. Nevertheless, frequently used taxiway crossings are always the hot spots of an airport. When there is a risk of conflict at an intersection, an aircraft is supposed to stop and wait to maintain the minimum safe separation. The controller has to spend considerable time monitoring aircraft, notifying traffic, and issuing instructions, which reduces the traffic flow. The rise of artificial intelligence provides a new idea for the intelligent management of taxiway crossings. The deep Q-learning network (DQN) algorithm has shown advantages in dealing with nonlinear dynamic programming problems in recent research. Inspired by these works, this article models the hot-spot control of an airport as an intelligent decision-making problem that can be solved by a deep Q-network. An agent was trained for aircraft taxiing speed control; it is expected to keep the aircraft at a safe distance from each other while keeping the traffic flow through the intersection zone efficient. Based on a model of the Mianyang Nanjiao Airport taxiway system, this paper simulates aircraft circulating on the taxiway and tests the method for speed control of a single crossing and for the coordinated control of two similar crossings. Experiments were carried out at different levels of taxiing traffic pressure. The results show that the DQN trained for single-crossing traffic can reach a success rate of 99.48%; compared with the random-control and non-control strategies, the best optimization effect is 98.68%. When the two crossings are controlled jointly, the success rate reaches 99.58%, and the best optimization effect is 99.13% compared with the random-control and non-control strategies.
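To make the approach described above concrete, the following is a minimal sketch of how a DQN-based taxiing speed controller for a single crossing could be framed. The state layout, action set, network size, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal DQN sketch for taxiway-crossing speed control (illustrative assumptions only).
import random
import torch
import torch.nn as nn

STATE_DIM = 4   # assumed state: [own distance to crossing, own speed, other aircraft's distance, other speed]
N_ACTIONS = 3   # assumed discrete speed commands: decelerate / hold speed / accelerate
GAMMA = 0.99    # discount factor (assumed)
EPSILON = 0.1   # exploration rate (assumed)

class QNetwork(nn.Module):
    """Small MLP mapping a taxiing state to Q-values for each speed command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy choice among the discrete speed commands."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step on a single transition (replay buffer omitted for brevity).
    The reward would penalize separation violations at the crossing and reward throughput."""
    q_value = q_net(state.unsqueeze(0))[0, action]
    with torch.no_grad():
        target = reward + (1.0 - float(done)) * GAMMA * target_net(next_state.unsqueeze(0)).max()
    loss = nn.functional.mse_loss(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

The two-crossing coordination case summarized in the abstract could extend this sketch by enlarging the state to cover aircraft approaching both intersections and the action space to issue a speed command per crossing; the details of that design are left to the body of the paper.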