Remote driving vehicle terminals move rapidly through areas covered by multiple wireless networks and rely on heterogeneous network convergence and cooperation to obtain uninterrupted service. Because existing heterogeneous network decision algorithms cannot accommodate the fast-moving character of vehicle terminals or the demands of high-traffic data services, this thesis proposes a multi-service-type Fuzzy Analytic Hierarchy Process (FAHP) network selection algorithm for high-traffic data, and a speed-adaptive vertical handover algorithm based on an Improved Fuzzy Analytic Hierarchy Process combined with a Double Deep Q-Network (IFAHP-DDQN), so as to maintain the wireless connection and high-quality communication of the remote driving vehicle terminal during high-speed movement.

The multi-service-type FAHP network selection algorithm addresses the diversity of the terminal's application types, network service requirements, and data traffic demands. It improves on conventional multi-attribute decision making in three steps: it first pre-screens the candidate networks, discarding those that fail the attribute thresholds; it then computes service-type-specific network attribute weights by fuzzy analytic hierarchy analysis; finally, it constructs a reward function and ranks the remaining candidates with the Simple Additive Weighting (SAW) algorithm to select the best access network and execute the handover. Simulations show that the algorithm achieves better network performance.

Because the multi-service-type FAHP algorithm considers neither the decision gain nor the fast-moving character of the remote driving vehicle terminal, the thesis further proposes the IFAHP-DDQN speed-adaptive vertical handover algorithm, which combines fuzzy analytic hierarchy analysis with deep reinforcement learning. First, the algorithm uses the terminal's speed measurement function to obtain the vehicle's speed and adaptively compute the update interval of the candidate network set; this improves network discovery timing for high-speed users and reduces the "ping-pong effect" caused by untimely network response and unnecessary handovers. Second, it introduces the Double Deep Q-Network (DDQN), fitting the action-value function with a deep neural network and decoupling action selection from value evaluation between two value functions, so as to maximize the cumulative discounted reward of the network decision process. Finally, simulations verify that the algorithm further reduces the handover failure probability and the new-call blocking rate, improves throughput, and lowers the number of handovers.
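The three-step decision pipeline of the first algorithm can be illustrated with a minimal sketch. All attribute names, threshold values, and weights below are assumptions chosen for illustration; in the thesis the weights come from the FAHP pairwise-comparison procedure and differ per service type, whereas here they are simply taken as given.

```python
import numpy as np

# Hypothetical candidate networks: rows = networks, columns = attributes
# [bandwidth (Mbps), delay (ms), packet loss (%), cost (per MB)]
candidates = {
    "5G":   [100.0, 20.0, 0.5, 0.8],
    "LTE":  [40.0,  50.0, 1.0, 0.5],
    "WLAN": [60.0,  30.0, 2.0, 0.1],
}

# Step 1: pre-screen -- drop networks that violate attribute thresholds
# (illustrative thresholds: a floor on bandwidth, a ceiling on delay)
min_bandwidth, max_delay = 30.0, 60.0
feasible = {n: a for n, a in candidates.items()
            if a[0] >= min_bandwidth and a[1] <= max_delay}

# Step 2: service-type-specific weights (assumed to be produced by FAHP;
# e.g., a video-heavy remote-driving service weighting bandwidth most)
weights = np.array([0.45, 0.30, 0.15, 0.10])

# Step 3: SAW -- normalize each attribute column, then rank by weighted sum
names = list(feasible)
m = np.array([feasible[n] for n in names])
benefit = [True, False, False, False]   # higher-is-better vs lower-is-better
norm = np.empty_like(m)
for j in range(m.shape[1]):
    col = m[:, j]
    norm[:, j] = col / col.max() if benefit[j] else col.min() / col
scores = norm @ weights
best = names[int(scores.argmax())]
print(dict(zip(names, scores.round(3))), "-> access:", best)
```

The max/min normalization distinguishes benefit attributes (higher is better) from cost attributes (lower is better), which is the usual convention in SAW ranking.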
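The abstract does not state the exact rule that maps vehicle speed to the candidate-network-set update time. The following is a plausible minimal sketch under the assumption that the rescan interval scales with the expected cell residence time; every parameter name and default value is hypothetical.

```python
def candidate_set_update_interval(speed_mps: float,
                                  coverage_radius_m: float = 500.0,
                                  t_min_s: float = 0.5,
                                  t_max_s: float = 10.0,
                                  alpha: float = 0.1) -> float:
    """Scale the candidate-network rescan interval with the expected cell
    residence time (coverage_radius / speed): fast terminals rescan more
    often, so a target network is found before link quality collapses,
    while slow terminals rescan rarely, avoiding ping-pong handovers.
    All parameter names and defaults are illustrative assumptions."""
    if speed_mps <= 0:          # stationary terminal: rescan at the slowest rate
        return t_max_s
    residence_time_s = coverage_radius_m / speed_mps
    return min(t_max_s, max(t_min_s, alpha * residence_time_s))

# e.g. 30 m/s (~108 km/h) -> 0.1 * 500/30 ≈ 1.67 s between rescans
```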
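The decoupling of action selection and value evaluation described in the second algorithm is the standard Double DQN target: the online network with parameters θ_t selects the greedy action, while the target network with parameters θ_t⁻ evaluates it, which mitigates the over-estimation bias of vanilla DQN. In this setting the state s would plausibly encode the measured network attributes and the vehicle speed, the action a the choice of access network, and r a reward reflecting the network decision gain.

```latex
y_t \;=\; r_{t+1} \;+\; \gamma\, Q\!\Bigl(s_{t+1},\ \arg\max_{a} Q\bigl(s_{t+1}, a;\ \theta_t\bigr);\ \theta_t^{-}\Bigr)
```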