With the development of technology, autonomous driving has moved from experimental research into practical deployment. Its advantages appear in many respects, including fewer traffic-accident casualties and less traffic congestion. Because autonomous driving spans a wide range of disciplines, this paper concentrates on environment perception. Existing perception schemes usually combine many kinds of sensors; here, a single front-facing camera serves as the only sensor, which reduces hardware cost and places greater weight on image-processing capability to improve driving performance. On this basis, the paper designs and implements a deep-learning-based visual perception system for an autonomous vehicle and completes autonomous driving tasks both in a physical environment and in virtual simulation. The main work of the paper is as follows:

First, an experimental vehicle was built with the Jetson Nano developer kit, and the visual perception system was implemented on the ROS (Robot Operating System) framework. The system uses the single front camera for environment perception: a YOLOv4 network detects and recognizes traffic signs, and a ResNet-18 network predicts the centerline of the traffic lane; both results are converted into vehicle control commands by a PID (Proportional-Integral-Derivative) controller. During network training, K-means clustering was used to adjust the anchor-box sizes of the YOLOv4 network, and the DropBlock algorithm was applied to accelerate network convergence. A traffic-sign dataset of 10,000 images in Pascal VOC format was also constructed. The finished system performs lane keeping and traffic-sign detection automatically, allowing the experimental vehicle to recognize traffic signs and drive the test route entirely on its own.

Second, a deep reinforcement learning (DRL) algorithm for lane keeping was proposed and realized in the CARLA simulation environment, on a test route consisting of a straight lane, a C-shaped bend, and an elbow bend. The algorithm adopts DDQN (Double Deep Q-Network) so that the vehicle trains itself: the image from the single front camera is fed into the neural network, which estimates the action value of each control command. The reinforcement learning environment was designed, and the performance of DDQN was compared with that of DQN (Deep Q-Network); the experiments show that DDQN alleviates the over-estimation problem of DQN and trains faster.

Finally, a DRL algorithm for automatic parking into a known parking space was proposed and realized in the CARLA simulation environment. The algorithm adopts the Actor-Critic (AC) method so that the vehicle trains itself: the image from the single front camera is fed into the neural network, which outputs the action policy. The reinforcement learning environment was designed, and the performance of AC was compared with that of DQN; the experiments show that AC trains faster.
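
The PID control unit mentioned above maps the perception outputs to driving commands. The following is a minimal sketch of such a controller, assuming the ResNet-18 head outputs a normalized lateral offset of the lane centerline; the gains, time step, and interfaces are illustrative assumptions, not the thesis implementation.

```python
# Minimal PID steering sketch (illustrative; gains and interfaces are assumptions).
# The lane-centerline predictor is assumed to output a lateral offset in [-1, 1].

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Proportional-Integral-Derivative law: u = Kp*e + Ki*integral(e) + Kd*de/dt
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = PIDController(kp=0.8, ki=0.0, kd=0.15, dt=0.05)  # hypothetical gains

def control_from_centerline(offset):
    """Map the predicted centerline offset to a steering command in [-1, 1]."""
    steering = pid.step(offset)            # positive offset -> steer back toward center
    return max(-1.0, min(1.0, steering))   # clamp to the actuator range
```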
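
For the anchor-box adjustment, a common approach is K-means clustering on the ground-truth box sizes with 1 − IoU as the distance metric. The sketch below illustrates that procedure under stated assumptions; the number of anchors (k = 9) and the box array are illustrative, not values taken from the thesis.

```python
# Sketch of K-means anchor clustering with IoU-based assignment, as commonly used
# to re-fit YOLO anchors to a custom dataset (illustrative assumptions throughout).

import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating all boxes as if centered at the origin."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, None, 0] * boxes[:, None, 1] + \
            anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each ground-truth box to the anchor with the highest IoU,
        # then move each anchor to the mean size of its assigned boxes.
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# boxes: N x 2 array of (width, height) from the traffic-sign labels (illustrative)
# anchors = kmeans_anchors(boxes, k=9)
```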
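
The over-estimation issue addressed by DDQN comes from letting one network both select and evaluate the next action. The following sketch contrasts the two bootstrap targets; the PyTorch-style network interfaces and the discount factor are illustrative assumptions, not the thesis implementation.

```python
# Sketch contrasting the DQN and Double DQN (DDQN) targets (illustrative assumptions).

import torch

def dqn_target(reward, next_state, done, target_net, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action,
    # which tends to over-estimate action values.
    q_next = target_net(next_state).max(dim=1).values
    return reward + gamma * q_next * (1.0 - done)

def ddqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    # DDQN: the online network selects the greedy action, the target network
    # evaluates it, decoupling selection from evaluation.
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)
    q_next = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * q_next * (1.0 - done)
```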
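
The Actor-Critic method used for automatic parking can be summarized by a one-step update in which the critic's TD error weights the actor's policy gradient. The sketch below illustrates this under assumed interfaces (discrete driving actions, a shared optimizer); it is not the thesis implementation.

```python
# Minimal one-step Actor-Critic update sketch (illustrative assumptions throughout).

import torch
import torch.nn.functional as F

def actor_critic_step(actor, critic, optimizer, state, action, reward,
                      next_state, done, gamma=0.99):
    # Critic estimates state values; the TD error serves as the advantage.
    value = critic(state).squeeze(-1)
    with torch.no_grad():
        next_value = critic(next_state).squeeze(-1)
        td_target = reward + gamma * next_value * (1.0 - done)
    advantage = td_target - value

    # Actor outputs logits over discrete driving actions; `action` is a LongTensor
    # of chosen action indices.
    log_prob = torch.log_softmax(actor(state), dim=-1).gather(
        1, action.unsqueeze(1)).squeeze(1)

    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = F.mse_loss(value, td_target)

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```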