In recent years there have been many breakthrough research results in the field of artificial intelligence, and the technology of autonomous driving has consequently developed rapidly. Among the technologies related to autonomous driving, environmental perception is the key one: only when reliable perception information is available can the subsequent decision-making and planning of an autonomous vehicle be reasonable and dependable. Within this perception information, the drivable area and the objects on the road are the most important items; once these two kinds of information are obtained, a reliable and safe route can be planned for the autonomous vehicle.

Traditional drivable area extraction algorithms mainly extract road boundaries with an RGB camera and image processing. They run fast, and the update frequency of their results can approach the frame rate of the camera, but they have several disadvantages: an RGB camera is easily affected by ambient light, so in bad weather, at night and in similar circumstances the performance of camera-based algorithms declines sharply or fails completely; the field of view of an RGB camera is limited, which restricts the drivable area the system can extract; and not every drivable area has a clear, easily extractable road boundary, so relying only on boundary information easily leads to missed and misidentified drivable areas. Therefore, recent research tends to use a multi-line lidar as the main sensor and to extract the drivable area from the three-dimensional environmental information it captures, using characteristics such as height, height gradient and surface smoothness. This approach avoids the problems of the traditional methods, such as sensitivity to ambient light, a limited field of view, and missed or misrecognized areas, but its extraction accuracy is still low, and it remains difficult to provide a stable, reliable and comprehensive drivable area for an autonomous vehicle.
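To make the height-based criteria above concrete, the following Python sketch rasterises a lidar point cloud into a ground grid and keeps only cells whose internal height spread and neighbour-to-neighbour height jumps stay below thresholds. It is a minimal illustration written for this summary, not the algorithm developed in the thesis; the coordinate frame, cell size and thresholds are assumed values.

```python
import numpy as np

def drivable_cells(points, cell=0.2, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                   max_spread=0.15, max_gradient=0.30):
    """Mark grid cells as candidate drivable ground from a lidar point cloud.

    points       : (N, 3) array in the vehicle frame (x forward, y left, z up) -- assumed layout.
    cell         : grid resolution in metres (illustrative value).
    max_spread   : maximum allowed height spread inside one cell.
    max_gradient : maximum allowed ground-height jump between neighbouring cells.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    z_min = np.full((nx, ny), np.inf)
    z_max = np.full((nx, ny), -np.inf)

    # Bin every point into its grid cell and track the per-cell height extremes.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.minimum.at(z_min, (ix[ok], iy[ok]), points[ok, 2])
    np.maximum.at(z_max, (ix[ok], iy[ok]), points[ok, 2])

    observed = np.isfinite(z_min)
    flat = observed & ((z_max - z_min) < max_spread)    # small in-cell height spread

    # Reject cells whose ground height jumps sharply relative to a 4-neighbour
    # (border wrap-around of np.roll is ignored for brevity).
    z_ground = np.where(observed, z_min, 0.0)
    smooth = flat.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr_z = np.roll(z_ground, shift, axis=(0, 1))
        nbr_obs = np.roll(observed, shift, axis=(0, 1))
        jump = nbr_obs & (np.abs(z_ground - nbr_z) > max_gradient)
        smooth &= ~jump

    return smooth   # boolean (nx, ny) mask of candidate drivable cells
```

A real system would additionally need ground-plane fitting, temporal filtering and a treatment of occluded cells, all of which this sketch omits.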
Traditional object segmentation methods mainly use geometric features such as HOG, SIFT and other hand-crafted features to separate foreground points from background points in a two-dimensional image and then segment the objects of interest. Their advantages are fast processing and a transparent pipeline, but because the underlying sensor is still an RGB camera, they suffer from the same problems of sensitivity to ambient light and a limited field of view, and it is difficult for them to reach high accuracy in general environments. Therefore, most recent research relies on the 3D point cloud data provided by a multi-line lidar and uses deep learning, training deep neural network models with high precision and strong generalization ability to recognize and segment the objects in the environment. The current state-of-the-art methods achieve high accuracy, but their computational cost is too large and their overall speed too slow, so they are difficult to apply to autonomous driving systems with real-time requirements.

Based on a project of the robot research group of Jilin University, the research contents and achievements of this thesis are as follows:
(1) Aiming at the low accuracy or low speed of current drivable area extraction algorithms, this thesis proposes a new drivable area extraction algorithm that offers both precision and speed. The beam model and the artificial potential field method are applied to the 3D point cloud and to its 2D projection respectively to extract drivable area information, and the two results are then fused to obtain a more accurate drivable area divided into two safety levels, while the algorithm still runs fast (a sketch of the beam-model idea is given after this list).
(2) Aiming at the heavy computation and poor real-time performance of current object segmentation methods, this thesis improves a recent three-dimensional object segmentation algorithm with the help of the feature extraction and classification optimizations used in two-dimensional deep learning object detection, so as to increase the running speed and real-time performance of the segmentation algorithm while keeping its accuracy from dropping too much.
(3) Using the intelligent car and sensors provided by JLU Robot, real environment data were collected on the university campus, labelled with self-developed semi-automatic annotation software, and organized into a new small dataset; the performance of the algorithms in this thesis is verified by comparative experiments on this dataset.
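As a rough illustration of the beam-model idea mentioned in contribution (1), the snippet below casts evenly spaced beams from the vehicle position over a 2D occupancy grid and records the free range along each direction. Everything here (the grid representation, the ray-stepping scheme and all parameters) is an assumption made for illustration rather than the implementation described in the thesis, and the fusion with the artificial potential field is not shown.

```python
import numpy as np

def beam_free_ranges(occupancy, origin, cell=0.2, n_beams=72, max_range=30.0):
    """Cast evenly spaced beams from `origin` and return the free distance per beam.

    occupancy : 2D boolean array, True where a grid cell contains an obstacle.
    origin    : (row, col) index of the vehicle position in the grid.
    cell      : grid resolution in metres (illustrative value).
    """
    rows, cols = occupancy.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    ranges = np.full(n_beams, max_range)

    for k, a in enumerate(angles):
        r = 0.0
        while r < max_range:
            r += 0.5 * cell                              # coarse sub-cell stepping
            i = int(round(origin[0] + (r / cell) * np.sin(a)))
            j = int(round(origin[1] + (r / cell) * np.cos(a)))
            if not (0 <= i < rows and 0 <= j < cols) or occupancy[i, j]:
                ranges[k] = r                            # beam blocked or leaves the grid
                break
    return angles, ranges
```

Directions whose free range stays long correspond to open corridors; in the thesis these beam results are fused with the artificial-potential-field result and graded into two safety levels.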