Xinjiang's cotton planting area is extensive, and plastic-film mulched planting has caused serious residual-film pollution in its cotton fields. Residual-film recovery involves a harsh operating environment and a heavy workload, so intelligent, automated, and unmanned recovery is its main development direction, and visual navigation for film-recovery machinery is an important part of it. This thesis takes visual navigation for post-autumn residual-film recovery as its research object, proposes three path detection algorithms for the different detection targets of post-autumn recovery, builds the steering control part, and finally forms a complete visual navigation system for post-autumn residual-film recovery. The feasibility and effectiveness of the system were verified through field tests. The main research contents are as follows:

(1) To extract visual navigation paths for the segmented post-autumn residual-film recovery operation, the effects of different machine learning algorithms and different texture features on stubble detection are discussed. First, three types of texture features (gray-level co-occurrence matrix, gray-level run-length matrix, and local binary pattern) are extracted from three classes of sample images: stubble, residual film, and broken stalk leaves between rows. Second, Support Vector Machine, BP neural network, and Random Forest models are trained to classify the sample images. Finally, the effects of wavelet-transform texture features and fused texture features on stubble detection are studied. For the detected stubble targets, feature points are extracted and the navigation path is obtained by least-squares fitting.

(2) Because machine learning algorithms are relatively slow at stubble detection, the faster YOLOv3 detection network from deep learning is studied, and the improved YOLOv3 network 
is discussed for navigation path detection. Traditional YOLOv3 performs poorly on small targets, so its Darknet-53 detection framework is improved, and the prediction anchors are re-clustered with k-means++ so that the new anchors match the improved framework. The stubble data set is completed by segment-wise labeling. For all detected stubble targets, a mean-based denoising method removes false detection points, feature points are extracted from the denoised stubble, and the navigation path is obtained by least-squares fitting.

(3) For visual navigation path extraction in the combined post-autumn residual-film recovery operation, the segmentation performance of the Unet semantic segmentation model on cotton-stalk rows is discussed. First, the cotton-stalk rows facing the tractor are labeled; second, model training and testing are completed based on the Unet network, and the trained network can effectively detect the cotton-stalk areas in the image; finally, the successfully segmented stalk-row areas are denoised by connected-domain area filtering, and the contour points of the denoised stalk-row area facing the tractor are fitted to obtain the final navigation line.

(4) For the tractor selected for the experiments, the steering actuator and sensor mounting parts were designed in 3D software, fabricated by 3D printing, and assembled on the tractor. The fuzzy PID control algorithm was simulated in MATLAB, then implemented on an Arduino controller that communicates with the path detection program on the computer. A field test of the navigation system was carried out at the factory test field of the North Campus of Shihezi University, using three path detection algorithms: BP neural network, improved YOLOv3, and Unet. The experiments verified that, at a low gear and under good lighting, the 
tractor under all three path detection algorithms can realize autonomous navigation and driving, which proves that the visual navigation system proposed in this thesis is feasible.
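All three detection pipelines above end the same way: feature points extracted from the detected stubble or stalk rows are fitted into a navigation line by least squares. A minimal sketch of that final fitting step (the function name and the sample points are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def fit_navigation_line(points):
    """Fit a navigation line y = k*x + b to detected feature points
    (e.g. stubble or stalk-row centroids in pixel coordinates) by
    ordinary least squares. Depending on the camera geometry, the
    roles of x and y may need to be swapped (fitting x as a function
    of the image row is common for near-vertical crop rows)."""
    pts = np.asarray(points, dtype=float)
    # polyfit with deg=1 returns [slope, intercept] of the
    # least-squares line through the points.
    k, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return k, b

# Hypothetical feature points scattered around the line y = 2x + 1.
points = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.0)]
k, b = fit_navigation_line(points)
```

The fitted slope and intercept then feed the steering controller as the lateral/heading deviation of the vehicle from the navigation line.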