Agricultural spraying robots are an important part of intelligent plant protection. Compared with manual spraying and traditional plant-protection machinery, a spraying robot can effectively improve pesticide application efficiency, significantly reduce labor costs, and protect both the ecological environment and the safety of operators. To achieve these goals, the spraying robot must detect orchard scene targets in real time and distinguish targets from non-targets, so that it sprays only on targets. In this paper, to realize autonomous target-oriented variable-rate spraying for an orchard spraying robot, a deep learning method is used to segment orchard scene images, and a target variable-rate spray system is designed around the segmented fruit-tree targets. The main research contents are as follows:

(1) A visual perception spray system was designed based on the orchard spraying robot platform. Color and depth information of orchard scene targets was acquired in real time by a visual sensor. A remote computer controlled the on-board embedded computer, which processed the real-time target information to complete semantic segmentation of the orchard scene and sent the required spray instructions to the main controller, thereby realizing spray operation control of the orchard spraying robot.

(2) The original DeepLab V3+ semantic segmentation algorithm is difficult to deploy on the orchard spraying robot because of its large number of parameters, heavy computation, and information loss. An improved DeepLab V3+ semantic segmentation algorithm was therefore proposed. The lightweight MobileNet V2 network was selected as the backbone of the DeepLab V3+ model, which increased inference speed while reducing the number of parameters; the ASPP module was improved to extract features of targets at different scales and to improve the robustness of the model. The improved DeepLab V3+ model was trained and verified on a self-built orchard scene dataset. The experimental results showed that the mean pixel accuracy and mean intersection over union of the improved model reached 62.81% and 56.64%, respectively, 5.52 and 8.75 percentage points higher than before the improvement. In particular, the segmentation accuracy for fruit trees reached 95.61%, 1.31 percentage points higher than that of the original model. The segmentation time for a single image was 0.08 s, 0.09 s faster than the original model.

(3) A target variable-rate spray system based on real-time segmentation of the orchard scene was designed. According to the spatial layout of the four nozzles on one side of the spraying robot, the orchard scene segmentation result was divided into matching grids. The percentage of qualifying fruit-tree pixels in each grid cell was set as the switching duty cycle of the corresponding spray solenoid valve, so that the four nozzles on one side of the robot could spray independently and variably. Combined with the depth information from the visual sensor, only fruit-tree targets meeting the distance requirement were sprayed. Indoor and outdoor simulated-scene experiments were designed with simulated trees as targets and pedestrians and tripods as non-targets, together with field experiments between forest lanes with trees as targets and pedestrians and lamp poles as non-targets. The simulated-scene and field experiment results showed that droplet coverage on targets was high and droplet coverage on non-targets was very low, verifying the effectiveness and adaptability of the proposed target variable-rate spray method.
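The mean pixel accuracy and mean intersection over union figures reported in item (2) are standard semantic segmentation metrics computed from a per-class confusion matrix. The sketch below shows one common way to compute both; the function name and array shapes are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Compute mean pixel accuracy (mPA) and mean IoU (mIoU) from
    predicted and ground-truth label maps of the same shape, with
    integer class ids in [0, num_classes). Illustrative sketch only.
    """
    # Confusion matrix: rows = ground truth class, cols = predicted class
    cm = np.bincount(
        gt.ravel() * num_classes + pred.ravel(),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

    tp = np.diag(cm).astype(float)
    # Per-class pixel accuracy: correct pixels / ground-truth pixels of that class
    per_class_acc = tp / np.maximum(cm.sum(axis=1), 1)
    # Per-class IoU: intersection / union of prediction and ground truth
    per_class_iou = tp / np.maximum(cm.sum(axis=1) + cm.sum(axis=0) - tp, 1)
    return per_class_acc.mean(), per_class_iou.mean()
```

Averaging per-class scores (rather than pooling all pixels) keeps large background regions from masking poor performance on small classes such as tree trunks.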
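The grid-to-duty-cycle mapping described in item (3) can be sketched as follows: the segmentation result is split into horizontal bands matching the four nozzles' vertical layout, and the fraction of in-range fruit-tree pixels in each band drives the corresponding solenoid valve. The class id, depth threshold, and minimum-ratio gate below are assumed values for illustration, not the thesis parameters.

```python
import numpy as np

TREE_CLASS = 1           # assumed label id of the fruit-tree class
MAX_SPRAY_DEPTH_M = 2.0  # assumed effective spray range (meters)

def nozzle_duty_cycles(seg_map, depth_map, n_nozzles=4, min_ratio=0.1):
    """Map a segmentation result to per-nozzle solenoid-valve duty cycles.

    seg_map and depth_map are aligned per-pixel arrays. The image is
    split into n_nozzles horizontal bands matching the nozzle layout on
    one side of the robot; the fraction of fruit-tree pixels within
    spray range in each band becomes that nozzle's duty cycle, and
    bands below min_ratio are not sprayed at all.
    """
    duties = []
    for band_seg, band_depth in zip(
            np.array_split(seg_map, n_nozzles, axis=0),
            np.array_split(depth_map, n_nozzles, axis=0)):
        # Fruit-tree pixels close enough to spray effectively
        in_range_tree = (band_seg == TREE_CLASS) & (band_depth <= MAX_SPRAY_DEPTH_M)
        ratio = in_range_tree.mean()
        duties.append(float(ratio) if ratio >= min_ratio else 0.0)
    return duties
```

Gating on the depth map is what keeps distant trees, and non-target objects segmented into other classes, from triggering the valves.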