China ranks first in global apple production, but apples are still picked mainly by hand. Because the volume of ripe fruit during the picking season is very large, manual picking is both labor-intensive and costly, so using apple-picking robots for this task is crucial for reducing labor costs and increasing revenue in the apple industry. The key to achieving this goal is detecting and locating apples quickly and accurately. To address the limited accuracy of current apple detection algorithms in complex environments and the difficulty of deploying them in real time on embedded devices, this study focuses on improving detection precision and on lightweight model deployment. We propose an apple target detection method capable of real-time detection on the Jetson Nano embedded development board. The main work is as follows:

(1) Research on apple detection algorithms. We analyzed the problems facing China's apple industry and the application of apple-picking robots in complex environments, and compared traditional apple detection methods with deep learning detection methods. To meet the visual detection needs of picking robots based on embedded development boards, we chose the one-stage YOLOX-Tiny network, which offers high detection accuracy and speed, as the baseline for improvement.

(2) High-precision apple detection network design. We described the YOLOX-Tiny network structure and its anchor-free design, added a CBAM attention module that combines channel and spatial attention mechanisms, and added an adaptive feature fusion module to improve detection accuracy. We also replaced the box regression loss with the CIoU loss to enable more precise localization of apple-picking targets (hedged sketches of the CIoU loss and the CBAM module are given after this summary).

(3) Lightweight apple detection network design and validation. To enable real-time detection on embedded devices, which current detection algorithms cannot achieve, we developed the ShuffleNet V2-YOLOX algorithm by replacing the backbone with the lightweight ShuffleNet V2 network and removing one feature extraction layer, improving both detection accuracy and speed (the channel shuffle operation at the core of ShuffleNet V2 is sketched below). We collected apple images taken in complex environments, including daytime, nighttime, bagged, occluded, and overlapping fruit, as the dataset. The trained model achieved high detection accuracy, precision, and recall, with a detection speed of 65 frames per second. Compared with the YOLOX-Tiny network, detection accuracy improved by 6.24% and detection speed improved by 10 frames per second. Compared with other strong lightweight networks such as YOLOv5-s, EfficientDet-D0, YOLOv4-Tiny, and MobileNet-YOLOv4-Lite, our network achieved better detection accuracy and speed.

(4) Experimental validation of the apple-picking robot vision detection system. We configured the environment on the Jetson Nano and verified model performance. We used TensorRT to quantize and optimize the model and to generate a serialized engine for the Jetson Nano hardware, further improving inference speed (a sketch of this export step is given below). The results show that the detection speed on the Jetson Nano reaches 26 frames per second, fully meeting the real-time and accuracy requirements of the apple-picking robot. Finally, we conducted experiments on the apple-picking robot using the Jetson Nano detection platform and an STM32 controller to verify the apple-picking robot vision system.
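For reference, the CIoU loss adopted in point (2) is commonly written as follows; this restates the standard definition from the CIoU literature rather than any notation specific to our network:

\[
\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^{2}\!\left(b, b^{gt}\right)}{c^{2}} + \alpha v, \qquad
v = \frac{4}{\pi^{2}}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2}, \qquad
\alpha = \frac{v}{(1 - \mathrm{IoU}) + v}
\]

where b and b^{gt} are the centers of the predicted and ground-truth boxes, ρ(·) is the Euclidean distance between them, c is the diagonal length of the smallest box enclosing both, and (w, h), (w^{gt}, h^{gt}) are the predicted and ground-truth box widths and heights. Compared with plain IoU loss, the extra terms penalize center offset and aspect-ratio mismatch, which is why it tends to localize boxes more precisely.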
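The CBAM module from point (2) can be summarized with a minimal PyTorch-style sketch. This is an illustrative sketch only: the reduction ratio, the 7x7 spatial kernel, and the assumption that the detector is implemented in PyTorch are ours, not details confirmed by the experiments above.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            # Shared MLP applied to both average-pooled and max-pooled descriptors.
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
            )

        def forward(self, x):
            avg = self.mlp(nn.functional.adaptive_avg_pool2d(x, 1))
            mx = self.mlp(nn.functional.adaptive_max_pool2d(x, 1))
            return torch.sigmoid(avg + mx)

    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            # 2-channel input: per-pixel mean and max over the channel dimension.
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):
            avg = torch.mean(x, dim=1, keepdim=True)
            mx, _ = torch.max(x, dim=1, keepdim=True)
            return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

    class CBAM(nn.Module):
        def __init__(self, channels, reduction=16, kernel_size=7):
            super().__init__()
            self.ca = ChannelAttention(channels, reduction)
            self.sa = SpatialAttention(kernel_size)

        def forward(self, x):
            x = x * self.ca(x)  # reweight channels first
            x = x * self.sa(x)  # then reweight spatial positions
            return x

In a YOLOX-style detector such a block is typically inserted after selected feature maps in the backbone or neck; exactly where it is placed is a design choice of the specific network.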
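For point (3), the channel shuffle operation at the core of the ShuffleNet V2 backbone can be sketched as follows. This is the generic operation from the ShuffleNet V2 design, not our exact backbone code.

    import torch

    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # Interleave channels across groups so information can flow between
        # the branches of the preceding grouped/split convolution.
        b, c, h, w = x.shape
        assert c % groups == 0, "channel count must be divisible by groups"
        x = x.view(b, groups, c // groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

This cheap reordering is what lets ShuffleNet V2 keep accuracy while using inexpensive split-and-concatenate building blocks, which is the property exploited when swapping it in as a lightweight backbone.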
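For point (4), a minimal sketch of the engine generation step, assuming the trained detector is exported to ONNX and then serialized with the stock trtexec tool shipped with TensorRT on the Jetson Nano; the file names, 416x416 input resolution, and FP16 precision below are illustrative assumptions rather than the exact settings used in our experiments.

    import subprocess
    import torch

    def export_and_build_engine(model: torch.nn.Module,
                                onnx_path: str = "shufflenetv2_yolox.onnx",
                                engine_path: str = "shufflenetv2_yolox.engine") -> None:
        # Export the trained detector to ONNX (input size is an assumption).
        model.eval()
        dummy = torch.randn(1, 3, 416, 416)
        torch.onnx.export(model, dummy, onnx_path,
                          input_names=["images"], output_names=["outputs"],
                          opset_version=11)

        # Build and serialize a TensorRT engine on the target device.
        # FP16 is shown as one common reduced-precision choice on Jetson Nano.
        subprocess.run(["trtexec",
                        f"--onnx={onnx_path}",
                        f"--saveEngine={engine_path}",
                        "--fp16"],
                       check=True)

The serialized engine file produced this way can then be loaded by the on-board inference code at runtime, avoiding the cost of rebuilding the optimized network on every start-up.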