Apple detection is a crucial link in the vision system of the apple-picking robot. Complex environmental factors such as varying illumination, occlusion by branches and leaves, and overlapping fruits make it difficult for the robot to locate targets quickly and accurately, which affects the execution of the picking task. Effective detection of apples in complex environments is therefore of great significance for improving the picking efficiency of apple-picking robots and promoting the development of the apple industry. With the application of deep learning, especially deep convolutional neural networks, to image processing, accurate apple detection in complex environments has become possible. Shallow machine learning methods for target detection can extract only low- or mid-level features such as texture and edges, whereas deep learning, and deep convolutional neural networks in particular, can directly extract the essential features of the target. Therefore, this paper studies deep learning-based apple detection and localization technology in complex environments. The main results are as follows:

(1) To address the shortcomings of traditional image segmentation methods and of deep learning-based detectors such as Faster R-CNN and YOLOv3, an improved Mask R-CNN method for apple detection in complex environments is proposed. The method adds a boundary-weighted loss function to the original Mask R-CNN network, which solves the problem of inaccurate segmentation of apple boundaries; it also pre-classifies the appearance of the apples and obtains the coordinates of the apple center point. Comparative experiments with other apple detection algorithms in complex environments show that the improved Mask R-CNN achieves a higher F1 score and better segmentation than the other algorithms and can accurately detect apples in complex environments.

(2) The binocular camera is calibrated to obtain the intrinsic parameter matrices and distortion coefficients of the left and right cameras, together with the translation and rotation of the right camera relative to the left camera. Center-point matching is then performed between the two views, and the three-dimensional coordinates of the apple center point are recovered using the principle of binocular stereo vision.

(3) A batch normalization layer is added to the VGG-16 network, and the structure is optimized with global average pooling and a joint loss function, yielding an improved VGGNet model for classifying the appearance of apples after picking. Training on an augmented dataset completes the classification of normal-looking, diseased, and rotten apples. Experimental results show that the classification accuracy is higher than that of the unimproved VGG-16, AlexNet, and GoogLeNet algorithms, so the model can accurately classify the appearance of apples after picking.

(4) Based on PyCharm, PyQt, and Qt Designer, an apple detection software module for apple-picking robots in complex environments was designed, realizing apple detection, localization, and appearance classification in complex environments.
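The stereo localization step in (2) reduces, for a calibrated and rectified camera pair, to triangulation from the disparity of the matched center points. The following is a minimal sketch of that geometry only; the focal length, baseline, and principal point used in the docstring and example are illustrative assumptions, not the thesis's actual calibration results.

```python
def stereo_to_3d(xl, yl, xr, f, baseline, cx, cy):
    """Triangulate a 3D point from a rectified stereo pair.

    xl, yl   -- pixel coordinates of the apple center in the left image
    xr       -- x coordinate of the matched center in the right image
               (after rectification, y is the same in both images)
    f        -- focal length in pixels (hypothetical value in the example)
    baseline -- distance between the camera centers; sets the output unit
    cx, cy   -- principal point of the left camera

    Returns (X, Y, Z) in the left camera coordinate frame.
    """
    d = xl - xr                 # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity; the match is invalid")
    Z = f * baseline / d        # depth from similar triangles
    X = (xl - cx) * Z / f       # back-project pixel offset to camera frame
    Y = (yl - cy) * Z / f
    return X, Y, Z


# Illustrative call with assumed calibration: f = 800 px, 6 cm baseline,
# 640x480 image with the principal point at its center.
X, Y, Z = stereo_to_3d(420, 240, 400, 800.0, 0.06, 320, 240)
```

With these assumed numbers the disparity is 20 px, giving a depth of about 2.4 m; a larger disparity would place the apple closer to the camera. In practice the intrinsics, distortion coefficients, and the right-to-left rotation and translation come from the stereo calibration described in (2), and the images are undistorted and rectified before matching.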