In recent years, with the rapid development of artificial intelligence technology, deep convolutional neural networks (DCNNs) have been widely applied in image processing, robotics, and other related fields. However, DCNNs still suffer from feature redundancy, declining learning efficiency, and the difficulty of manual hyperparameter tuning. To address these problems, this dissertation investigates DCNNs from three aspects — network feature extraction, training efficiency, and automatic hyperparameter adjustment — to improve the prediction accuracy, robustness, and generalization of deep convolutional networks, and applies the proposed algorithms to image classification, segmentation, object detection, and object recognition in real-world scenarios. The research work of this dissertation comprises the following parts:

(1) To address feature redundancy in DCNNs, an improved DCNN algorithm based on residual Dropout convolution is proposed, enabling the joint use of Dropout, convolutional layers, and Batch Normalization layers. The algorithm contains residual Dropout paths and convolutional paths that are switched at random during training, so that the network depth varies randomly and the diversity of extracted features is enhanced. The residual path randomly selects input features and compresses or amplifies them, increasing the diversity of the feature inputs to downstream parameter layers; only the convolutional paths are used at prediction time, ensuring stable predictions. Experimental results on the image classification datasets CIFAR-100, CIFAR-10, and Caltech-256 show that the proposed algorithm accelerates the convergence of the loss function and effectively improves the prediction accuracy of the network.

(2) To address the decline in training efficiency caused by saturated filters, which arises because the activation and saturation regions of the activation function are fixed, an improved
DCNN algorithm based on a random activation function is proposed. By combining different primary and secondary activation functions across the active and saturated regions, filters in the saturated region can once again participate in weight updates, increasing the features they generate; by using two linear functions as sub-activation functions, the stochastic-depth idea is extended so that it is no longer limited to residual structures. Experimental results on the image classification datasets CIFAR-100 and Caltech-256 show that the proposed algorithm effectively improves training efficiency, alleviates overfitting, and achieves better generalization and accuracy than the stochastic-depth versions of ResNet-18, ResNet-34, and ResNet-101.

(3) To address the fact that existing loss-function penalty terms ignore the differences between filters in a DCNN, a loss-function regularization algorithm based on stochastic optimization is proposed. The algorithm constructs a penalty term from the filter distribution, maps each filter into parameter space, and uses gradient descent to enlarge the differences between filters, thereby expanding the solution space and the effective capacity of the network, increasing the number of active filters, and preventing weight updates from stalling due to vanishing gradients during training. Experimental results show that the proposed algorithm effectively improves the accuracy and anti-interference performance of the network on the image classification datasets CIFAR-100 and Caltech-256; results on the object detection dataset Pascal VOC (07+12) show that the accuracy of the pre-trained network and the harmonic mean of precision and recall are improved, demonstrating good generalization.

(4) To address the poor training stability and complex hyperparameter tuning introduced by such loss-function penalty terms, a DCNN regularization
algorithm based on ensemble learning is proposed to simplify hyperparameter tuning and improve training stability. The algorithm regards each filter as a weak learner and each convolutional layer as an ensemble learner. For each filter, its differences from multiple filters in the same layer are considered simultaneously, and the loss-function penalty term is constructed accordingly to improve regularization efficiency. At the same time, an adjustable attenuation function makes the performance improvement more stable and simplifies the hyperparameter settings. Experimental results on the image classification datasets CIFAR-100, CIFAR-10, and Caltech-256 and the object detection dataset Pascal VOC (07+12) show that adding the proposed regularization term to the loss function effectively improves network stability and simplifies hyperparameter tuning in the presence of data noise.

(5) To address the difficulty of finding the optimal learning rate in DCNN training, a dynamic random learning-rate optimization algorithm is proposed. By introducing a random distribution into the learning-rate adjustment process, the algorithm extends the learning rate from a preset value to a random draw from a distribution, without increasing the number of hyperparameters. During training, the algorithm continuously monitors the redundancy of the parameter layers, adjusts the learning-rate distribution accordingly, and further refines the learning rate in real time through dimensionality reduction and sampling of the parameter layers, so that the network can promptly escape poor gradient intervals and avoid abnormal training behavior. Experimental results on the image classification datasets CIFAR-100 and Caltech-256 and the TGS salt identification dataset show that the proposed algorithm maintains an optimal learning-rate distribution, further simplifying network training and improving the
practicality of the algorithm. Finally, the effectiveness of the proposed algorithms is verified through experiments in real-world scenarios.
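The random path switching described in part (1) can be sketched roughly as follows. This is a minimal NumPy illustration, not the dissertation's implementation: the linear stand-in for convolution, the switch probability, and the "compress and amplify" scale factor are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_path(x, w):
    # Stand-in for a convolutional layer: a simple linear map.
    return x @ w

def residual_dropout_path(x, drop_p=0.5, scale=1.5):
    # Randomly select input features, then rescale them
    # (assumed here to model "compress and amplify").
    mask = rng.random(x.shape) >= drop_p
    return x * mask * scale

def block_forward(x, w, training, switch_p=0.5):
    # During training the two paths are switched at random, so the
    # effective depth of the network varies and feature diversity grows;
    # at prediction time only the convolutional path is used, keeping
    # the output deterministic.
    if training and rng.random() < switch_p:
        return residual_dropout_path(x)
    return conv_path(x, w)

x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8))
y_test = block_forward(x, w, training=False)  # always the conv path
```

Note how the stochastic behavior is confined to `training=True`, mirroring the abstract's claim that prediction uses only the convolutional paths.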
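The random activation function of part (2) behaves roughly like a randomized leaky activation; the sketch below is an assumed reading, with identity as the primary activation and two hypothetical linear slopes as the sub-activation functions in the saturated region.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_activation(x, p=0.5, slope_a=0.1, slope_b=0.3):
    # Active region (x > 0): the primary activation (identity here).
    # Saturated region (x <= 0): randomly pick one of two linear
    # sub-activation functions, so filters stuck in saturation still
    # receive non-zero gradients and can rejoin weight updates.
    sub_slope = np.where(rng.random(x.shape) < p, slope_a, slope_b)
    return np.where(x > 0.0, x, sub_slope * x)

x = np.array([-2.0, -1.0, 0.5, 3.0])
y = random_activation(x)
```

Because the negative-region slope is itself random, the nonlinearity changes from step to step, which is how the abstract extends random-depth behavior beyond residual structures.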
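The filter-difference penalty of part (3) can be illustrated with a pairwise-similarity term. This is only a plausible sketch: cosine similarity as the difference measure and the weight `lam` are assumptions, not the dissertation's exact penalty.

```python
import numpy as np

def diversity_penalty(filters, lam=1e-3):
    # Flatten each filter into a point in parameter space, then penalize
    # pairwise cosine similarity: minimizing this term by gradient
    # descent pushes filters apart, enlarging the explored solution
    # space and keeping more filters active.
    f = filters.reshape(filters.shape[0], -1)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    sim = f @ f.T
    n = f.shape[0]
    off_diag = sim - np.eye(n)            # ignore self-similarity
    return lam * np.sum(off_diag ** 2) / (n * (n - 1))

rng = np.random.default_rng(2)
base = rng.standard_normal((1, 3, 3))
identical = np.repeat(base, 4, axis=0)    # fully redundant filters
diverse = rng.standard_normal((4, 3, 3))  # dissimilar filters
```

Redundant filters incur the maximal penalty, so the gradient of this term drives them apart during training.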
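The ensemble-learning view of part (4) — filters as weak learners, the layer as the ensemble, with an attenuation function stabilizing the penalty — can be sketched as below. The similarity-to-the-rest measure and the `1/(1 + decay * epoch)` attenuation schedule are hypothetical choices for illustration.

```python
import numpy as np

def ensemble_penalty(filters, epoch, lam0=1e-3, decay=0.1):
    # Each filter is a weak learner and the layer is the ensemble: a
    # filter is penalized for its similarity to all other filters in the
    # layer at once, rather than pair by pair. The attenuation function
    # lam(epoch) shrinks the penalty over time to keep training stable.
    lam = lam0 / (1.0 + decay * epoch)
    f = filters.reshape(filters.shape[0], -1)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    total = 0.0
    for i in range(f.shape[0]):
        others = np.delete(f, i, axis=0)
        # similarity of filter i to the mean of its co-learners
        total += float(f[i] @ others.mean(axis=0)) ** 2
    return lam * total / f.shape[0]

rng = np.random.default_rng(3)
identical = np.repeat(rng.standard_normal((1, 2, 2)), 4, axis=0)
orthogonal = np.eye(4).reshape(4, 2, 2)   # maximally diverse filters
```

Comparing each filter against the whole layer in one term is what the abstract credits for improved regularization efficiency, while the decaying `lam` removes the need to hand-tune the penalty strength per epoch.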
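The dynamic random learning rate of part (5) couples a redundancy monitor to a learning-rate distribution. In this sketch, the SVD-based redundancy proxy and the log-normal sampling distribution are assumptions standing in for the dissertation's dimensionality-reduction and sampling steps.

```python
import numpy as np

rng = np.random.default_rng(4)

def layer_redundancy(filters):
    # Crude redundancy proxy via dimensionality reduction (SVD): the
    # fraction of singular-value energy captured by the top component.
    # 1.0 means all filters lie on one direction (fully redundant).
    f = filters.reshape(filters.shape[0], -1)
    s = np.linalg.svd(f, compute_uv=False)
    return float(s[0] ** 2 / np.sum(s ** 2))

def sample_learning_rate(base_lr, redundancy, sigma_scale=0.5):
    # The learning rate is no longer a preset constant but a draw from a
    # distribution (log-normal here, by assumption) whose spread widens
    # with measured redundancy, letting training jump out of bad
    # gradient intervals without adding new hyperparameters.
    sigma = sigma_scale * redundancy
    return base_lr * rng.lognormal(mean=0.0, sigma=sigma)

redundant = np.repeat(rng.standard_normal((1, 3, 3)), 8, axis=0)
lr = sample_learning_rate(0.1, layer_redundancy(redundant))
```

When a layer's filters collapse toward one direction, the sampled rates spread out, which is one way to realize the "escape poor gradient intervals" behavior the abstract describes.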