Breast cancer, the most common malignant tumor worldwide, poses a serious threat to women's health. Numerous studies have shown that screening based on medical images increases the rate of early diagnosis and reduces breast cancer mortality. Computer-aided diagnosis (CAD) is a common approach to the early diagnosis of breast cancer and mainly involves two steps: lesion segmentation and lesion classification. With the rapid development of deep learning, CAD systems based on neural networks have achieved strong performance in early breast cancer diagnosis and have greatly improved diagnostic efficiency. However, in the lesion segmentation task, most networks treat feature reuse as the dominant factor for improving performance and neglect focused feature extraction, and their limited rotation robustness cannot cope with the arbitrary orientation of lesions. In the lesion classification task, deep neural networks tend to overfit because of the limited number of samples. To address these problems, this paper starts from lesion segmentation and lesion classification and builds a two-stage deep-learning method for the early diagnosis of breast cancer. The main contributions of this paper are as follows.

1) A Spatial Enhanced Rotation Aware Network is proposed for lesion segmentation. The network mainly contains two components: a residual spatial attention encoder for efficient feature extraction and a multi-stream rotation-aware decoder for improving rotation robustness. Spatial attention is used to strengthen the feature extraction process, and asymmetric convolution is used to enhance the rotation robustness of the network. To optimize the model more effectively, a regularization term with internal and external constraints is also proposed to assist training.

2) A lesion classification model based on transferred multi-scale features is proposed. The model consists of three main modules: a multi-channel image enhancement module, a transfer-learning-based multi-scale feature extraction module, and a multi-scale feature selection module. Transfer learning compensates for the shortage of training data, and multi-scale features are extracted for lesion classification.

Finally, the proposed methods are evaluated on clinically relevant data sets to demonstrate their effectiveness and advantages.
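To make the segmentation ideas above more concrete, the following is a minimal PyTorch sketch of the two building blocks the abstract names: a residual block augmented with spatial attention, and a decoder branch that pairs a square convolution with asymmetric (1x3 / 3x1) convolutions to improve robustness to rotated lesions. The layer sizes, module names, and block layout here are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch only: spatial-attention residual block and an
# asymmetric-convolution decoder branch, inspired by the components
# described in the abstract (not the authors' exact network).
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights each spatial position using pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)
        max_pool, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn


class ResidualSpatialAttentionBlock(nn.Module):
    """Residual encoder block with spatial attention on the residual branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attention = SpatialAttention()

    def forward(self, x):
        return torch.relu(x + self.attention(self.body(x)))


class AsymmetricConvBranch(nn.Module):
    """Decoder branch combining 3x3, 1x3 and 3x1 convolutions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.square = nn.Conv2d(in_ch, out_ch, (3, 3), padding=(1, 1))
        self.horizontal = nn.Conv2d(in_ch, out_ch, (1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0))

    def forward(self, x):
        return torch.relu(self.square(x) + self.horizontal(x) + self.vertical(x))


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)                 # dummy feature map
    encoder_block = ResidualSpatialAttentionBlock(32)
    decoder_branch = AsymmetricConvBranch(32, 16)
    print(decoder_branch(encoder_block(x)).shape)  # torch.Size([1, 16, 64, 64])
```

In a full multi-stream decoder, several such branches would typically process the feature map in parallel and be fused before the segmentation head; that fusion step is omitted here for brevity.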
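Similarly, the classification model built on transferred multi-scale features can be sketched as follows: an ImageNet-pretrained backbone (transfer learning) provides features at several depths, which are pooled, concatenated, and passed through a small learned gate acting as a feature-selection step before classification. The ResNet-50 backbone, the chosen stages, and the gating design are assumptions for illustration, not the paper's exact model.

```python
# Illustrative sketch only: transfer learning + multi-scale feature
# extraction + a simple feature-selection gate, assuming a ResNet-50
# backbone (not the authors' exact model).
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights


class TransferMultiScaleClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)  # pretrained weights
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 512 + 1024 + 2048            # channels of layer2/3/4 outputs
        # Learned gate that softly selects among the multi-scale features.
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        x = self.layer1(self.stem(x))
        f2 = self.layer2(x)                     # mid-level features
        f3 = self.layer3(f2)                    # higher-level features
        f4 = self.layer4(f3)                    # deepest features
        feats = torch.cat([self.pool(f).flatten(1) for f in (f2, f3, f4)], dim=1)
        return self.classifier(feats * self.gate(feats))


if __name__ == "__main__":
    model = TransferMultiScaleClassifier(num_classes=2)
    logits = model(torch.randn(1, 3, 224, 224))  # e.g. an enhanced lesion patch
    print(logits.shape)                          # torch.Size([1, 2])
```

The multi-channel image enhancement module mentioned in the abstract would sit in front of this classifier as a preprocessing step; its specific enhancement operations are not detailed here, so it is left out of the sketch.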