Breast cancer is a cancer with extremely high mortality, and its incidence in China has continued to rise in recent years. Clinical data suggest that early detection and early treatment significantly improve patient survival. Breast ultrasound imaging has gradually become a mainstream modality for breast cancer diagnosis because it is inexpensive, safe, and non-invasive. Computer-aided diagnosis systems can assist doctors in image reading and diagnosis, reduce the misdiagnosis rate, and improve the objectivity and accuracy of detection.

Deep-learning-based breast ultrasound image segmentation has flourished in recent years. Although its accuracy far exceeds that of traditional methods, several problems remain. Firstly, for single-target tumor segmentation, ultrasound images have low contrast and high noise and often contain hypoechoic regions that resemble tumors, so the model is easily misled and the segmentation result is shifted or misaligned; in addition, segmentation edges are often not fine enough and lack spatial consistency. Secondly, for multi-target tumor segmentation, when multiple tumors are close together the model has difficulty separating their boundaries, which often causes the predicted regions to adhere to each other; when the tumors are far apart, most models segment only some of the targets, lacking a global perspective and being prone to missed detections. Finally, the performance of mainstream models on multi-scale tumor detection is unsatisfactory: small tumors suffer a higher missed-detection rate, while large tumors are difficult to segment completely.

To address these problems, this paper proposes a multi-target, multi-scale tumor segmentation method based on prior medical knowledge for the automatic analysis of tumors in breast ultrasound images. Building on Mask R-CNN, the method improves the input module, the localization module, and the output module, and can automatically locate and segment tumor regions in breast ultrasound images. The main research work can be summarized as follows:

(1) The FPN is improved to raise the overall performance of the model. Firstly, the anchor sizes in the RPN are modified to match the breast ultrasound image dataset, and wavelet transform and elastic deformation are used for image enhancement, forming a well-performing baseline model. Then, considering the relationship between the pyramid levels and the feature maps output by the FPN, feature fusion equalization is applied to the five-level output: without changing the structure or size of the final output, an intuitive summation-and-averaging scheme replaces the combination of adjacent levels only. This effectively combines high-level semantic information with low-level positional information, so that each level carries relatively balanced features and the average accuracy of the model is improved.
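The summation-and-averaging equalization in (1) can be illustrated with a short sketch. The PyTorch code below is a minimal illustration rather than the thesis implementation: it assumes five FPN outputs with a shared channel width, a hypothetical helper name equalize_fpn_levels, and a chosen reference level for the common resolution; whether the balanced map replaces each level or is added back residually is left as a flag because the abstract does not specify it.

```python
import torch
import torch.nn.functional as F

def equalize_fpn_levels(feats, ref_index=2, residual=True):
    """Feature fusion equalization over FPN outputs (illustrative sketch).

    feats: list of five tensors [P2..P6], each of shape (N, C, Hi, Wi)
    with the same channel count C. Every level is resized to the spatial
    size of feats[ref_index], the levels are summed and averaged, and the
    balanced map is resized back to every level, so the output keeps the
    original structure and sizes.
    """
    ref_size = feats[ref_index].shape[-2:]
    # Bring every level to the reference resolution and average them.
    resized = [F.interpolate(p, size=ref_size, mode="nearest") for p in feats]
    balanced = torch.stack(resized, dim=0).mean(dim=0)
    out = []
    for p in feats:
        b = F.interpolate(balanced, size=p.shape[-2:], mode="nearest")
        # Either add the balanced map back residually or use it directly.
        out.append(p + b if residual else b)
    return out
```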
(2) A multi-scale adaptive breast ultrasound tumor recognition method is proposed. Firstly, the mammary gland layer is located by phase congruency, and a probability map of predicted tumor locations is generated from background saliency; this map is then fused with the output of the baseline model to obtain a segmentation result constrained by prior medical knowledge. Secondly, the convolutional layer in the RPN is replaced with atrous (dilated) convolution, and five convolutions with different dilation rates are stacked to form a hierarchical pyramid RPN structure, which enlarges the receptive field while keeping the number of parameters unchanged. Training the five parallel branches separately allows targets of different sizes to be perceived better and effectively improves the model's performance on multi-scale tumor segmentation, especially the detection accuracy for small tumors. Finally, the model is connected to a fully connected conditional random field for cooperative training, which refines the segmentation results and improves the expression of boundary detail to a certain extent.
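To make the RPN modification concrete, the sketch below shows one way such a hierarchical pyramid RPN head could look in PyTorch. It is an assumption-laden illustration, not the thesis code: the module name AtrousPyramidRPNHead, the dilation rates (1-5), the anchor count, and the per-branch prediction layers are hypothetical choices; only the idea of replacing the single 3x3 RPN convolution with five parallel atrous convolutions follows the abstract.

```python
import torch
import torch.nn as nn

class AtrousPyramidRPNHead(nn.Module):
    """Sketch of a hierarchical pyramid RPN head: five parallel 3x3 atrous
    convolutions with different dilation rates replace the single 3x3
    convolution of a standard RPN head. Setting padding equal to the
    dilation rate keeps the spatial size, and each branch has the same
    parameter count as an ordinary 3x3 convolution while its receptive
    field grows with the dilation rate.
    """

    def __init__(self, in_channels=256, num_anchors=3, dilations=(1, 2, 3, 4, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, in_channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        )
        # Separate objectness / box-regression heads per branch, so the five
        # ways can be trained in parallel yet separately, as described.
        self.cls_logits = nn.ModuleList(
            nn.Conv2d(in_channels, num_anchors, kernel_size=1) for _ in dilations
        )
        self.bbox_pred = nn.ModuleList(
            nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1) for _ in dilations
        )

    def forward(self, feature):
        outputs = []
        for conv, cls, box in zip(self.branches, self.cls_logits, self.bbox_pred):
            t = torch.relu(conv(feature))
            outputs.append((cls(t), box(t)))
        return outputs  # one (objectness, box deltas) pair per dilation rate
```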