In 2020, female breast cancer surpassed lung cancer as the leading cause of cancer incidence worldwide. Accurate diagnosis and timely treatment of breast cancer at an early stage can greatly improve patient survival, so the correct diagnosis of early breast cancer has received widespread attention. Ultrasound imaging has become the first choice for clinicians to examine benign and malignant breast tissue in early screening because it is easy to use, compact, non-invasive, and low cost. At the same time, contrast-enhanced ultrasound (CEUS) has come into increasingly wide use, and in most clinical practice the combination of B-mode ultrasound and CEUS is the main diagnostic approach. This thesis therefore attempts to build network structures suited to these bimodal data to improve the accuracy of breast cancer classification.

First, this thesis constructs a dual-mode ultrasound dataset. It analyzes the properties of the different data in detail and then proposes a dataset construction method according to the characteristics of B-mode ultrasound video and contrast-enhanced ultrasound video. In particular, a key-frame extraction method based on brightness is proposed for CEUS video. Its main principle is to use the brightness of the ultrasound contrast as the selection metric for key frames, while also taking the temporal spacing between different frames into account in the design of the extraction algorithm.

For B-mode ultrasound images, this thesis proposes a pathological-feature self-calibration fusion network, whose core is a pathological-feature self-calibration module. Exploiting the self-learning ability of convolution-based deep learning, the module is designed so that during training the network automatically matches pathological information at different levels with the features of different convolutional layers, and finally uses the pathological features to guide and suppress the convolutional learning process, improving classification accuracy.

A temporal regression network is proposed to extract pathological features from temporal information. Compared with conventional natural video, most of the changes in contrast-enhanced ultrasound video are enhancement changes in pixel intensity rather than the motion of natural objects. To address this characteristic, a temporal regression mechanism and a temporal disturbance mechanism are designed to improve the network's ability to extract temporal features.

Finally, a dual-branch cooperative suppression network is proposed. B-mode ultrasound and contrast-enhanced ultrasound display the same subject in different modalities, so a suitable fusion method yields higher accuracy. To this end, a bilinear cooperative mechanism is designed that fully integrates information in the spatial dimension with information in the temporal dimension, ensuring that both spatial and temporal information are fully exploited.
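The brightness-based key-frame idea can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the frame score (mean brightness), the number of key frames, and the minimum temporal spacing `min_gap` (a stand-in for the time-length constraint between frames) are all illustrative assumptions.

```python
import numpy as np

def extract_key_frames(frames, num_keys=8, min_gap=5):
    """Select key frames from a CEUS clip by mean brightness.

    frames: array of shape (T, H, W), grayscale CEUS frames.
    num_keys: number of key frames to keep (illustrative choice).
    min_gap: minimum temporal spacing between chosen frames,
             a stand-in for the thesis's time-length constraint.
    """
    # score each frame by its mean pixel brightness
    brightness = frames.reshape(len(frames), -1).mean(axis=1)
    order = np.argsort(brightness)[::-1]  # brightest frames first
    chosen = []
    for idx in order:
        # enforce temporal spacing so key frames are not clustered
        if all(abs(int(idx) - c) >= min_gap for c in chosen):
            chosen.append(int(idx))
        if len(chosen) == num_keys:
            break
    return sorted(chosen)

# toy clip: brightness ramps up then plateaus, mimicking a wash-in phase
T, H, W = 40, 16, 16
ramp = np.clip(np.arange(T) / 20.0, 0.0, 1.0)
clip = ramp[:, None, None] * np.ones((T, H, W))
keys = extract_key_frames(clip, num_keys=4, min_gap=5)
print(keys)  # evenly spaced frames from the bright plateau
```

Without the spacing constraint, the brightest frames would all come from the plateau's tail; the `min_gap` term is what spreads the selection across the enhancement curve.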
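One common way to realize a bilinear fusion of two modality branches, which the bilinear cooperative mechanism could resemble, is an outer product of the two feature vectors followed by normalization. The feature dimensions, the signed-square-root step, and the L2 normalization below are standard bilinear-pooling conventions assumed for illustration, not details taken from the thesis.

```python
import numpy as np

def bilinear_fuse(spatial_feat, temporal_feat):
    """Fuse a spatial (B-mode) and a temporal (CEUS) feature vector.

    The outer product captures every pairwise interaction between
    the two modalities; the result is flattened, signed-sqrt scaled,
    and L2-normalized, a common bilinear-pooling recipe.
    """
    outer = np.outer(spatial_feat, temporal_feat)    # (Ds, Dt) interactions
    fused = outer.ravel()                            # flatten to (Ds * Dt,)
    fused = np.sign(fused) * np.sqrt(np.abs(fused))  # signed square root
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

rng = np.random.default_rng(0)
s = rng.standard_normal(64)   # stand-in B-mode branch output
t = rng.standard_normal(32)   # stand-in CEUS branch output
f = bilinear_fuse(s, t)
print(f.shape)                # (2048,)
```

The appeal of the bilinear form is that no cross-modality interaction is discarded before the classifier, at the cost of a feature dimension that grows as the product of the two branch widths.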