Objective: Existing medical images contain a large amount of disease-related information, such as disease types and lesion areas, which can serve as a reference for subsequent clinical diagnosis. Medical image retrieval technology can retrieve, in real time, the images in a database most similar to the current image, providing a reference for inexperienced doctors. However, some images are similar in appearance yet differ in subtle features and should, in essence, be assigned to different categories. Existing medical image retrieval methods cannot extract sufficiently discriminative features from such images, resulting in inaccurate retrieval. It is therefore necessary to propose a method that extracts highly discriminative features to distinguish medical images with high appearance similarity, improving retrieval accuracy and providing doctors with a more reliable basis for auxiliary diagnosis.

Methods: This thesis proposes a medical image retrieval method based on deep metric learning, in which both the feature extraction network and the metric loss function are designed and improved. We design a novel Channel Grouping Multi-scale Branching Block (CGMB Block) that applies concatenation and convolution multiple times within its structure and incorporates a spatial attention mechanism to capture subtle feature information in medical images with high appearance similarity; combined with a ResNet-50 backbone, it forms the Attention-based Channel Grouping Multi-scale Branching Model (ACGMB Model). To let the positive and negative weight factors in Multi-Similarity Loss (MS Loss) adapt automatically, Adaptive Multi-Similarity Loss (AMS Loss) is proposed; to complement the loss function's pairwise sample constraints, Adaptive Multi-Constraint Loss (AMC Loss) is proposed to add constraints between samples and sample
clusters.

Results: The experimental results show that the proposed ACGMB Model achieves the highest Recall@1 values of 90.10%, 81.89%, and 98.19% on the ISIC 2019 Challenge, MURA, and ChestX-ray14 datasets respectively, with Normalized Mutual Information (NMI) values of 59.78, 76.13, and 95.20, demonstrating strong feature extraction ability. Compared with other methods, the ACGMB Model achieves better retrieval performance with lower feature dimensions in most cases, which indicates a stronger learning ability and that the learned information is key information. Compared with other feature extraction networks, the ACGMB Model achieves nearly equivalent retrieval results with fewer parameters and higher efficiency. With AMS Loss as its loss function, the ACGMB Model reaches Recall@1 values of 90.17%, 81.86%, and 98.21% on the three datasets, indicating that AMS Loss guides the model to learn more discriminative feature embeddings. With the proposed AMC Loss as the loss function, the Recall@1 values on the three datasets further improve to 90.43%, 83.33%, and 98.25%, showing that AMC Loss has better feature discrimination ability. Visualization of the retrieval results shows that the combination of the ACGMB Model and AMC Loss retrieves more correct results.

Conclusion: This study proposes a medical image retrieval method based on deep metric learning that combines the ACGMB Model and AMC Loss. The ACGMB Model captures the distinguishing subtle features in similar medical images, and the stronger feature discrimination ability of AMC Loss further improves retrieval performance. Compared with other methods, the proposed method delivers a large improvement in retrieval performance, achieves more accurate differentiated retrieval of medical images with high appearance similarity, and has the potential to provide diagnostic
reference.
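Recall@1, the headline metric reported above, measures the fraction of query images whose single nearest neighbour in the embedding space (excluding the query itself) shares the query's class label. The following is a minimal pure-Python sketch for small toy inputs; the brute-force nearest-neighbour search and the Euclidean distance are illustrative assumptions, not the thesis's actual evaluation pipeline:

```python
def recall_at_1(embeddings, labels):
    """Fraction of queries whose nearest neighbour (squared Euclidean
    distance, excluding the query itself) has the same label."""
    hits = 0
    for i, query in enumerate(embeddings):
        best_j, best_d = -1, float("inf")
        for j, candidate in enumerate(embeddings):
            if j == i:  # never match a query against itself
                continue
            d = sum((a - b) ** 2 for a, b in zip(query, candidate))
            if d < best_d:
                best_d, best_j = d, j
        hits += labels[best_j] == labels[i]
    return hits / len(embeddings)


# Toy example: two well-separated clusters, so every query's
# nearest neighbour belongs to the same class.
emb = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
lab = [0, 0, 1, 1]
print(recall_at_1(emb, lab))  # → 1.0
```

In practice the embeddings would come from the trained feature extraction network, and the search would use a vectorised or approximate nearest-neighbour library rather than a Python loop.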