With the rapid development of medical imaging and computer technology, computer-aided diagnosis has attracted wide attention. Medical imaging is an important tool for diagnosing and treating patients early. As one of the two main pillars of diagnostic imaging, chest DR imaging is crucial for the diagnosis and screening of common diseases, and writing DR image descriptions efficiently and accurately is of great clinical importance for the treatment of common chest diseases. Traditional manual report writing is time-consuming, experienced radiologists are difficult to train, and the approach cannot meet the growing demand for diagnostic imaging. How to effectively assist radiologists by automatically generating accurate and efficient descriptions is therefore a pressing problem.

Existing methods for generating disease descriptions mainly consist of two stages: (1) image feature extraction and (2) disease description generation. Since doctors in real clinical settings often combine multiple DR images, a single model with a simple concatenation-based feature fusion method is not sufficient for effective feature extraction. In the diagnostic process, the patient's past medical history is also extremely important in guiding the current diagnosis. Moreover, traditional encoder-decoder models focus on the fluency of the generated text rather than on describing the abnormalities in the images, yet for disease description generation, accurately describing those abnormalities is the top priority.

To address these problems in current mainstream description generation methods, this paper improves the feature fusion of convolutional neural networks, introduces the patient's past medical history, and adjusts the structure of the traditional generation model to guide description generation more efficiently and accurately. (1) To address the insufficiency of simple concatenation-based fusion and single-model feature extraction, this paper proposes a multi-view, multi-model deep feature fusion method for image feature extraction. (2) To match the real medical diagnosis process more closely, address the over-reliance of existing report generation models on patient history, and guide the models to pay more attention to abnormalities in the images, this paper proposes a diagnostic report generation approach assisted by past medical history and disease labels.

To demonstrate the effectiveness of these methods, experiments were conducted on the MIMIC-CXR, ChestX-ray14, and NLMCXR datasets. In the disease prediction experiments, the model reached an accuracy of 84.7%, outperforming current mainstream algorithms. In the disease description generation experiments, the proposed methods were not only compared with current mainstream methods on common quantitative text generation metrics, but a qualitative analysis of labeling accuracy and of the generated disease descriptions was also performed. The proposed methods achieved leading results throughout.
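As a minimal illustration of the multi-view, multi-model fusion idea in contribution (1), the sketch below combines feature vectors extracted from several views by several backbone models into a single representation via a weighted average. The function name, the uniform weights, and the NumPy formulation are assumptions for illustration only; the paper's actual method is a deep fusion network, not a fixed average.

```python
import numpy as np

def fuse_multiview_features(view_feats, weights=None):
    """Fuse per-(view, model) feature vectors into one representation.

    view_feats: dict mapping (view_name, model_name) -> 1-D feature array,
                e.g. features of frontal/lateral DR views from two CNNs.
    weights:    optional fusion weights; uniform (simple averaging) if None.
                In a learned deep-fusion module these would be trained.
    """
    stacked = np.stack(list(view_feats.values()))  # (n_sources, feat_dim)
    if weights is None:
        weights = np.full(stacked.shape[0], 1.0 / stacked.shape[0])
    return weights @ stacked  # weighted sum over sources -> (feat_dim,)

# Example: two views x two hypothetical backbones, 512-dim features each.
rng = np.random.default_rng(0)
feats = {(v, m): rng.standard_normal(512)
         for v in ("frontal", "lateral")
         for m in ("resnet50", "densenet121")}
fused = fuse_multiview_features(feats)
```

With uniform weights this reduces to simple averaging; a deep fusion method would instead learn the combination jointly with the backbone networks.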
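One simple way to realize the history- and label-assisted generation described in contribution (2) is to prepend the patient's past medical history and the predicted disease labels as a conditioning prefix to the report decoder's input. The helper and token format below are hypothetical, shown only to make the conditioning scheme concrete.

```python
def build_decoder_prefix(history: str, disease_labels: list, sep: str = "<sep>") -> str:
    """Build a conditioning prefix for a report-generation decoder.

    history:        free-text past medical history (may be empty).
    disease_labels: disease labels predicted from the fused image features.
    The decoder generates the report conditioned on this prefix, which
    nudges it toward describing the labeled abnormalities rather than
    only producing fluent but generic text.
    """
    label_tokens = " ".join(f"<{label}>" for label in disease_labels)
    parts = [p for p in (history.strip(), label_tokens) if p]
    return f" {sep} ".join(parts) + f" {sep}"

prefix = build_decoder_prefix(
    "history of chronic obstructive pulmonary disease",
    ["Emphysema", "Pneumonia"],
)
```

Because the label tokens come from the image-based disease predictor rather than from the history alone, the prefix keeps the model from over-relying on the history while still exploiting it.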