
Self-supervised Learning Based On Medical Anatomy-Oriented Image Data

Posted on: 2022-11-25 | Degree: Master | Type: Thesis
Country: China | Candidate: M M Zhu | Full Text: PDF
GTID: 2480306764466764 | Subject: Computer Software and Application of Computer
Abstract/Summary:
With the emergence of large-scale annotated datasets, deep learning methods have achieved remarkable results in computer vision. However, collecting datasets and manually labeling data incur substantial labor costs. To avoid spending enormous resources on large-scale annotation, unsupervised learning, self-supervised learning, and transfer learning have received extensive attention. In the current context of big data, unlabeled data is cheap and easy to obtain compared with expensive labeled data. However, lacking supervision signals, unsupervised learning usually struggles to ensure discriminative features. In recent years, new paradigms of self-supervised learning, which automatically generate supervision signals from certain attributes of the data to guide feature learning, have gradually attracted attention. Compared with classic machine learning methods, self-supervised learning can exploit as much of the abundant available data as possible while avoiding, or greatly reducing, the workload of manual labeling.

Self-supervised learning has emerged as a powerful tool for pretraining deep networks on large amounts of unlabeled data prior to fine-tuning for a target task with limited annotations. The distance between the pretraining pretext task and the target downstream task is crucial. To make use of the unique properties of medical image data, various pretext tasks have been proposed. However, they rarely pay attention to data with anatomy-oriented imaging planes, e.g., standard cardiac MRI views. As such imaging planes are defined with respect to the anatomy of the imaged organ, pretext tasks that effectively utilize this information are expected to be more relevant to the downstream tasks. In this thesis, we propose a new framework for self-supervised learning that overcomes the limitations of the model and data domain when designing and comparing different tasks: it decouples the structure of the self-supervised model from the fine-tuning model of the final target task, and constructs universal self-supervised tasks based on medical image data with anatomy-oriented imaging planes.

Specifically, we propose two generic pretext tasks exclusively for this group of medical image data, based on their spatial relationships. The first is to learn the relative orientation between the imaging planes, implemented as regressing heatmaps defined by the intersecting lines between the imaging planes. The second pretext task is complementary to the first and exploits the spatial relationship among parallel imaging planes to regress the relative slice locations within a stack. Both pretext tasks are conceptually straightforward and easy to implement, and they can be combined in a multitask setting for better representation learning. Thorough experiments on two different anatomical structures (heart and knee) and representative downstream tasks (semantic segmentation and classification) demonstrate that the proposed pretext tasks are effective in pretraining deep networks for remarkably boosted performance on the target tasks, superior to other recently proposed competitors.
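Both pretext targets can be derived from plane geometry alone. The following is a minimal NumPy sketch, assuming each imaging plane is given by a point and a unit normal in scanner coordinates (as recoverable from imaging metadata); the function names and the Gaussian form of the heatmap are illustrative assumptions, not the thesis's exact implementation:

```python
import numpy as np

def intersection_line(p1, n1, p2, n2):
    """Intersection line of two non-parallel planes, each given as
    (point, unit normal). Returns (point_on_line, unit_direction)."""
    d = np.cross(n1, n2)  # line direction lies in both planes
    # Solve for one point x0 satisfying both plane equations,
    # pinning the free coordinate along d to zero:
    A = np.stack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    x0 = np.linalg.solve(A, b)
    return x0, d / np.linalg.norm(d)

def line_heatmap(grid_points, x0, d, sigma=2.0):
    """Gaussian heatmap over a pixel grid: each pixel's value decays
    with its 3-D distance to the intersection line (regression target
    for the first pretext task). grid_points has shape (H, W, 3)."""
    v = grid_points - x0
    proj = np.tensordot(v, d, axes=([-1], [0]))[..., None] * d
    dist = np.linalg.norm(v - proj, axis=-1)
    return np.exp(-dist ** 2 / (2 * sigma ** 2))

def relative_slice_location(i, n_slices):
    """Normalized location in [0, 1] of the i-th of n_slices parallel
    slices (regression target for the second pretext task)."""
    return i / (n_slices - 1)
```

Both targets come for free from the acquisition geometry, which is what makes these pretext tasks "self-supervised": no manual annotation is needed to produce them.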
Keywords/Search Tags: Anatomy-oriented imaging plane, medical image analysis, self-supervised learning