
Study On Image Correspondence Between Preoperative CT Images And Intraoperative Video For RIRS

Posted on: 2021-03-21    Degree: Master    Type: Thesis
Country: China    Candidate: Y Q Pan    Full Text: PDF
GTID: 2404330605956680    Subject: Engineering
Abstract/Summary:
Kidney stones are a common disease of the human urinary system. A 2017 study published in BJU International showed that kidney stones are common among Chinese adults, affecting about 1 in 17. Retrograde intrarenal surgery (RIRS) is a minimally invasive endoscopic procedure in which a ureteroscope enters the kidney through the natural cavities of the urinary system to remove stones. It causes few postoperative complications and has become one of the main treatments for kidney stones, especially for the removal of large calculi.

In traditional RIRS, doctors first use preoperative CT (computed tomography) or other preoperative images to check the location of the stones and the structure of the kidney, then find and remove the stones by judging the surgical position from the endoscopic video, relying on subjective perception and experience. Because the kidney contains many internal branches, it is difficult to locate stones and to guarantee that every branch has been searched during surgery. This demands rich experience from the surgeon, and some stones may still be missed. It is therefore important to locate the ureteroscope through surgical guidance technology to assist doctors during RIRS. At present, fusing preoperative CT images with intraoperative video has become a major approach to surgical guidance, and the key to such fusion is finding the correspondence between the preoperative CT images and the intraoperative video.

In endoscopic surgical guidance, common methods for matching preoperative CT images with intraoperative video include 2D-2D image matching and 3D-3D point cloud matching. The traditional 2D-2D method performs global matching over all pixel information of the image, which requires a large amount of computation and struggles to meet real-time requirements. The 3D-3D method requires that rich and reliable matched feature point pairs can be extracted from the video images; however, ureteroscopic images contain interference such as turbid liquid, air bubbles, flocs, and comminuted stone fragments, and their quality is not high enough to meet the requirements of 3D-3D matching.

Given these limitations, and aiming to match key anatomical positions between preoperative CT images and intraoperative ureteroscopy video, this thesis proposes a novel matching method based on depth maps. The main research contents are as follows:

(1) Depth-map estimation for virtual-endoscope-style images based on CT image sequences. Because a depth map reflects the spatial structure of the imaged scene, this thesis uses the depth map as a bridge between CT images and video images and studies matching through it. An RGB-D (depth) mapping dataset from virtual endoscope images to depth images is constructed from CT image sequences, and deep learning is applied to train a convolutional neural network as the depth estimation model for virtual-endoscope-style images.

(2) Depth-map estimation for ureteroscopy video images based on style transfer. Since true depth information is difficult to obtain for ureteroscopy video, the thesis first converts ureteroscopy video frames into virtual-endoscope-style images through style transfer. The depth estimation model trained on the RGB-D mapping dataset can then produce a depth estimate for each ureteroscopy video frame.

(3) Depth-map matching based on high-level semantic image features. Even after the virtual endoscope image and the ureteroscopy video frame are converted to depth maps, global matching on pixel information still requires a large amount of computation. The thesis therefore uses an autoencoder to extract high-level semantic features of the depth image, mapping the two-dimensional depth map to a one-dimensional feature vector and enabling rapid matching between the preoperative CT images and the intraoperative video images.

The proposed method was evaluated on CT and video data from RIRS. The experimental results show that the depth estimation accuracy for virtual-endoscope-style images reaches 94.1%. For style transfer, the deep-learning feature distributions of the transferred ureteroscopy images and the virtual endoscope images are similar, which enables effective depth prediction for ureteroscopy video. For depth-map-based CT-video matching, compared with the traditional method, the proposed method achieves essentially the same Top-1 accuracy while improving Top-10 accuracy by 26%; in addition, its matching speed is about 5 times faster than the traditional method.
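The rapid matching step in (3) can be illustrated with a minimal sketch: once an autoencoder has compressed each depth map to a one-dimensional feature vector, matching a video frame against the CT-derived virtual views reduces to a nearest-neighbor search (here by cosine similarity). The function name, feature dimensionality, and similarity measure below are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def topk_matches(query_feat, db_feats, k=10):
    """Rank CT-derived depth-map feature vectors by cosine similarity
    to a video depth-map feature vector; return top-k indices, best first.

    query_feat : (d,) feature of the intraoperative video depth map
    db_feats   : (n, d) features of the virtual-endoscope depth maps
    """
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                 # cosine similarity per database entry
    return np.argsort(-sims)[:k]  # indices sorted by descending similarity
```

Because each comparison is a single dot product on a short vector rather than a pixel-wise image comparison, this is the kind of operation that makes the reported ~5x speedup over global pixel matching plausible.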
Keywords/Search Tags: Ureteroscopy, Kidney stones, Image-guided procedure, CT-Video correspondence, Depth estimation, Style transfer