
Automatic detection of patient identification and positioning errors in radiotherapy treatment using 3D setup images

Posted on: 2016-03-31
Degree: Ph.D
Type: Thesis
University: University of California, Los Angeles
Candidate: Jani, Shyam Shirish
Full Text: PDF
GTID: 2474390017477332
Subject: Biomedical engineering
Abstract/Summary:
Radiation therapy is a complex healthcare operation that uses ionizing radiation for cancer treatment. The success of modern radiotherapy depends on correct alignment of the radiation beams with the target treatment region in the patient. In the conventional paradigm of image-guided radiation therapy (IGRT), 2D or 3D setup images are taken immediately prior to treatment and are used by radiation therapy technologists to localize the patient to the position defined by the reference planning CT dataset. However, numerous reports in the literature have described errors during this step, which have led to incorrect treatments and potentially significant clinical harm to the patient. Moreover, reported errors likely underestimate the true error rate, as many errors go undetected or are simply not reported. The human factor has been shown to play a large role in these errors, where the registration of the setup and planning CT images is not interpreted or performed correctly per standard practice.

The central hypothesis of this study was that these human errors can be addressed by a workflow that algorithmically compares 3D setup and planning CT images using image similarity metrics. The proposed system, intended to work in an automated, real-time fashion immediately prior to radiotherapy delivery, has the potential to act as a robust second-check safety interlock that prevents identification or misalignment errors from reaching the patient. Because no additional equipment is required in the treatment room or for patient setup, the system adds virtually no complexity, time, or cost to the treatment process. It is applicable to clinics around the world and is particularly relevant for developing nations, where higher error rates have been reported due in part to a smaller number of trained personnel.

To simulate errors across multiple imaging platforms, we utilized both 3D-CBCT and 3D-MVCT images from our TrueBeam and TomoTherapy units, respectively. We gathered CBCT images of 83 head-and-neck (H&N), 100 pelvis, and 57 spine patients treated between 2011 and 2014, and MVCT images of 100 H&N, 100 pelvis, and 56 spine patients treated between 2012 and 2014. The patient identification study involved the generation of same-patient and different-patient image pairs. The patient misalignment study involved translating the setup image of a same-patient image pair away from the correctly registered alignment. H&N and pelvis image pairs were misaligned in 1 cm increments up to 5 cm in all six anatomical directions, while spine patients were misaligned to adjacent vertebral bodies.

Chapter 2 describes the development of the image similarity workflow. The system requires as inputs the fused image pair and a mask of the body contour, which was automatically generated using commercially available software. The workflow involves several pre-processing steps, including image resampling, voxel filtering, CT number remapping for Tomo images, and image filtering. Image similarity is then assessed with three commonly used similarity metrics and two custom-developed algorithmic comparisons. After a feature reduction and normalization step, these metrics are used to train and test five unique classification models, as discussed in Chapter 3.
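As an illustration of the kind of masked similarity comparison described above, the following minimal Python sketch computes three commonly used metrics (normalized cross-correlation, mutual information, and mean squared difference) between a fused setup/planning image pair inside a body-contour mask. It assumes the volumes are NumPy arrays already resampled to a common grid; the function and variable names are illustrative and do not correspond to the dissertation's actual implementation.

import numpy as np

def masked_similarity(setup_img, planning_img, body_mask):
    """Compare a 3D setup image with the planning CT inside a body-contour mask."""
    a = setup_img[body_mask].astype(np.float64)
    b = planning_img[body_mask].astype(np.float64)

    # Normalized cross-correlation of the masked voxels.
    a_z = (a - a.mean()) / a.std()
    b_z = (b - b.mean()) / b.std()
    ncc = float(np.mean(a_z * b_z))

    # Mutual information from a 64 x 64 joint intensity histogram.
    joint, _, _ = np.histogram2d(a, b, bins=64)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Mean squared intensity difference as a simple third metric.
    msd = float(np.mean((a - b) ** 2))
    return {"ncc": ncc, "mi": mi, "msd": msd}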
Chapter 3 also discusses aspects of model evaluation, including misclassification error, k-fold cross-validation, sensitivity, specificity, and ROC curves.

Chapter 4 summarizes the results of the workflow. For patient identification, the system achieves accuracies ranging from 96.4% to 100% across all anatomical sites and both imaging modalities. Spinal misalignments are detected with less than 5% error on both imaging modalities. Error rates of 1.3% and 4.3% were achieved for 1 cm H&N and pelvis shifts, respectively, on MVCT images. For CBCT images, the models produce error rates of 9.3%/8.5% and 3.1%/3.2% for 1 cm and 2 cm H&N/pelvis shifts, respectively. Larger shifts result in increased accuracy as well as higher sensitivity and specificity.

Chapter 5 provides an in-depth discussion of the workflow development and its important aspects. There are several potential ways to improve the algorithm in future studies, ranging from specific adjustments in algorithmic design to entirely new approaches to image similarity assessment. Future studies will allow for more robust error detection, contributing to improved patient safety in radiation therapy treatments.
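The model evaluation summarized for Chapter 3 above can be sketched as a standard cross-validated loop. The Python example below assumes the similarity metrics have been assembled into a NumPy feature matrix X with binary labels y (e.g., 1 = error present, 0 = correct setup); the choice of a logistic-regression classifier and the helper name are illustrative only, not the dissertation's five specific models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def evaluate_classifier(X, y, n_splits=5):
    """k-fold cross-validation reporting sensitivity, specificity, and ROC AUC."""
    sens, spec, aucs = [], [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        model = LogisticRegression(max_iter=1000)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        score = model.predict_proba(X[test_idx])[:, 1]
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        sens.append(tp / (tp + fn))   # true-positive rate
        spec.append(tn / (tn + fp))   # true-negative rate
        aucs.append(roc_auc_score(y[test_idx], score))
    return {"sensitivity": np.mean(sens),
            "specificity": np.mean(spec),
            "roc_auc": np.mean(aucs)}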
Keywords/Search Tags: Patient, 3D setup, Image, Errors, Radiation therapy, Radiotherapy, Planning CT, Similarity