About this item:


Author Notes:

Xiaofeng Yang: xiaofeng.yang@emory.edu

Tian Liu: tliu34@emory.edu

Research Funding:

This research is supported in part by the Department of Defense (DoD) Prostate Cancer Research Program (PCRP) Award W81XWH-13-1-0269 and by the Winship Cancer Institute.

Keywords:

  • Prostate segmentation
  • ultrasound
  • anatomical feature
  • machine learning

3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework

Journal Title:

Proceedings of SPIE

Volume:

9784

Publisher:

Society of Photo-Optical Instrumentation Engineers (SPIE)

Type of Work:

Article | Final Publisher PDF

Abstract:

We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as the signature of each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate in a new patient's image. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (the gold standard); the mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
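The abstract reports accuracy as the mean volume Dice overlap coefficient between the automatic and manual segmentations. As a minimal illustration (not the authors' code), the Dice overlap between two binary voxel label arrays can be computed as follows; the toy 1-D "volumes" below are purely hypothetical:

```python
def dice_coefficient(seg_a, seg_b):
    """Volume Dice overlap between two binary segmentations:
    Dice = 2*|A ∩ B| / (|A| + |B|), where A and B are the sets
    of voxels labeled as prostate in each segmentation."""
    a = {i for i, v in enumerate(seg_a) if v}
    b = {i for i, v in enumerate(seg_b) if v}
    if not a and not b:
        return 1.0  # both empty: perfect (trivial) agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy flattened volumes: automatic vs. manual (gold-standard) voxel labels.
auto_seg   = [0, 1, 1, 1, 1, 0, 0, 0]
manual_seg = [0, 0, 1, 1, 1, 1, 0, 0]
print(dice_coefficient(auto_seg, manual_seg))  # 0.75
```

In practice the segmentations would be 3D voxel grids (e.g. flattened NumPy arrays), but the overlap formula is unchanged; a mean Dice of 89.7%, as reported, indicates strong agreement with the manual contours.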

Copyright information:

© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
