Publication
Incorporating minimal user input into deep learning based image segmentation
- Last modified
- 05/22/2025
- Authors
- Maysam Shahedi, University of Texas at Dallas; Martin Halicek, University of Texas at Dallas; James D. Dormer, University of Texas at Dallas; Baowei Fei, Emory University
- Language
- English
- Date
- 2021-01-01
- Publisher
- SPIE Publications
- Copyright Statement
- © (2020) Society of Photo-Optical Instrumentation Engineers (SPIE).
- Title of Journal or Parent Work
- Volume
- 11313
- Grant/Funding Information
- This research was supported in part by the U.S. National Institutes of Health (NIH) grants (R01CA156775, R01CA204254, R01HL140325, and R21CA231911) and by the Cancer Prevention and Research Institute of Texas (CPRIT) grant RP190588.
- Abstract
- Computer-assisted image segmentation can help clinicians delineate organ borders faster and with lower inter-observer variability. Convolutional neural networks (CNNs) have recently become widely used for automatic image segmentation. In this study, we incorporated observer input to supervise CNNs and improve segmentation accuracy: a set of sparse surface points was added as an additional network input. We tested the technique by applying minimal user interactions to supervise segmentation of the prostate on magnetic resonance images. Using U-Net and a new U-Net-based architecture with a dual input path (DIP U-Net), we showed that our supervising technique significantly increased the segmentation accuracy of both networks compared to fully automatic segmentation with U-Net, and that DIP U-Net outperformed U-Net for supervised segmentation. Comparing our results with the measured inter-expert observer difference in manual segmentation suggests that about 15 to 20 selected surface points can achieve performance comparable to manual segmentation.
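- The abstract describes feeding a set of sparse user-selected surface points to the network as an additional input. As a minimal illustrative sketch (not the paper's actual implementation; the function name, the Gaussian heat-map encoding, and the parameter `sigma` are assumptions), one common way to present such points to a CNN is to rasterize them into an extra image channel stacked with the MR slice:

```python
import numpy as np

def point_channel(shape, points, sigma=3.0):
    """Encode sparse surface points as a Gaussian heat-map channel.

    shape  : (H, W) of the image slice
    points : list of (row, col) user-selected surface points
    sigma  : spread of each point's Gaussian footprint, in pixels
    """
    rows, cols = np.indices(shape)
    channel = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        # Take the element-wise max so overlapping points keep peak value 1.
        channel = np.maximum(
            channel,
            np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2)),
        )
    return channel

# Stack the image and the point channel into a 2-channel network input.
image = np.random.rand(128, 128).astype(np.float32)   # stand-in MR slice
points = [(30, 40), (64, 100), (90, 55)]              # minimal user input
net_input = np.stack([image, point_channel(image.shape, points)])
print(net_input.shape)  # (2, 128, 128)
```

- A dual-input-path network like the DIP U-Net named in the abstract could instead route the point channel through its own encoder branch rather than concatenating at the input; the sketch above only shows the simpler single-tensor encoding.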
- Research Categories
- Health Sciences, Radiology
- Engineering, Biomedical
- Computer Science
Items
| Title | File Description | Date Uploaded | Visibility |
|---|---|---|---|
| Publication File - vngw6.pdf | Primary Content | 2025-04-30 | Public |