A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling.

Jinquan Sun, Yinghuan Shi, Yang Gao, Dinggang Shen
{"title":"A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling.","authors":"Jinquan Sun,&nbsp;Yinghuan Shi,&nbsp;Yang Gao,&nbsp;Dinggang Shen","doi":"10.1007/978-3-319-67389-9_26","DOIUrl":null,"url":null,"abstract":"<p><p>In this paper, we investigate if the MR prostate segmentation performance could be improved, by only providing one-point labeling information in the prostate region. To achieve this goal, by asking the physician to first click one point inside the prostate region, we present a novel segmentation method by simultaneously integrating the boundary detection results and the patch-based prediction. Particularly, since the clicked point belongs to the prostate, we first generate the location-prior maps, with two basic assumptions: (1) a point closer to the clicked point should be with higher probability to be the prostate voxel, (2) a point separated by more boundaries to the clicked point, will have lower chance to be the prostate voxel. We perform the Canny edge detector and obtain two location-prior maps from horizontal and vertical directions, respectively. Then, the obtained location-prior maps along with the original MR images are fed into a multi-channel fully convolutional network to conduct the patch-based prediction. With the obtained prostate-likelihood map, we employ a level-set method to achieve the final segmentation. We evaluate the performance of our method on 22 MR images collected from 22 different patients, with the manual delineation provided as the ground truth for evaluation. The experimental results not only show the promising performance of our method but also demonstrate the one-point labeling could largely enhance the results when a pure patch-based prediction fails.</p>","PeriodicalId":90643,"journal":{"name":"Machine learning for multimodal interaction : ... international workshop, MLMI ... : revised selected papers. 
Workshop on Machine Learning for Multimodal Interaction","volume":"10541 ","pages":"220-228"},"PeriodicalIF":0.0000,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-319-67389-9_26","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning for multimodal interaction : ... international workshop, MLMI ... : revised selected papers. Workshop on Machine Learning for Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-319-67389-9_26","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2017/9/7 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, we investigate whether MR prostate segmentation performance can be improved by providing only one-point labeling information in the prostate region. To this end, after asking the physician to click a single point inside the prostate, we present a novel segmentation method that integrates boundary detection results with patch-based prediction. In particular, since the clicked point belongs to the prostate, we first generate location-prior maps under two basic assumptions: (1) a point closer to the clicked point is more likely to be a prostate voxel, and (2) a point separated from the clicked point by more boundaries is less likely to be a prostate voxel. We apply the Canny edge detector and obtain two location-prior maps, from the horizontal and vertical directions respectively. The location-prior maps, along with the original MR images, are then fed into a multi-channel fully convolutional network for patch-based prediction. Given the resulting prostate-likelihood map, we employ a level-set method to obtain the final segmentation. We evaluate our method on 22 MR images collected from 22 different patients, with manual delineations serving as the ground truth. The experimental results not only show the promising performance of our method but also demonstrate that one-point labeling can largely enhance the results where pure patch-based prediction fails.
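The abstract does not give the exact formula for the location-prior maps, but the two stated assumptions (decay with distance from the click, and attenuation for each Canny edge crossed) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `location_prior`, the exponential distance decay `exp(-d/sigma_d)`, and the per-crossing attenuation factor `alpha` are all assumptions introduced here; the edge map is taken as given (in the paper it would come from a Canny detector).

```python
import numpy as np

def location_prior(edge_map, click, axis=1, sigma_d=20.0, alpha=0.5):
    """Directional location-prior map from a single clicked point.

    edge_map : 2-D binary array (1 = boundary pixel, e.g. Canny output).
    click    : (row, col) of the physician's clicked point.
    axis     : 1 scans horizontally along rows, 0 vertically along columns.
    The prior decays exponentially with Euclidean distance from the click
    and is multiplied by `alpha` for every boundary pixel crossed between
    a pixel and the click along the scan direction (assumed forms).
    """
    h, w = edge_map.shape
    r0, c0 = click
    rows, cols = np.mgrid[0:h, 0:w]
    dist = np.hypot(rows - r0, cols - c0)

    crossings = np.zeros((h, w))
    if axis == 1:  # horizontal: count edges between each column and the click column
        for r in range(h):
            cum = np.cumsum(edge_map[r])          # cumulative edge count along the row
            crossings[r] = np.abs(cum - cum[c0])  # edges between pixel and click
    else:          # vertical: same idea along each column
        for c in range(w):
            cum = np.cumsum(edge_map[:, c])
            crossings[:, c] = np.abs(cum - cum[r0])

    return np.exp(-dist / sigma_d) * (alpha ** crossings)
```

Under this sketch, the prior is 1 at the clicked point, is identical for equidistant pixels with no intervening boundary, and drops sharply once a boundary lies between a pixel and the click, matching assumptions (1) and (2). The horizontal and vertical maps would then be stacked with the MR image as input channels to the fully convolutional network.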

