Demonstration of segmentation with interactive graph cuts

Yuri Boykov, M. Jolly
DOI: 10.1109/ICCV.2001.937703
Published in: Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001
Publication date: 2001-07-07
Citations: 6

Abstract

We demonstrate a new technique for general-purpose interactive segmentation of N-dimensional images. The method creates two segments: “object” and “background”. The technical details can be found in our paper [1] in these proceedings. Below we concentrate on the actual interface. The user can enter seeds via a mouse-operated brush in red (for object) or blue (for background). The size of the brush can be changed depending on the size of the object. The user should paint some pixels in the object of interest and some in the background. The seeds provide clues about what the user intends to segment. As soon as initial seeds are entered, the whole image/volume can be segmented automatically. Essentially, the algorithm tries to “predict” how the user would want to paint the rest of the image. Segmentation results are presented by highlighting the object and background segments in red and blue. Thus, the object segment appears reddish while the background appears bluish. This gives the intuitive feeling that the algorithm completes the painting started by the user. An optimal segmentation can be recomputed very efficiently when the user adds or removes any seeds. This allows the user to correct any imperfections in the result quickly via very intuitive interactions. If the algorithm makes a mistake, the user can add a stroke of red paint in the bluish segment (or blue paint in the reddish segment). The new segmentation very quickly repaints the whole image to comply with the additional hints from the user. Our method is not sensitive to the exact positioning of seeds. Normally, the results do not change if the seeds are moved within the same object in the image or volume. Our method applies to N-D images (volumes). In the case of 3D data, the seeds are entered in selected representative slices. The information is automatically propagated between the slices because we compute our optimal segmentation directly in the volume. Thus, the whole volume can be segmented based on seeds in a single slice.

2. Examples
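The energy formulation and the exact max-flow construction are given in the technical paper [1]; the abstract only sketches the interface. As a purely illustrative toy (not the authors' implementation), the core idea — seeds as hard terminal links, neighbor links weighted by intensity similarity, and a min-cut separating "object" from "background" — can be sketched on a 1-D image with a plain Edmonds–Karp max-flow. The function names (`maxflow_source_side`, `segment_1d`) and the Gaussian n-link weighting are assumptions for this sketch, not from the paper.

```python
import math
from collections import deque

def maxflow_source_side(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes on the source
    side of the resulting minimum cut (reachable in the residual graph)."""
    res = {u: dict(edges) for u, edges in capacity.items()}  # residual capacities
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Trace the path back and push the bottleneck amount along it
        path, v = [], sink
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] = res[v].get(u, 0) + bottleneck
    # Nodes still reachable from the source form the "object" side of the cut
    side, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in side:
                side.add(v)
                queue.append(v)
    return side

def segment_1d(image, obj_seeds, bkg_seeds, sigma=10.0, hard=10**9):
    """Label each pixel 'object' or 'background' given hard seed pixels."""
    n = len(image)
    cap = {i: {} for i in range(n)}
    cap["S"], cap["T"] = {}, {}
    # n-links: neighbors with similar intensity get a high (expensive-to-cut)
    # weight, so the min cut falls on intensity edges
    for i in range(n - 1):
        w = 1 + int(100 * math.exp(-(image[i] - image[i + 1]) ** 2 / (2 * sigma**2)))
        cap[i][i + 1] = cap[i + 1][i] = w
    # t-links: seeds are hard constraints (effectively infinite capacity)
    for i in obj_seeds:
        cap["S"][i] = hard
    for i in bkg_seeds:
        cap[i]["T"] = hard
    obj_side = maxflow_source_side(cap, "S", "T")
    return ["object" if i in obj_side else "background" for i in range(n)]
```

For example, `segment_1d([10, 12, 11, 200, 205, 198], {0}, {5})` labels the first three pixels "object" and the last three "background": the cheapest cut severs the single weak n-link at the large intensity jump, which mirrors how re-running max-flow after adding a corrective seed stroke repaints the whole image to respect the new hard constraint.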