HiSEG: Human assisted instance segmentation

IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q2, Computer Science, Software Engineering
Muhammed Korkmaz, T. Metin Sezgin
{"title":"HiSEG:人工辅助实例分割","authors":"Muhammed Korkmaz,&nbsp;T. Metin Sezgin","doi":"10.1016/j.cag.2024.104061","DOIUrl":null,"url":null,"abstract":"<div><p>Instance segmentation is a form of image detection which has a range of applications, such as object refinement, medical image analysis, and image/video editing, all of which demand a high degree of accuracy. However, this precision is often beyond the reach of what even state-of-the-art, fully automated instance segmentation algorithms can deliver. The performance gap becomes particularly prohibitive for small and complex objects. Practitioners typically resort to fully manual annotation, which can be a laborious process. In order to overcome this problem, we propose a novel approach to enable more precise predictions and generate higher-quality segmentation masks for high-curvature, complex and small-scale objects. Our human-assisted segmentation method, HiSEG, augments the existing Strong Mask R-CNN network to incorporate human-specified partial boundaries. We also present a dataset of hand-drawn partial object boundaries, which we refer to as “human attention maps”. In addition, the Partial Sketch Object Boundaries (PSOB) dataset contains hand-drawn partial object boundaries which represent curvatures of an object’s ground truth mask with several pixels. Through extensive evaluation using the PSOB dataset, we show that HiSEG outperforms state-of-the art methods such as Mask R-CNN, Strong Mask R-CNN, Mask2Former, and Segment Anything, achieving respective increases of +42.0, +34.9, +29.9, and +13.4 points in AP<span><math><msub><mrow></mrow><mrow><mtext>Mask</mtext></mrow></msub></math></span> metrics for these four models. We hope that our novel approach will set a baseline for future human-aided deep learning models by combining fully automated and interactive instance segmentation architectures.</p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104061"},"PeriodicalIF":2.5000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HiSEG: Human assisted instance segmentation\",\"authors\":\"Muhammed Korkmaz,&nbsp;T. Metin Sezgin\",\"doi\":\"10.1016/j.cag.2024.104061\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Instance segmentation is a form of image detection which has a range of applications, such as object refinement, medical image analysis, and image/video editing, all of which demand a high degree of accuracy. However, this precision is often beyond the reach of what even state-of-the-art, fully automated instance segmentation algorithms can deliver. The performance gap becomes particularly prohibitive for small and complex objects. Practitioners typically resort to fully manual annotation, which can be a laborious process. In order to overcome this problem, we propose a novel approach to enable more precise predictions and generate higher-quality segmentation masks for high-curvature, complex and small-scale objects. Our human-assisted segmentation method, HiSEG, augments the existing Strong Mask R-CNN network to incorporate human-specified partial boundaries. We also present a dataset of hand-drawn partial object boundaries, which we refer to as “human attention maps”. 
In addition, the Partial Sketch Object Boundaries (PSOB) dataset contains hand-drawn partial object boundaries which represent curvatures of an object’s ground truth mask with several pixels. Through extensive evaluation using the PSOB dataset, we show that HiSEG outperforms state-of-the art methods such as Mask R-CNN, Strong Mask R-CNN, Mask2Former, and Segment Anything, achieving respective increases of +42.0, +34.9, +29.9, and +13.4 points in AP<span><math><msub><mrow></mrow><mrow><mtext>Mask</mtext></mrow></msub></math></span> metrics for these four models. We hope that our novel approach will set a baseline for future human-aided deep learning models by combining fully automated and interactive instance segmentation architectures.</p></div>\",\"PeriodicalId\":50628,\"journal\":{\"name\":\"Computers & Graphics-Uk\",\"volume\":\"124 \",\"pages\":\"Article 104061\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Graphics-Uk\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0097849324001961\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Graphics-Uk","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0097849324001961","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract


Instance segmentation is a form of image detection with a range of applications, such as object refinement, medical image analysis, and image/video editing, all of which demand a high degree of accuracy. However, this precision is often beyond what even state-of-the-art, fully automated instance segmentation algorithms can deliver. The performance gap becomes particularly prohibitive for small and complex objects, and practitioners typically resort to fully manual annotation, which can be a laborious process. To overcome this problem, we propose a novel approach that enables more precise predictions and generates higher-quality segmentation masks for high-curvature, complex, and small-scale objects. Our human-assisted segmentation method, HiSEG, augments the existing Strong Mask R-CNN network to incorporate human-specified partial boundaries. We also present the Partial Sketch Object Boundaries (PSOB) dataset of hand-drawn partial object boundaries, which we refer to as “human attention maps”; each boundary traces the curvature of an object’s ground-truth mask with a few pixels. Through extensive evaluation on the PSOB dataset, we show that HiSEG outperforms state-of-the-art methods such as Mask R-CNN, Strong Mask R-CNN, Mask2Former, and Segment Anything, achieving respective gains of +42.0, +34.9, +29.9, and +13.4 points in AP_Mask over these four models. We hope that our novel approach will set a baseline for future human-aided deep learning models by combining fully automated and interactive instance segmentation architectures.
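The abstract does not spell out how the human-specified partial boundaries are fed into the Strong Mask R-CNN backbone. The sketch below is a minimal, hypothetical illustration of one possible conditioning scheme: the hand-drawn partial boundary is rasterized into a binary "human attention map" and concatenated with the RGB image as a fourth input channel. The function names (rasterize_partial_boundary, stack_human_attention) and the channel-concatenation fusion are illustrative assumptions, not HiSEG's actual architecture.

```python
# Hypothetical sketch: turn a hand-drawn partial object boundary into a binary
# "human attention map" and stack it with the image as an extra input channel.
# The fusion strategy shown here (channel concatenation) is an assumption.
import numpy as np
from skimage.draw import line


def rasterize_partial_boundary(points, height, width):
    """Rasterize an ordered list of (row, col) sketch points into a binary map."""
    attention = np.zeros((height, width), dtype=np.float32)
    for (r0, c0), (r1, c1) in zip(points[:-1], points[1:]):
        rr, cc = line(int(r0), int(c0), int(r1), int(c1))
        attention[rr.clip(0, height - 1), cc.clip(0, width - 1)] = 1.0
    return attention


def stack_human_attention(image, points):
    """Concatenate the attention map to an (H, W, 3) image as a 4th channel."""
    h, w = image.shape[:2]
    attention = rasterize_partial_boundary(points, h, w)
    return np.concatenate([image.astype(np.float32), attention[..., None]], axis=-1)


# Usage: a user traces a few points along a high-curvature part of the object.
rgb = np.zeros((256, 256, 3), dtype=np.uint8)        # stand-in image
sketch = [(40, 50), (60, 90), (55, 140), (80, 180)]  # partial boundary, (row, col)
conditioned = stack_human_attention(rgb, sketch)     # shape (256, 256, 4)
```

Under this assumption, the network's first convolution would need to accept the extra channel; the attention map could just as plausibly be injected at the RoI or mask-head level, so the paper should be consulted for the actual design.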

Source journal: Computers & Graphics-Uk (Engineering/Technology – Computer Science, Software Engineering)
CiteScore: 5.30
Self-citation rate: 12.00%
Articles per year: 173
Review time: 38 days
Journal description: Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.