Humans need not label more humans: Occlusion Copy & Paste for Occluded Human Instance Segmentation

Evan Ling, De-Kai Huang, Minhoe Hur
BMVC: Proceedings of the British Machine Vision Conference, 2022, page 329. Published 2022-10-07. DOI: 10.48550/arXiv.2210.03686
Citations: 1

Abstract

Modern object detection and instance segmentation networks stumble when picking out humans in crowded or highly occluded scenes. Yet, these are often scenarios where we require our detectors to work well. Many works have approached this problem with model-centric improvements. While they have been shown to work to some extent, these supervised methods still need sufficient relevant examples (i.e. occluded humans) during training for the improvements to be maximised. In our work, we propose a simple yet effective data-centric approach, Occlusion Copy&Paste, to introduce occluded examples to models during training - we tailor the general copy&paste augmentation approach to tackle the difficult problem of same-class occlusion. It improves instance segmentation performance on occluded scenarios for"free"just by leveraging on existing large-scale datasets, without additional data or manual labelling needed. In a principled study, we show whether various proposed add-ons to the copy&paste augmentation indeed contribute to better performance. Our Occlusion Copy&Paste augmentation is easily interoperable with any models: by simply applying it to a recent generic instance segmentation model without explicit model architectural design to tackle occlusion, we achieve state-of-the-art instance segmentation performance on the very challenging OCHuman dataset. Source code is available at https://github.com/levan92/occlusion-copy-paste.
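The core mechanics of a copy&paste augmentation for same-class occlusion can be sketched in a few lines: crop a segmented human instance from a source image, paste it onto a target image, and carve its silhouette out of the existing instance masks so annotations stay consistent with the new occlusion. The sketch below is a minimal illustration of this idea, not the authors' actual implementation; the function name and the uniform paste-location choice are assumptions (the paper's tailored variant targets same-class occlusion more deliberately).

```python
import numpy as np

def occlusion_copy_paste(tgt_img, tgt_masks, src_img, src_mask, rng=None):
    """Paste one source instance onto the target image so it occludes
    existing instances (illustrative sketch, not the paper's code).

    tgt_img:  (H, W, 3) uint8 target image
    tgt_masks: list of (H, W) boolean instance masks for tgt_img
    src_img:  source image containing the instance to paste
    src_mask: (h, w)-compatible boolean mask of that instance
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = tgt_img.shape[:2]

    # Crop the source instance to its bounding box.
    ys, xs = np.nonzero(src_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = src_img[y0:y1, x0:x1]
    pmask = src_mask[y0:y1, x0:x1]
    h, w = pmask.shape

    # Pick a paste location uniformly at random; biasing it towards
    # existing instances would encourage the same-class occlusion
    # the paper focuses on.
    ty = int(rng.integers(0, max(H - h, 0) + 1))
    tx = int(rng.integers(0, max(W - w, 0) + 1))

    # Composite the instance pixels onto the target image.
    out_img = tgt_img.copy()
    region = out_img[ty:ty + h, tx:tx + w]
    region[pmask] = patch[pmask]

    # The pasted instance occludes whatever was underneath, so its
    # silhouette is subtracted from every pre-existing instance mask.
    full = np.zeros((H, W), dtype=bool)
    full[ty:ty + h, tx:tx + w] = pmask
    new_masks = [m & ~full for m in tgt_masks] + [full]
    return out_img, new_masks
```

No new images or labels are needed: both the pasted instance and the updated masks come from existing annotations, which is the "free" improvement the abstract describes.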