Pre-trained SAM as data augmentation for image segmentation

IF 8.4 · CAS Zone 2 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Junjun Wu, Yunbo Rao, Shaoning Zeng, Bob Zhang
{"title":"预训练的SAM作为图像分割的数据增强","authors":"Junjun Wu,&nbsp;Yunbo Rao,&nbsp;Shaoning Zeng,&nbsp;Bob Zhang","doi":"10.1049/cit2.12381","DOIUrl":null,"url":null,"abstract":"<p>Data augmentation plays an important role in training deep neural model by expanding the size and diversity of the dataset. Initially, data augmentation mainly involved some simple transformations of images. Later, in order to increase the diversity and complexity of data, more advanced methods appeared and evolved to sophisticated generative models. However, these methods required a mass of computation of training or searching. In this paper, a novel training-free method that utilises the Pre-Trained Segment Anything Model (SAM) model as a data augmentation tool (PTSAM-DA) is proposed to generate the augmented annotations for images. Without the need for training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingenious than simple manipulations without incurring huge computation for training a data augmentation model. Multiple comparative experiments on three datasets are conducted, including an in-house dataset, ADE20K and COCO2017. On this in-house dataset, namely Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. Consequently, large vision models like SAM are proven to be promising not only in image segmentation but also in data augmentation.</p>","PeriodicalId":46211,"journal":{"name":"CAAI Transactions on Intelligence Technology","volume":"10 1","pages":"268-282"},"PeriodicalIF":8.4000,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12381","citationCount":"0","resultStr":"{\"title\":\"Pre-trained SAM as data augmentation for image segmentation\",\"authors\":\"Junjun Wu,&nbsp;Yunbo Rao,&nbsp;Shaoning Zeng,&nbsp;Bob Zhang\",\"doi\":\"10.1049/cit2.12381\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Data augmentation plays an important role in training deep neural model by expanding the size and diversity of the dataset. Initially, data augmentation mainly involved some simple transformations of images. Later, in order to increase the diversity and complexity of data, more advanced methods appeared and evolved to sophisticated generative models. However, these methods required a mass of computation of training or searching. In this paper, a novel training-free method that utilises the Pre-Trained Segment Anything Model (SAM) model as a data augmentation tool (PTSAM-DA) is proposed to generate the augmented annotations for images. Without the need for training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingenious than simple manipulations without incurring huge computation for training a data augmentation model. Multiple comparative experiments on three datasets are conducted, including an in-house dataset, ADE20K and COCO2017. On this in-house dataset, namely Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. 
Consequently, large vision models like SAM are proven to be promising not only in image segmentation but also in data augmentation.</p>\",\"PeriodicalId\":46211,\"journal\":{\"name\":\"CAAI Transactions on Intelligence Technology\",\"volume\":\"10 1\",\"pages\":\"268-282\"},\"PeriodicalIF\":8.4000,\"publicationDate\":\"2024-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12381\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"CAAI Transactions on Intelligence Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/cit2.12381\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"CAAI Transactions on Intelligence Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cit2.12381","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract



Data augmentation plays an important role in training deep neural models by expanding the size and diversity of a dataset. Initially, data augmentation mainly involved simple image transformations. Later, to increase the diversity and complexity of data, more advanced methods appeared and evolved into sophisticated generative models. However, these methods require massive computation for training or searching. In this paper, a novel training-free method that utilises the pre-trained Segment Anything Model (SAM) as a data augmentation tool (PTSAM-DA) is proposed to generate augmented annotations for images. Without the need for training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingeniously than by simple manipulations, without incurring the huge computation of training a data augmentation model. Multiple comparative experiments are conducted on three datasets: an in-house dataset, ADE20K, and COCO2017. On the in-house dataset, namely the Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. Consequently, large vision models like SAM are shown to be promising not only in image segmentation but also in data augmentation.
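
The pipeline described in the abstract (derive a box prompt from each original annotation, then let a pre-trained SAM re-segment the object to obtain an alternative annotation) can be sketched with Meta's segment-anything package. This is a minimal illustration of the idea, not the authors' exact implementation; the checkpoint path, box padding, and the single-instance binary-mask format are illustrative assumptions.

```python
# Sketch of the PTSAM-DA idea: derive a box prompt from an existing
# ground-truth mask and ask a pre-trained SAM to re-segment the object,
# yielding an alternative ("augmented") annotation.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor


def mask_to_box(mask: np.ndarray, pad: int = 5) -> np.ndarray:
    """Bounding box [x0, y0, x1, y1] of a binary mask, slightly padded."""
    ys, xs = np.where(mask > 0)
    h, w = mask.shape
    return np.array([max(xs.min() - pad, 0), max(ys.min() - pad, 0),
                     min(xs.max() + pad, w - 1), min(ys.max() + pad, h - 1)])


def augment_annotation(image: np.ndarray, gt_mask: np.ndarray,
                       predictor: SamPredictor) -> np.ndarray:
    """Feed the box prompt from the original annotation to SAM and
    return SAM's predicted mask as the augmented annotation."""
    predictor.set_image(image)                # image: HxWx3, RGB, uint8
    box = mask_to_box(gt_mask)
    masks, scores, _ = predictor.predict(
        point_coords=None, point_labels=None,
        box=box[None, :], multimask_output=False)
    return masks[0].astype(np.uint8)          # HxW binary mask


if __name__ == "__main__":
    # "sam_vit_h.pth" is a placeholder for a downloaded SAM checkpoint.
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
    predictor = SamPredictor(sam)
    image = np.zeros((512, 512, 3), dtype=np.uint8)    # stand-in image
    gt_mask = np.zeros((512, 512), dtype=np.uint8)
    gt_mask[100:300, 150:350] = 1                      # stand-in annotation
    aug_mask = augment_annotation(image, gt_mask, predictor)
    print(aug_mask.shape, aug_mask.sum())
```

Because no augmentation model is trained or searched, the cost reduces to one SAM forward pass per annotated instance, and the resulting masks can be mixed with or substituted for the original annotations when training a segmentation model.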

Source journal: CAAI Transactions on Intelligence Technology (Computer Science, Artificial Intelligence)
CiteScore: 11.00
Self-citation rate: 3.90%
Articles published per year: 134
Review time: 35 weeks
Journal description: CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. It is a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research which is openly accessible to read and share worldwide.