Salient object detection dataset with adversarial attacks for genetic programming and neural networks.

Data in Brief · IF: 1.0 · Q3 (MULTIDISCIPLINARY SCIENCES)
Pub Date: 2024-11-04 · eCollection Date: 2024-12-01 · DOI: 10.1016/j.dib.2024.111043
Matthieu Olague, Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez
{"title":"具有对抗性攻击的显著目标检测数据集,用于遗传规划和神经网络。","authors":"Matthieu Olague, Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez","doi":"10.1016/j.dib.2024.111043","DOIUrl":null,"url":null,"abstract":"<p><p>Machine learning is central to mainstream technology and outperforms classical approaches to handcrafted feature design. Aside from its learning process for artificial feature extraction, it has an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention since humans or machines can change the predictions of programs entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective but whose trustworthiness represents a significant issue requiring analysis and solutions to hackers' attacks. This dataset is an image repository containing five different image databases to evaluate adversarial robustness by introducing 12 adversarial examples, each leveraging a known adversarial attack or noise perturbation. The dataset comprises 56,387 digital images, resulting from applying adversarial examples on subsets of four standard databases (i.e., FT, PASCAL-S, ImgSal, DUTS) and a fifth database (SNPL) portraying a real-world visual attention problem of a shorebird called the snowy plover. We include original and rescaled images from the five databases used with the adversarial examples as part of this dataset for easy access and distribution.</p>","PeriodicalId":10973,"journal":{"name":"Data in Brief","volume":"57 ","pages":"111043"},"PeriodicalIF":1.0000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11647110/pdf/","citationCount":"0","resultStr":"{\"title\":\"Salient object detection dataset with adversarial attacks for genetic programming and neural networks.\",\"authors\":\"Matthieu Olague, Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez\",\"doi\":\"10.1016/j.dib.2024.111043\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Machine learning is central to mainstream technology and outperforms classical approaches to handcrafted feature design. Aside from its learning process for artificial feature extraction, it has an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention since humans or machines can change the predictions of programs entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective but whose trustworthiness represents a significant issue requiring analysis and solutions to hackers' attacks. This dataset is an image repository containing five different image databases to evaluate adversarial robustness by introducing 12 adversarial examples, each leveraging a known adversarial attack or noise perturbation. The dataset comprises 56,387 digital images, resulting from applying adversarial examples on subsets of four standard databases (i.e., FT, PASCAL-S, ImgSal, DUTS) and a fifth database (SNPL) portraying a real-world visual attention problem of a shorebird called the snowy plover. 
We include original and rescaled images from the five databases used with the adversarial examples as part of this dataset for easy access and distribution.</p>\",\"PeriodicalId\":10973,\"journal\":{\"name\":\"Data in Brief\",\"volume\":\"57 \",\"pages\":\"111043\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11647110/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data in Brief\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.dib.2024.111043\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/12/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q3\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data in Brief","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.dib.2024.111043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Machine learning is central to mainstream technology and outperforms classical approaches based on handcrafted feature design. Beyond learning features automatically rather than relying on manual extraction, it follows an end-to-end paradigm from input to output and achieves highly accurate results. However, its robustness to malicious, imperceptible perturbations has raised security concerns, since humans or machines can completely change a program's predictions. Salient object detection is a research area where deep convolutional neural networks have proven effective, but their trustworthiness remains a significant issue that requires analysis of, and defenses against, such attacks. This dataset is an image repository containing five image databases for evaluating adversarial robustness; it introduces 12 adversarial examples, each leveraging a known adversarial attack or noise perturbation. The dataset comprises 56,387 digital images, produced by applying the adversarial examples to subsets of four standard databases (FT, PASCAL-S, ImgSal, DUTS) and a fifth database (SNPL) portraying a real-world visual attention problem involving a shorebird, the snowy plover. We include the original and rescaled images from the five databases alongside the adversarial examples as part of this dataset for easy access and distribution.
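
The abstract does not enumerate the 12 perturbations, but a typical "known adversarial attack" and "noise perturbation" of the kind it describes can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' code: the fast gradient sign method (FGSM) and salt-and-pepper noise are common choices, and the classifier `model`, the epsilon value, and both function names are assumptions for the example only.

```python
# Illustrative sketch of two perturbations of the kind the dataset applies:
# FGSM (a known gradient-based adversarial attack) and salt-and-pepper noise.
# `model` is assumed to be any differentiable classifier; epsilon = 8/255 is
# a conventional budget, not a value taken from the paper.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=8 / 255):
    """Return an FGSM adversarial example for an image batch.

    image: float tensor (N, 3, H, W) with values in [0, 1]
    label: ground-truth targets, shape (N,)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def salt_and_pepper(image, amount=0.05):
    """Corrupt a fraction `amount` of pixels with salt (1) or pepper (0) noise."""
    noisy = image.clone()
    mask = torch.rand_like(image)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1.0 - amount / 2] = 1.0    # salt
    return noisy
```

In the dataset's setting, perturbed images like these are paired with the unmodified originals, so that salient object detectors can be scored on both and their robustness gap measured.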

Source journal
Data in Brief (MULTIDISCIPLINARY SCIENCES)
CiteScore: 3.10
Self-citation rate: 0.00%
Articles per year: 996
Review time: 70 days
Journal description: Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
-Thoroughly describe your data, facilitating reproducibility.
-Make your data, which is often buried in supplementary material, easier to find.
-Increase traffic towards associated research articles and data, leading to more citations.
-Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.