{"title":"Salient object detection dataset with adversarial attacks for genetic programming and neural networks.","authors":"Matthieu Olague, Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez","doi":"10.1016/j.dib.2024.111043","DOIUrl":null,"url":null,"abstract":"<p><p>Machine learning is central to mainstream technology and outperforms classical approaches to handcrafted feature design. Aside from its learning process for artificial feature extraction, it has an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention since humans or machines can change the predictions of programs entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective but whose trustworthiness represents a significant issue requiring analysis and solutions to hackers' attacks. This dataset is an image repository containing five different image databases to evaluate adversarial robustness by introducing 12 adversarial examples, each leveraging a known adversarial attack or noise perturbation. The dataset comprises 56,387 digital images, resulting from applying adversarial examples on subsets of four standard databases (i.e., FT, PASCAL-S, ImgSal, DUTS) and a fifth database (SNPL) portraying a real-world visual attention problem of a shorebird called the snowy plover. 
We include original and rescaled images from the five databases used with the adversarial examples as part of this dataset for easy access and distribution.</p>","PeriodicalId":10973,"journal":{"name":"Data in Brief","volume":"57 ","pages":"111043"},"PeriodicalIF":1.0000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11647110/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data in Brief","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.dib.2024.111043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Machine learning is central to mainstream technology and outperforms classical approaches based on handcrafted feature design. Beyond its learned feature extraction, it follows an end-to-end paradigm from input to output, reaching highly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention, since humans or machines can change the predictions of programs entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective but whose trustworthiness represents a significant issue, requiring analysis and defenses against adversarial attacks. This dataset is an image repository containing five different image databases for evaluating adversarial robustness through 12 adversarial examples, each leveraging a known adversarial attack or noise perturbation. The dataset comprises 56,387 digital images, resulting from applying adversarial examples to subsets of four standard databases (i.e., FT, PASCAL-S, ImgSal, DUTS) and a fifth database (SNPL) portraying a real-world visual attention problem of a shorebird called the snowy plover. We include original and rescaled images from the five databases used with the adversarial examples as part of this dataset for easy access and distribution.
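The abstract does not enumerate the 12 attacks, so as an illustration only, here is a minimal sketch of one generic noise-perturbation adversarial example of the kind the dataset describes: additive Gaussian noise applied to an image array, clipped back to the valid intensity range. The function name and parameters are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def add_gaussian_noise(image, sigma=0.1, seed=0):
    """Perturb a float image in [0, 1] with zero-mean Gaussian noise.

    This is a generic noise perturbation, not a specific attack from
    the dataset; the signature is hypothetical.
    """
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip so the perturbed image remains a valid intensity map.
    return np.clip(noisy, 0.0, 1.0)

# Example: perturb a small synthetic grayscale "image".
img = np.full((4, 4), 0.5)
perturbed = add_gaussian_noise(img, sigma=0.05)
print(perturbed.shape)
```

In practice, such perturbations are kept small (low `sigma`) so the change is imperceptible to humans while still able to flip a detector's saliency prediction.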
Journal description:
Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.