Title: Threat of Adversarial Attacks within Deep Learning: Survey
Authors: Roshni Singh, Ataussamad
DOI: 10.2174/2666255816666221125155715
Journal: Recent Advances in Computer Science and Communications (JCR Q3, Computer Science)
Publication date: 2022-11-25 (Journal Article)
Citations: 0
Abstract
Deep learning has become the center of the recent ascent of artificial intelligence. However, many deep learning models are vulnerable to adversarially crafted inputs: in the adversarial paradigm, small perturbations can cause a deep neural network (DNN) to misclassify inputs it would otherwise handle correctly, which raises serious security concerns. DNNs solve complex problems accurately and are widely employed in vision research, including many tasks with critical security applications. We revisit the contributions of computer vision research to adversarial attacks on deep learning and discuss the corresponding defenses. The area has evolved significantly since the first-generation methods, with many authors contributing new ideas. For correctness and authenticity, the focus is on peer-reviewed articles published in prestigious computer vision and deep learning venues. Apart from the literature review, this paper defines some standard technical terms for non-experts in the field. It reviews adversarial attacks via various methods and techniques, along with their defenses within the deep learning area and the future scope. Lastly, the survey provides a viewpoint on the research in this area of computer vision.
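To make the adversarial paradigm concrete for non-experts, the sketch below illustrates the fast gradient sign method (FGSM), one of the first-generation attacks the survey covers: an input is shifted by a small step in the direction that increases the model's loss. This is a minimal, self-contained illustration on a toy logistic-regression "model" with hypothetical weights (not taken from the survey), where the loss gradient can be written analytically; real attacks target deep networks via automatic differentiation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient (FGSM step)."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y_true) * w         # d(cross-entropy loss)/dx for this model
    return x + eps * np.sign(grad_x)  # bounded perturbation: ||x_adv - x||_inf <= eps

# Hypothetical model weights and a clean input of true class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

clean_pred = sigmoid(w @ x + b) > 0.5          # correctly classified as 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=1.0)
adv_pred = sigmoid(w @ x_adv + b) > 0.5        # flipped to class 0
```

The key point the survey's threat model rests on is the bound: the attacker changes no coordinate of the input by more than eps, yet the prediction flips.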