Adversarial attacks on computer vision algorithms using natural perturbations
A. Ramanathan, L. Pullum, Zubir Husein, Sunny Raj, N. Torosdagli, S. Pattanaik, Sumit Kumar Jha
2017 Tenth International Conference on Contemporary Computing (IC3), August 2017. DOI: 10.1109/IC3.2017.8284294

Abstract: Verifying the correctness of intelligent embedded systems is notoriously difficult due to the use of machine learning algorithms that cannot provide guarantees of deterministic correctness. In this paper, our validation efforts demonstrate that the OpenCV Histogram of Oriented Gradients (HOG) implementation for human detection is susceptible to errors due to both malicious perturbations and naturally occurring fog phenomena. To the best of our knowledge, we are the first to explicitly employ a natural perturbation (like fog) as an adversarial attack using methods from computer graphics. Our experimental results show that computer vision algorithms are susceptible to errors under a small set of naturally occurring perturbations even if they are robust to a majority of such perturbations. Our methods and results may be of interest to the designers, developers and validation teams of intelligent cyber-physical systems such as autonomous cars.
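The core idea of using fog as a natural perturbation can be sketched as a simple image transform. The helper below is a hypothetical, minimal stand-in (a uniform alpha-blend toward gray) for the paper's more elaborate computer-graphics fog model; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def add_fog(image: np.ndarray, density: float, fog_color: int = 200) -> np.ndarray:
    """Alpha-blend an image toward a uniform gray.

    A crude stand-in for the graphics-based fog the paper uses as a
    natural perturbation (hypothetical helper; the paper's actual fog
    model is more elaborate than a flat blend).
    """
    fog = np.full_like(image, fog_color)
    blended = (1.0 - density) * image.astype(np.float64) + density * fog
    return blended.astype(np.uint8)

# Sweeping the density lets a validator search for the smallest fog
# level that changes a detector's output on a given test image,
# probing the "small set of perturbations" failure mode the paper reports.
image = np.zeros((64, 32, 3), dtype=np.uint8)   # synthetic all-black test image
foggy = add_fog(image, density=0.5)
print(foggy[0, 0, 0])  # 100: halfway between black (0) and fog gray (200)
```

In a full experiment, one would feed both the clean and perturbed images to OpenCV's HOG person detector (`cv2.HOGDescriptor` with `cv2.HOGDescriptor_getDefaultPeopleDetector()`, via `detectMultiScale`) and compare the detections at increasing fog densities.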