Kathiroli Raja;Sudhakar Theerthagiri;Sriram Venkataraman Swaminathan;Sivassri Suresh;Gunasekaran Raja
{"title":"利用生成模型和自动编码器应对自动驾驶汽车中的对抗性威胁","authors":"Kathiroli Raja;Sudhakar Theerthagiri;Sriram Venkataraman Swaminathan;Sivassri Suresh;Gunasekaran Raja","doi":"10.1109/TCE.2024.3437419","DOIUrl":null,"url":null,"abstract":"The safety and security of Autonomous Vehicles (AVs) have been an active area of interest and study in recent years. To enable human behavior, Deep Learning (DL) and Machine Learning (ML) models are extensively used to make accurate decisions. However, the DL and ML models are susceptible to various attacks, like adversarial attacks, leading to miscalculated decisions. Existing solutions defend against adversarial attacks proactively or reactively. To improve the defense methodologies, we propose a novel hybrid Defense Strategy for Autonomous Vehicles against Adversarial Attacks (DSAA), incorporating both reactive and proactive measures with adversarial training with Neural Structured Learning (NSL) and a generative denoising autoencoder to remove the adversarial perturbations. In addition, a randomized channel that adds calculated noise to the model parameter is utilized to encounter white-box and black-box attacks. The experimental results demonstrate that the proposed DSAA effectively mitigates proactive and reactive attacks compared to other existing defense methods, showcasing its performance by achieving an average accuracy of 80.15%.","PeriodicalId":13208,"journal":{"name":"IEEE Transactions on Consumer Electronics","volume":"70 3","pages":"6216-6223"},"PeriodicalIF":4.3000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Harnessing Generative Modeling and Autoencoders Against Adversarial Threats in Autonomous Vehicles\",\"authors\":\"Kathiroli Raja;Sudhakar Theerthagiri;Sriram Venkataraman Swaminathan;Sivassri Suresh;Gunasekaran Raja\",\"doi\":\"10.1109/TCE.2024.3437419\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The safety and security of Autonomous Vehicles (AVs) have been an active area of interest and study in recent years. To enable human behavior, Deep Learning (DL) and Machine Learning (ML) models are extensively used to make accurate decisions. However, the DL and ML models are susceptible to various attacks, like adversarial attacks, leading to miscalculated decisions. Existing solutions defend against adversarial attacks proactively or reactively. To improve the defense methodologies, we propose a novel hybrid Defense Strategy for Autonomous Vehicles against Adversarial Attacks (DSAA), incorporating both reactive and proactive measures with adversarial training with Neural Structured Learning (NSL) and a generative denoising autoencoder to remove the adversarial perturbations. In addition, a randomized channel that adds calculated noise to the model parameter is utilized to encounter white-box and black-box attacks. 
The experimental results demonstrate that the proposed DSAA effectively mitigates proactive and reactive attacks compared to other existing defense methods, showcasing its performance by achieving an average accuracy of 80.15%.\",\"PeriodicalId\":13208,\"journal\":{\"name\":\"IEEE Transactions on Consumer Electronics\",\"volume\":\"70 3\",\"pages\":\"6216-6223\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Consumer Electronics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10623540/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Consumer Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10623540/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Harnessing Generative Modeling and Autoencoders Against Adversarial Threats in Autonomous Vehicles
The safety and security of Autonomous Vehicles (AVs) have been an active area of interest and study in recent years. To enable human-like behavior, Deep Learning (DL) and Machine Learning (ML) models are extensively used to make accurate decisions. However, DL and ML models are susceptible to various attacks, such as adversarial attacks, leading to miscalculated decisions. Existing solutions defend against adversarial attacks either proactively or reactively. To improve on these defense methodologies, we propose a novel hybrid Defense Strategy for Autonomous Vehicles against Adversarial Attacks (DSAA), combining reactive and proactive measures: adversarial training with Neural Structured Learning (NSL) and a generative denoising autoencoder that removes adversarial perturbations. In addition, a randomized channel that adds calculated noise to the model parameters is used to counter white-box and black-box attacks. The experimental results demonstrate that the proposed DSAA mitigates adversarial attacks both proactively and reactively more effectively than other existing defense methods, achieving an average accuracy of 80.15%.
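The abstract gives no implementation details, so the following is only a minimal sketch, assuming a TensorFlow/Keras setup, of the two defensive ideas it names: a denoising autoencoder trained to strip adversarial perturbations from inputs, and a "randomized channel" that perturbs model parameters with small noise. All layer sizes, the `stddev` noise scale, and the helper names are hypothetical; this is not the authors' DSAA code, and the NSL adversarial-training component is omitted.

```python
# Illustrative sketch only (not the paper's implementation).
import numpy as np
import tensorflow as tf


def build_denoising_autoencoder(input_dim=784, latent_dim=64):
    """A plain fully connected autoencoder; sizes are hypothetical."""
    inputs = tf.keras.Input(shape=(input_dim,))
    h = tf.keras.layers.Dense(256, activation="relu")(inputs)
    z = tf.keras.layers.Dense(latent_dim, activation="relu")(h)
    h = tf.keras.layers.Dense(256, activation="relu")(z)
    outputs = tf.keras.layers.Dense(input_dim, activation="sigmoid")(h)
    return tf.keras.Model(inputs, outputs)


# Train on (perturbed, clean) pairs so the autoencoder learns to map an
# adversarially perturbed input back toward its clean counterpart.
autoencoder = build_denoising_autoencoder()
autoencoder.compile(optimizer="adam", loss="mse")
# x_adv: adversarially perturbed inputs; x_clean: the corresponding clean inputs.
# Generating x_adv (e.g., with FGSM/PGD) is outside this sketch.
# autoencoder.fit(x_adv, x_clean, epochs=20, batch_size=128)


def add_parameter_noise(model, stddev=1e-3, seed=None):
    """Randomized channel (hypothetical form): add small Gaussian noise to every
    weight tensor so gradient-based (white-box) and query-based (black-box)
    attackers face a slightly different model on each call."""
    rng = np.random.default_rng(seed)
    noisy = [w + rng.normal(0.0, stddev, size=w.shape) for w in model.get_weights()]
    model.set_weights(noisy)
```

In such a pipeline, an incoming sample would first be passed through the trained autoencoder to remove perturbations, and the downstream classifier would have its parameters lightly randomized before prediction; the actual DSAA design may differ.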
Journal introduction:
The main focus for the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture or end use of mass market electronics, systems, software and services for consumers.