{"title":"Toward robust systems against sensor-based adversarial examples based on the criticalities of sensors.","authors":"Ade Kurniawan, Y. Ohsita, Masayuki Murata","doi":"10.1109/ICAIC60265.2024.10433806","DOIUrl":null,"url":null,"abstract":"In multi-sensor systems, certain sensors could have vulnerabilities that may be exploited to produce AEs. However, it is difficult to protect all sensor devices, because the risk of the existence of vulnerable sensor devices increases as the number of sensor devices increases. Therefore, we need a method to protect ML models even if a part of the sensors are compromised by the attacker. One approach is to detect the sensors used by the attacks and remove the detected sensors. However, such reactive defense method has limitations. If some critical sensors that are necessary to distinguish required states are compromised by the attacker, we cannot obtain the suitable output. In this paper, we discuss a strategy to make the system robust against AEs proactively. A system with enough redundancy can work after removing the features from the sensors used in the AEs. That is, we need a metric to check if the system has enough redundancy. In this paper, we define groups of sensors that might be compromised by the same attacker, and we propose a metric called criticality that indicates how important each group of sensors are for classification between two classes. Based on the criticality, we can make the system robust against sensor-based AEs by interactively adding sensors so as to decrease the criticality of any groups of sensors for the classes that must be distinguished.","PeriodicalId":517265,"journal":{"name":"2024 IEEE 3rd International Conference on AI in Cybersecurity (ICAIC)","volume":"26 5","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 IEEE 3rd International Conference on AI in Cybersecurity (ICAIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIC60265.2024.10433806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In multi-sensor systems, certain sensors may have vulnerabilities that can be exploited to produce adversarial examples (AEs). However, it is difficult to protect all sensor devices, because the risk that some sensor device is vulnerable grows as the number of sensor devices increases. Therefore, we need a method to protect ML models even if some of the sensors are compromised by an attacker. One approach is to detect the sensors used in an attack and remove the detected sensors. However, such reactive defense methods have limitations: if the attacker compromises critical sensors that are necessary to distinguish the required states, we cannot obtain a suitable output. In this paper, we discuss a strategy to proactively make the system robust against AEs. A system with enough redundancy can still work after removing the features from the sensors used in the AEs; thus, we need a metric to check whether the system has enough redundancy. In this paper, we define groups of sensors that might be compromised by the same attacker, and we propose a metric called criticality that indicates how important each group of sensors is for classification between two classes. Based on the criticality, we can make the system robust against sensor-based AEs by iteratively adding sensors so as to decrease the criticality of every group of sensors for the classes that must be distinguished.
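The abstract does not spell out how criticality is computed. Below is a minimal Python sketch of one plausible reading, in which the criticality of a sensor group for a class pair is the drop in pairwise classification accuracy when that group's features are removed. All helper names (`pairwise_accuracy`, `criticality`, `max_criticality`) and the choice of classifier are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a criticality metric for sensor groups, assuming
# criticality(G, a, b) = accuracy lost on distinguishing classes a and b
# when the feature columns of sensor group G are removed. The classifier
# choice and all helper names are assumptions for illustration only.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def pairwise_accuracy(X, y, a, b, keep_cols):
    """Cross-validated accuracy of separating classes a and b
    using only the feature columns listed in keep_cols."""
    if not keep_cols:
        return 0.5  # no features left: chance level for two classes
    mask = np.isin(y, [a, b])
    Xab, yab = X[np.ix_(mask, keep_cols)], y[mask]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, Xab, yab, cv=5).mean()


def criticality(X, y, group_cols, a, b):
    """How much the (a, b) classification depends on one sensor group:
    accuracy with all features minus accuracy without the group."""
    all_cols = list(range(X.shape[1]))
    rest = [c for c in all_cols if c not in group_cols]
    return (pairwise_accuracy(X, y, a, b, all_cols)
            - pairwise_accuracy(X, y, a, b, rest))


def max_criticality(X, y, groups, classes):
    """Worst-case criticality over all sensor groups and all pairs of
    classes that must be distinguished; a redundancy check can require
    this value to stay below some tolerance."""
    return max(
        criticality(X, y, g, a, b)
        for g in groups
        for a, b in combinations(classes, 2)
    )
```

Under this reading, the proactive design loop sketched in the abstract would add a candidate sensor (appending its feature columns to X and registering its group), recompute `max_criticality`, and stop once no single compromised group can make a required class pair indistinguishable.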