Adversarial Human Context Recognition: Evasion Attacks and Defenses

Abdulaziz Alajaji, Walter Gerych, kar 2402565399 ku, Luke Buquicchio, E. Agu, E. Rundensteiner

2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), June 2023. DOI: 10.1109/COMPSAC57700.2023.00036
Human Context Recognition (HCR) from smartphone sensor data is a crucial task for Context-Aware (CA) systems, such as those targeting the healthcare and security domains. HCR models deployed in the wild are susceptible to adversarial attacks, wherein an adversary perturbs input sensor values to cause malicious misclassifications. In this study, we demonstrate evasion attacks that can be perpetrated during model inference, in particular input perturbations that are adversarially calibrated to fool classifiers. In contrast to white-box methods, which require impractical levels of system access, black-box evasion attacks merely require the ability to query the model with arbitrary inputs. Specifically, we generate adversarial perturbations using only class confidence scores, as in the ZOO attack, or only class decisions, as in the HopSkipJump (HSJ) attack, threat models that correspond to plausible real-world attack scenarios. We empirically demonstrate that sophisticated adversarial evasion attacks can significantly impair the accuracy of HCR models, causing a drop of up to 60% in F1-score. We also propose RobustHCR, an innovative framework for demonstrating and defending against black-box evasion threats using a provable defense built on a duality-based network. RobustHCR makes reliable predictions whether or not its input is under attack, effectively mitigating the potential negative impact of adversarial attacks. Rigorous evaluation on both scripted and in-the-wild smartphone HCR datasets demonstrates that RobustHCR significantly improves the HCR model's robustness and protects it from possible evasion attacks while maintaining acceptable performance on "clean" inputs. In particular, an HCR model with integrated RobustHCR defenses experienced an F1-score reduction of about 3%, as opposed to a reduction of over 50% for an HCR model without a defense.
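The score-based (ZOO) and decision-based (HopSkipJump) attacks described above have reference implementations in IBM's open-source Adversarial Robustness Toolbox (ART). The sketch below is not the paper's code: the classifier architecture, feature dimensionality, number of context classes, and sensor value range are all illustrative assumptions standing in for a trained smartphone HCR model, but the query-only access pattern matches the black-box threat model the abstract describes.

# Illustrative black-box evasion attacks against a stand-in HCR classifier,
# using IBM's Adversarial Robustness Toolbox (ART). Not the paper's code:
# the architecture, feature size, and class count below are assumptions.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ZooAttack, HopSkipJump

N_FEATURES = 64   # assumed length of a smartphone sensor feature vector
N_CONTEXTS = 5    # assumed number of human-context classes

# Placeholder network standing in for a trained HCR classifier.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, N_CONTEXTS),
)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(N_FEATURES,),
    nb_classes=N_CONTEXTS,
    clip_values=(-1.0, 1.0),   # assumed normalized sensor range
)

x_clean = np.random.randn(8, N_FEATURES).astype(np.float32)  # placeholder sensor batch

# Score-based attack: queries only class confidence scores (ZOO).
zoo = ZooAttack(classifier=classifier, max_iter=20, nb_parallel=16,
                use_resize=False, use_importance=False)
x_adv_zoo = zoo.generate(x=x_clean)

# Decision-based attack: queries only the predicted label (HopSkipJump).
hsj = HopSkipJump(classifier=classifier, max_iter=20, max_eval=1000)
x_adv_hsj = hsj.generate(x=x_clean)

# Compare clean vs. adversarial predictions to count induced label flips.
pred_clean = classifier.predict(x_clean).argmax(axis=1)
pred_adv = classifier.predict(x_adv_hsj).argmax(axis=1)
print("HSJ label flips:", int((pred_clean != pred_adv).sum()), "of", len(x_clean))

In the paper's threat model, only this kind of query access is assumed; the up-to-60% F1-score drop reported above comes from perturbations of exactly this type, while RobustHCR's duality-based provable defense is what keeps the drop near 3%.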