Salah-ud-din Farooq, M. Usama, Junaid Qadir, M. Imran
2019 UK/China Emerging Technologies (UCET), August 2019. DOI: 10.1109/UCET.2019.8881842
Adversarial ML Attack on Self Organizing Cellular Networks
Deep Neural Networks (DNNs) have been widely adopted in self-organizing networks (SONs) for automating different networking tasks. Recently, it has been shown that DNNs lack robustness against adversarial examples, where an adversary can fool a DNN model into an incorrect classification by introducing a small, imperceptible perturbation to the original example. SONs are expected to use DNNs for multiple fundamental cellular tasks, and many DNN-based solutions for performing SON tasks have been proposed in the literature, but these have not been tested against adversarial examples. In this paper, we test and explain the robustness of a DNN-based SON solution against adversarial examples and investigate the performance of an important SON use case in the face of adversarial attacks. We also generate explanations of the incorrect classifications by utilizing an explainable artificial intelligence (AI) technique.
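To illustrate the kind of attack the abstract describes, the sketch below applies the fast gradient sign method (FGSM), a standard way of generating adversarial examples; the paper does not state that this is the exact attack it uses. A toy logistic-regression classifier with hypothetical weights stands in for a DNN-based SON model: a bounded perturbation in the direction that increases the loss flips the predicted class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression classifier standing in for a DNN-based SON
# model; the weights and input are hypothetical, for illustration only.
w = np.array([2.0, -3.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# Clean input, with true label y = 1.
x = np.array([0.5, 0.1])
y = 1

# Gradient of the cross-entropy loss with respect to the input:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: take a small step in the sign of the gradient, so every
# component of the perturbation is bounded by eps.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # clean input is classified as 1
print(predict(x_adv))  # the eps-bounded perturbation flips it to class 0
```

The perturbation changes each input feature by at most `eps`, which is why such attacks can remain imperceptible in the original feature space while still changing the model's decision.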