Josh McGiff, W. G. Hatcher, James H. Nguyen, Wei Yu, Erik Blasch, Chao Lu
{"title":"面向Android恶意软件检测的多模式学习","authors":"Josh McGiff, W. G. Hatcher, James H. Nguyen, Wei Yu, Erik Blasch, Chao Lu","doi":"10.1109/ICCNC.2019.8685502","DOIUrl":null,"url":null,"abstract":"As the worldwide adoption of smartphones nears complete saturation, the complexity and volume of mobile malware continue to progress, subverting what has become the largest set of targets with the most valuable rewards. At the same time, deep learning has become a powerful tool for data analysis and prediction, demonstrating significant efficacy in the detection of new and unforeseen malicious software through supervised learning. Despite these advances, the proper application of deep learning to malware detection could be further improved through additional data and model construction. In this work, we consider the detection of Android malware using static analysis techniques on multiple extractable data classes. Particularly, both Permission and Hardware Feature data are applied in various multimodal input scenarios and deep network shapes. Through experimental analysis, we demonstrate that the combination of both sets of data could improve overall performance, achieving as high as 94.5% classification accuracy. In addition, using only a limited grid search, we observe that the largest multimodal network requires the lowest time to train while achieving equivalent or greater accuracy compared with our other models.","PeriodicalId":161815,"journal":{"name":"2019 International Conference on Computing, Networking and Communications (ICNC)","volume":"32 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":"{\"title\":\"Towards Multimodal Learning for Android Malware Detection\",\"authors\":\"Josh McGiff, W. G. Hatcher, James H. 
Nguyen, Wei Yu, Erik Blasch, Chao Lu\",\"doi\":\"10.1109/ICCNC.2019.8685502\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the worldwide adoption of smartphones nears complete saturation, the complexity and volume of mobile malware continue to progress, subverting what has become the largest set of targets with the most valuable rewards. At the same time, deep learning has become a powerful tool for data analysis and prediction, demonstrating significant efficacy in the detection of new and unforeseen malicious software through supervised learning. Despite these advances, the proper application of deep learning to malware detection could be further improved through additional data and model construction. In this work, we consider the detection of Android malware using static analysis techniques on multiple extractable data classes. Particularly, both Permission and Hardware Feature data are applied in various multimodal input scenarios and deep network shapes. Through experimental analysis, we demonstrate that the combination of both sets of data could improve overall performance, achieving as high as 94.5% classification accuracy. 
In addition, using only a limited grid search, we observe that the largest multimodal network requires the lowest time to train while achieving equivalent or greater accuracy compared with our other models.\",\"PeriodicalId\":161815,\"journal\":{\"name\":\"2019 International Conference on Computing, Networking and Communications (ICNC)\",\"volume\":\"32 3\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Computing, Networking and Communications (ICNC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCNC.2019.8685502\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Computing, Networking and Communications (ICNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCNC.2019.8685502","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards Multimodal Learning for Android Malware Detection
As the worldwide adoption of smartphones nears complete saturation, the complexity and volume of mobile malware continue to progress, subverting what has become the largest set of targets with the most valuable rewards. At the same time, deep learning has become a powerful tool for data analysis and prediction, demonstrating significant efficacy in the detection of new and unforeseen malicious software through supervised learning. Despite these advances, the proper application of deep learning to malware detection could be further improved through additional data and model construction. In this work, we consider the detection of Android malware using static analysis techniques on multiple extractable data classes. Particularly, both Permission and Hardware Feature data are applied in various multimodal input scenarios and deep network shapes. Through experimental analysis, we demonstrate that the combination of both sets of data could improve overall performance, achieving as high as 94.5% classification accuracy. In addition, using only a limited grid search, we observe that the largest multimodal network requires the lowest time to train while achieving equivalent or greater accuracy compared with our other models.
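The multimodal idea in the abstract can be illustrated with a small sketch: two statically extractable data classes, Permissions and Hardware Features, are each encoded as binary indicator vectors and concatenated into a single input for a classifier. This is not the authors' code; the vocabularies and helper names below are hypothetical, and a real pipeline would enumerate all permissions and `uses-feature` entries observed across the training corpus.

```python
# Illustrative sketch (assumed, not from the paper): encoding two
# modalities of statically extracted Android manifest data --
# Permissions and Hardware Features -- as multi-hot vectors, then
# concatenating them into one multimodal network input.

# Hypothetical, truncated vocabularies for demonstration only.
PERMISSION_VOCAB = [
    "android.permission.INTERNET",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
]
HARDWARE_VOCAB = [
    "android.hardware.camera",
    "android.hardware.telephony",
]

def encode(items, vocab):
    """Multi-hot encode a set of manifest entries against a fixed vocabulary."""
    present = set(items)
    return [1 if v in present else 0 for v in vocab]

def multimodal_vector(permissions, hardware_features):
    """Concatenate the per-modality vectors into a single input vector."""
    return (encode(permissions, PERMISSION_VOCAB)
            + encode(hardware_features, HARDWARE_VOCAB))

# Example: an app requesting SMS and contacts access, declaring telephony.
x = multimodal_vector(
    ["android.permission.SEND_SMS", "android.permission.READ_CONTACTS"],
    ["android.hardware.telephony"],
)
print(x)  # [0, 1, 1, 0, 1]
```

In the paper's "multimodal input scenarios," such per-modality vectors could instead feed separate input branches of a deep network that are merged in later layers, rather than being concatenated up front; the sketch shows only the simplest combination.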