Neural Network Based Prediction of Terrorist Attacks Using Explainable Artificial Intelligence
Anna Rösner, A. Gegov, D. Ouelhadj, A. Hopgood, Serge Da Deppo
2023 IEEE Conference on Artificial Intelligence (CAI), June 2023
DOI: 10.1109/CAI54212.2023.00089
Abstract
AI has transformed the field of terrorism prediction, allowing law enforcement agencies to identify potential threats much more quickly and accurately. This paper proposes a first-time application of a neural network to predict the "success" of a terrorist attack. The neural network attains an accuracy of 91.66% and an F1 score of 0.954, higher than those achieved with alternative benchmark models. However, using AI for predictions in high-stakes decisions also has limitations, including possible biases and ethical concerns. Therefore, the explainable AI (XAI) tool LIME is used to provide more insight into the algorithm's inner workings.
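The abstract names LIME as the XAI tool but gives no implementation details. The following is a minimal sketch of the general pattern it describes, a tabular LIME explainer wrapped around a neural network classifier; the network architecture, feature names, and data below are illustrative assumptions (using scikit-learn's MLPClassifier and synthetic data), not the paper's actual setup.

# Sketch: explaining a neural network's "attack success" prediction with LIME.
# All feature names, the architecture, and the data are placeholder assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                        # placeholder attack features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # placeholder "success" label

# Train a small feed-forward network as the binary "success" classifier.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# LIME perturbs a single instance and fits a local linear surrogate model,
# showing which features drove the network's prediction for that instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=["weapon_type", "num_perpetrators", "region", "attack_type"],
    class_names=["failure", "success"],
    mode="classification",
)
explanation = explainer.explain_instance(
    X_train[0], clf.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, local weight) pairs

The printed list gives per-feature weights for this one prediction, which is the kind of instance-level insight the abstract refers to when it says LIME exposes the algorithm's inner workings.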