{"title":"人工智能和随机恐怖主义——应该这样做吗?","authors":"Bart Kemper","doi":"10.1109/ISSREW55968.2022.00091","DOIUrl":null,"url":null,"abstract":"The use of Artificial Intelligence and Machine Learning technology may seem to be the tools needed to combat media-inspired “lone wolf attacks” by implementing the concept of “stochastic terrorism,” targeting harmful media influences. Machine Learning is in current use to sort through social media data to assess hate speech. Artificial Intelligence is in current use to interpret the data and trends processed by Machine Learning for tasks such as finding criminal networks. The question becomes “can stochastic terrorism be proven” and “should this be implemented.” Labeling someone as a “terrorist,” regardless of any modifier for the term, tags the person or group for severe, potentially lethal, response by the government and the community. Criminal accusation cannot ethically be done casually or without sufficient cause. Due to documented problems with bias in all aspects of the issue, using these computational tools to establish legal causation between media statements by pundits, politicians, or others and the violence of “lone wolf” actors would not meet the requirements of US jurisprudence or the ethical principles for Artificial Intelligence of being explainable, transparent, and responsible.","PeriodicalId":178302,"journal":{"name":"2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI and Stochastic Terrorism – Should it be done?\",\"authors\":\"Bart Kemper\",\"doi\":\"10.1109/ISSREW55968.2022.00091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of Artificial Intelligence and Machine Learning technology may seem to be the tools needed to combat media-inspired 
“lone wolf attacks” by implementing the concept of “stochastic terrorism,” targeting harmful media influences. Machine Learning is in current use to sort through social media data to assess hate speech. Artificial Intelligence is in current use to interpret the data and trends processed by Machine Learning for tasks such as finding criminal networks. The question becomes “can stochastic terrorism be proven” and “should this be implemented.” Labeling someone as a “terrorist,” regardless of any modifier for the term, tags the person or group for severe, potentially lethal, response by the government and the community. Criminal accusation cannot ethically be done casually or without sufficient cause. Due to documented problems with bias in all aspects of the issue, using these computational tools to establish legal causation between media statements by pundits, politicians, or others and the violence of “lone wolf” actors would not meet the requirements of US jurisprudence or the ethical principles for Artificial Intelligence of being explainable, transparent, and responsible.\",\"PeriodicalId\":178302,\"journal\":{\"name\":\"2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Symposium on Software Reliability Engineering Workshops 
(ISSREW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISSREW55968.2022.00091\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSREW55968.2022.00091","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial Intelligence and Machine Learning may seem to be the tools needed to combat media-inspired "lone wolf attacks" by operationalizing the concept of "stochastic terrorism" and targeting harmful media influences. Machine Learning is currently used to sort through social media data and assess hate speech. Artificial Intelligence is currently used to interpret the data and trends that Machine Learning surfaces for tasks such as uncovering criminal networks. The questions become "can stochastic terrorism be proven?" and "should this be implemented?" Labeling someone a "terrorist," regardless of any modifier attached to the term, marks that person or group for a severe, potentially lethal response from the government and the community. A criminal accusation cannot ethically be made casually or without sufficient cause. Given documented problems with bias in every aspect of the issue, using these computational tools to establish legal causation between media statements by pundits, politicians, or others and the violence of "lone wolf" actors would not meet the requirements of US jurisprudence or the ethical principles that Artificial Intelligence be explainable, transparent, and responsible.
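The kind of Machine Learning text classification the abstract refers to can be illustrated with a minimal sketch. This is not the method of any system discussed in the paper; it is a toy multinomial Naive Bayes classifier over invented placeholder examples, included only to show the general mechanism (and, implicitly, how sensitive its output is to whatever training labels it is given).

```python
# Toy multinomial Naive Bayes text classifier (stdlib only).
# The training samples and labels below are invented placeholders,
# not real data from any moderation system.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs. Returns (label_counts, token_counts, vocab)."""
    label_counts = Counter()
    token_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for tok in tokenize(text):
            token_counts[label][tok] += 1
            vocab.add(tok)
    return label_counts, token_counts, vocab

def classify(model, text):
    """Return the label with the highest log-posterior under Laplace smoothing."""
    label_counts, token_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # log-prior
        denom = sum(token_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            # add-one (Laplace) smoothing so unseen tokens don't zero out the score
            score += math.log((token_counts[label][tok] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

samples = [
    ("you people are vermin", "flagged"),
    ("they should all be removed", "flagged"),
    ("great game last night", "benign"),
    ("looking forward to the weekend", "benign"),
]
model = train(samples)
print(classify(model, "those vermin should be removed"))  # "flagged" on this toy data
```

The classifier only reproduces the statistical regularities of its labeled examples, which is precisely where the documented bias problems the abstract cites enter: whoever supplies the labels determines what the model "finds."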