The HEIC application framework for implementing XAI-based socio-technical systems
{"title":"用于实现基于xai的社会技术系统的HEIC应用框架","authors":"Jose N. Paredes , Juan Carlos L. Teze , Maria Vanina Martinez , Gerardo I. Simari","doi":"10.1016/j.osnem.2022.100239","DOIUrl":null,"url":null,"abstract":"<div><p><span><span>The development of data-driven Artificial Intelligence<span> systems has seen successful application in diverse domains related to social platforms; however, many of these systems cannot explain the rationale behind their decisions. This is a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior on social platforms is a clear example. In light of this problem, in this paper we make several contributions: (i) a proposal of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a review of approaches in the literature on </span></span>Explainable AI (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; (iii) the </span><em>Hybrid Explainable and Interpretable Cybersecurity</em><span> (HEIC) application framework that can serve as a roadmap for guiding R&D efforts towards XAI-based socio-technical systems; (iv) an example instantiation of the proposed framework in a news recommendation setting, where a portion of news articles are assumed to be fake news; and (v) exploration of various types of explanations that can help different kinds of users to identify real vs. fake news in social platform settings.</span></p></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"32 ","pages":"Article 100239"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"The HEIC application framework for implementing XAI-based socio-technical systems\",\"authors\":\"Jose N. Paredes , Juan Carlos L. Teze , Maria Vanina Martinez , Gerardo I. Simari\",\"doi\":\"10.1016/j.osnem.2022.100239\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span><span>The development of data-driven Artificial Intelligence<span> systems has seen successful application in diverse domains related to social platforms; however, many of these systems cannot explain the rationale behind their decisions. This is a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior on social platforms is a clear example. In light of this problem, in this paper we make several contributions: (i) a proposal of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a review of approaches in the literature on </span></span>Explainable AI (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; (iii) the </span><em>Hybrid Explainable and Interpretable Cybersecurity</em><span> (HEIC) application framework that can serve as a roadmap for guiding R&D efforts towards XAI-based socio-technical systems; (iv) an example instantiation of the proposed framework in a news recommendation setting, where a portion of news articles are assumed to be fake news; and (v) exploration of various types of explanations that can help different kinds of users to identify real vs. 
fake news in social platform settings.</span></p></div>\",\"PeriodicalId\":52228,\"journal\":{\"name\":\"Online Social Networks and Media\",\"volume\":\"32 \",\"pages\":\"Article 100239\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Social Networks and Media\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468696422000416\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696422000416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Jose N. Paredes, Juan Carlos L. Teze, Maria Vanina Martinez, Gerardo I. Simari
Online Social Networks and Media, Volume 32, Article 100239, November 2022
Data-driven Artificial Intelligence systems have been applied successfully in diverse domains related to social platforms; however, many of these systems cannot explain the rationale behind their decisions. This is a major drawback, especially in critical domains such as cybersecurity, of which malicious behavior on social platforms is a clear example. In light of this problem, this paper makes several contributions: (i) a proposal of desiderata for explaining the outputs generated by AI-based cybersecurity systems; (ii) a review of the literature on Explainable AI (XAI) under the lens of both our desiderata and further dimensions typically used to examine XAI approaches; (iii) the Hybrid Explainable and Interpretable Cybersecurity (HEIC) application framework, which can serve as a roadmap for guiding R&D efforts towards XAI-based socio-technical systems; (iv) an example instantiation of the proposed framework in a news recommendation setting, where a portion of news articles are assumed to be fake news; and (v) an exploration of the types of explanations that can help different kinds of users distinguish real from fake news in social platform settings.
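To give a rough sense of the kind of hybrid pipeline suggested by contributions (iii) through (v), a data-driven detector paired with a transparent explanation layer, consider the following minimal Python sketch. It is not the authors' HEIC implementation; the toy corpus, the model choice (TF-IDF features with logistic regression), and the explain_prediction helper are all hypothetical stand-ins chosen only to illustrate how a per-article explanation could be surfaced to a user.

    # Hypothetical sketch: a statistical fake-news flagger whose decisions
    # are explained by the terms that contributed most to the score.
    # Placeholder data and names; not the paper's actual system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy labeled corpus: 1 = fake, 0 = real.
    articles = [
        "miracle cure doctors hate revealed in shocking leak",
        "city council approves budget for road maintenance",
        "celebrity secretly replaced by clone, insiders claim",
        "central bank holds interest rates steady this quarter",
    ]
    labels = [1, 0, 1, 0]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(articles)
    clf = LogisticRegression().fit(X, labels)

    def explain_prediction(article, top_k=3):
        """Return a label plus the terms that pushed the score hardest,
        i.e., a simple transparent explanation for an end user."""
        x = vectorizer.transform([article])
        label = int(clf.predict(x)[0])
        # Per-term contribution = tf-idf weight * learned coefficient.
        contrib = x.toarray()[0] * clf.coef_[0]
        terms = vectorizer.get_feature_names_out()
        # Sort by contribution toward the predicted class.
        order = contrib.argsort()[::-1] if label == 1 else contrib.argsort()
        evidence = [(terms[i], round(float(contrib[i]), 3)) for i in order[:top_k]]
        return ("fake" if label else "real"), evidence

    print(explain_prediction("shocking miracle cure the government hides"))

A linear model is used here only because its feature weights are directly inspectable; a fuller instantiation in the spirit of the paper's hybrid framework could combine such a data-driven component with logic-based reasoning to produce richer explanations for different kinds of users.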