Explaining Sentiments: Improving Explainability in Sentiment Analysis Using Local Interpretable Model-Agnostic Explanations and Counterfactual Explanations
{"title":"Explaining Sentiments: Improving Explainability in Sentiment Analysis Using Local Interpretable Model-Agnostic Explanations and Counterfactual Explanations","authors":"Xin Wang;Jianhui Lyu;J. Dinesh Peter;Byung-Gyu Kim;B.D. Parameshachari;Keqin Li;Wei Wei","doi":"10.1109/TCSS.2025.3531718","DOIUrl":null,"url":null,"abstract":"Sentiment analysis of social media platforms is crucial for extracting actionable insights from unstructured textual data. However, modern sentiment analysis models using deep learning lack explainability, acting as black box and limiting trust. This study focuses on improving the explainability of sentiment analysis models of social media platforms by leveraging explainable artificial intelligence (XAI). We propose a novel explainable sentiment analysis (XSA) framework incorporating intrinsic and posthoc XAI methods, i.e., local interpretable model-agnostic explanations (LIME) and counterfactual explanations. Specifically, to solve the problem of lack of local fidelity and stability in interpretations caused by the LIME random perturbation sampling method, a new model-independent interpretation method is proposed, which uses the isometric mapping virtual sample generation method based on manifold learning instead of LIMEs random perturbation sampling method to generate samples. Additionally, a generative link tree is presented to create counterfactual explanations that maintain strong data fidelity, which constructs counterfactual narratives by leveraging examples from the training data, employing a divide-and-conquer strategy combined with local greedy. Experiments conducted on social media datasets from Twitter, YouTube comments, Yelp, and Amazon demonstrate XSAs ability to provide local aspect-level explanations while maintaining sentiment analysis performance. Analyses reveal improved model explainability and enhanced user trust, demonstrating XAIs potential in sentiment analysis of social media platforms. The proposed XSA framework provides a valuable direction for developing transparent and trustworthy sentiment analysis models for social media platforms.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1390-1403"},"PeriodicalIF":4.5000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Social Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10955494/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Abstract
Sentiment analysis of social media platforms is crucial for extracting actionable insights from unstructured textual data. However, modern deep-learning-based sentiment analysis models lack explainability, acting as black boxes and limiting trust. This study focuses on improving the explainability of sentiment analysis models for social media platforms by leveraging explainable artificial intelligence (XAI). We propose a novel explainable sentiment analysis (XSA) framework incorporating intrinsic and post hoc XAI methods, i.e., local interpretable model-agnostic explanations (LIME) and counterfactual explanations. Specifically, to address the lack of local fidelity and stability in interpretations caused by LIME's random perturbation sampling, a new model-agnostic interpretation method is proposed that generates samples using an isometric-mapping virtual sample generation method based on manifold learning instead of LIME's random perturbations. Additionally, a generative link tree is presented to create counterfactual explanations that maintain strong data fidelity: it constructs counterfactual narratives from examples in the training data, employing a divide-and-conquer strategy combined with a local greedy search. Experiments conducted on social media datasets from Twitter, YouTube comments, Yelp, and Amazon demonstrate XSA's ability to provide local aspect-level explanations while maintaining sentiment analysis performance. Analyses reveal improved model explainability and enhanced user trust, demonstrating XAI's potential in sentiment analysis of social media platforms. The proposed XSA framework provides a valuable direction for developing transparent and trustworthy sentiment analysis models for social media platforms.
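The abstract's central methodological change is replacing LIME's random feature perturbations with manifold-consistent virtual samples. Below is a minimal sketch of that idea, assuming a generic feature-vector representation of documents and a placeholder `black_box_predict` scoring function; it uses scikit-learn's Isomap and illustrates the concept only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): generate locality samples on the
# data manifold via isometric mapping (Isomap) instead of LIME's random
# perturbations, then fit a locally weighted linear surrogate.
# `black_box_predict` and the feature-vector inputs are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.manifold import Isomap
from sklearn.neighbors import NearestNeighbors

def isomap_virtual_samples(X_train, x0, n_samples=200, n_neighbors=10, seed=0):
    """Create virtual samples near x0 as convex combinations of its
    manifold neighbors, so samples stay close to the data manifold."""
    rng = np.random.default_rng(seed)
    iso = Isomap(n_neighbors=n_neighbors, n_components=2)
    Z = iso.fit_transform(X_train)              # low-dimensional manifold coords
    z0 = iso.transform(x0.reshape(1, -1))       # embed the instance to explain
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(Z)
    _, idx = nn.kneighbors(z0)                  # neighbors in manifold space
    neighbors = X_train[idx[0]]
    w = rng.dirichlet(np.ones(n_neighbors), size=n_samples)
    return w @ neighbors                        # (n_samples, n_features)

def explain_instance(black_box_predict, X_train, x0):
    """Fit a locally weighted linear surrogate on manifold-consistent
    samples and return its coefficients as per-feature attributions."""
    Xv = isomap_virtual_samples(X_train, x0)
    y = black_box_predict(Xv)                   # black-box sentiment scores
    d = np.linalg.norm(Xv - x0, axis=1)
    kernel = np.exp(-(d ** 2) / (2 * d.std() ** 2 + 1e-12))  # locality weights
    surrogate = Ridge(alpha=1.0).fit(Xv, y, sample_weight=kernel)
    return surrogate.coef_
```

Because the virtual samples are interpolations of real neighbors in the Isomap embedding, they avoid the off-manifold points that random perturbation can produce, which is the stated source of LIME's instability.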
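The generative link tree itself is more elaborate than anything shown here; the hedged sketch below only conveys the example-based flavor the abstract describes: counterfactuals are assembled from real training examples by a greedy sequence of single-feature edits, keeping every step grounded in observed data. `predict_proba` and the array-based representation are assumptions for illustration.

```python
# Hedged sketch of example-based greedy counterfactual search; the paper's
# generative link tree with divide-and-conquer is more sophisticated.
# `predict_proba` is a placeholder returning probabilities of shape
# (n_samples, n_classes).
import numpy as np

def greedy_counterfactual(predict_proba, x0, X_train, y_train, target, max_edits=10):
    cf = x0.copy()
    donors = X_train[y_train == target]          # real examples of target class
    donor = donors[np.argmin(np.linalg.norm(donors - x0, axis=1))]
    for _ in range(max_edits):
        if predict_proba(cf.reshape(1, -1))[0, target] >= 0.5:
            return cf                            # prediction flipped: done
        candidates = np.where(cf != donor)[0]
        if candidates.size == 0:
            break                                # nothing left to copy
        trials = np.repeat(cf[None, :], candidates.size, axis=0)
        trials[np.arange(candidates.size), candidates] = donor[candidates]
        gains = predict_proba(trials)[:, target] # score each single edit
        j = candidates[np.argmax(gains)]
        cf[j] = donor[j]                         # commit the best edit
    return None                                  # no counterfactual in budget
```

Drawing edits from a nearest real example of the target class is one simple way to preserve the "strong data fidelity" the abstract attributes to the generative link tree: the counterfactual never contains feature values unseen in the training data.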
Journal Introduction
IEEE Transactions on Computational Social Systems focuses on such topics as modeling, simulation, analysis, and understanding of social systems from the quantitative and/or computational perspective. "Systems" include man-man, man-machine, and machine-machine organizations and adversarial situations, as well as social media structures and their dynamics. More specifically, the transactions publishes articles on modeling the dynamics of social systems, methodologies for incorporating and representing socio-cultural and behavioral aspects in computational modeling, analysis of social system behavior and structure, and paradigms for social systems modeling and simulation. The journal also features articles on social network dynamics, social intelligence and cognition, social systems design and architectures, socio-cultural modeling and representation, computational behavior modeling, and their applications.