{"title":"Aspect-enhanced Explainable Recommendation with Multi-modal Contrastive Learning","authors":"Hao Liao, Shuo Wang, Hao Cheng, Wei Zhang, Jiwei Zhang, Mingyang Zhou, Kezhong Lu, Rui Mao, Xing Xie","doi":"10.1145/3673234","DOIUrl":"https://doi.org/10.1145/3673234","url":null,"abstract":"<p>Explainable recommender systems (<b>ERS</b>) aim to enhance users’ trust in the systems by offering personalized recommendations with transparent explanations. This transparency provides users with a clear understanding of the rationale behind the recommendations, fostering a sense of confidence and reliability in the system’s outputs. Generally, the explanations are presented in a familiar and intuitive way, which is in the form of natural language, thus enhancing their accessibility to users. Recently, there has been an increasing focus on leveraging reviews as a valuable source of rich information in both modeling user-item preferences and generating textual interpretations, which can be performed simultaneously in a multi-task framework. Despite the progress made in these review-based recommendation systems, the integration of implicit feedback derived from user-item interactions and user-written text reviews has yet to be fully explored. To fill this gap, we propose a model named <b>SERMON</b> (A<b><underline>s</underline></b>pect-enhanced <b><underline>E</underline></b>xplainable <b><underline>R</underline></b>ecommendation with <b><underline>M</underline></b>ulti-modal C<b><underline>o</underline></b>ntrast Lear<b><underline>n</underline></b>ing). Our model explores the application of multimodal contrastive learning to facilitate reciprocal learning across two modalities, thereby enhancing the modeling of user preferences. Moreover, our model incorporates the aspect information extracted from the review, which provides two significant enhancements to our tasks. Firstly, the quality of the generated explanations is improved by incorporating the aspect characteristics into the explanations generated by a pre-trained model with controlled textual generation ability. Secondly, the commonly used user-item interactions are transformed into user-item-aspect interactions, which we refer to as interaction triple, resulting in a more nuanced representation of user preference. To validate the effectiveness of our model, we conduct extensive experiments on three real-world datasets. The experimental results show that our model outperforms state-of-the-art baselines, with a 2.0% improvement in prediction accuracy and a substantial 24.5% enhancement in explanation quality for the TripAdvisor dataset.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"114 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141508268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explaining Neural News Recommendation with Attributions onto Reading Histories","authors":"Lucas Möller, Sebastian Padó","doi":"10.1145/3673233","DOIUrl":"https://doi.org/10.1145/3673233","url":null,"abstract":"<p>An important aspect of responsible recommendation systems is the transparency of the prediction mechanisms. This is a general challenge for deep-learning-based systems such as the currently predominant neural news recommender architectures which are optimized to predict clicks by matching candidate news items against users’ reading histories. Such systems achieve state-of-the-art click-prediction performance, but the rationale for their decisions is difficult to assess. At the same time, the economic and societal impact of these systems makes such insights very much desirable.</p><p>In this paper, we ask the question to what extent the recommendations of current news recommender systems are actually based on content-related evidence from reading histories. We approach this question from an explainability perspective. Building on the concept of integrated gradients, we present a neural news recommender that can accurately attribute individual recommendations to news items and words in input reading histories while maintaining a top scoring click-prediction performance.</p><p>Using our method as a diagnostic tool, we find that: (a), a substantial number of users’ clicks on news are not explainable from reading histories, and many history-explainable items are actually skipped; (b), while many recommendations are based on content-related evidence in histories, for others the model does not attend to reasonable evidence, and recommendations stem from a spurious bias in user representations. Our code is publicly available<sup>1</sup>.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"145 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Social Cognition Ability Evaluation of LLMs: A Dynamic Gamified Assessment and Hierarchical Social Learning Measurement Approach","authors":"Qin Ni, Yangze Yu, Yiming Ma, Xin Lin, Ciping Deng, Tingjiang Wei, Mo Xuan","doi":"10.1145/3673238","DOIUrl":"https://doi.org/10.1145/3673238","url":null,"abstract":"<p>Large Language Model(LLM) has shown amazing abilities in reasoning tasks, theory of mind(ToM) has been tested in many studies as part of reasoning tasks, and social learning, which is closely related to theory of mind, are still lack of investigation. However, the test methods and materials make the test results unconvincing. We propose a dynamic gamified assessment(DGA) and hierarchical social learning measurement to test ToM and social learning capacities in LLMs. The test for ToM consists of five parts. First, we extract ToM tasks from ToM experiments and then design game rules to satisfy the ToM task requirement. After that, we design ToM questions to match the game’s rules and use these to generate test materials. Finally, we go through the above steps to test the model. To assess the social learning ability, we introduce a novel set of social rules (three in total). Experiment results demonstrate that, except GPT-4, LLMs performed poorly on the ToM test but showed a certain level of social learning ability in social learning measurement.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"90 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141508270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Misinformation Resilient Search Rankings with Webgraph-based Interventions","authors":"Peter Carragher, Evan M. Williams, Kathleen M. Carley","doi":"10.1145/3670410","DOIUrl":"https://doi.org/10.1145/3670410","url":null,"abstract":"<p>The proliferation of unreliable news domains on the internet has had wide-reaching negative impacts on society. We introduce and evaluate interventions aimed at reducing traffic to unreliable news domains from search engines while maintaining traffic to reliable domains. We build these interventions on the principles of fairness (penalize sites for what is in their control), generality (label/fact-check agnostic), targeted (increase the cost of adversarial behavior), and scalability (works at webscale). We refine our methods on small-scale webdata as a testbed and then generalize the interventions to a large-scale webgraph containing 93.9M domains and 1.6B edges. We demonstrate that our methods penalize unreliable domains far more than reliable domains in both settings and we explore multiple avenues to mitigate unintended effects on both the small-scale and large-scale webgraph experiments. These results indicate the potential of our approach to reduce the spread of misinformation and foster a more reliable online information ecosystem. This research contributes to the development of targeted strategies to enhance the trustworthiness and quality of search engine results, ultimately benefiting users and the broader digital community.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"26 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-Preserving and Diversity-Aware Trust-based Team Formation in Online Social Networks","authors":"Yash Mahajan, Jin-Hee Cho, Ing-Ray Chen","doi":"10.1145/3670411","DOIUrl":"https://doi.org/10.1145/3670411","url":null,"abstract":"<p>As online social networks (OSNs) become more prevalent, a new paradigm for problem-solving through crowd-sourcing has emerged. By leveraging the OSN platforms, users can post a problem to be solved and then form a team to collaborate and solve the problem. A common concern in OSNs is how to form effective collaborative teams, as various tasks are completed through online collaborative networks. A team’s diversity in expertise has received high attention to producing high team performance in developing team formation (TF) algorithms. However, the effect of team diversity on performance under different types of tasks has not been extensively studied. Another important issue is how to balance the need to preserve individuals’ privacy with the need to maximize performance through active collaboration, as these two goals may conflict with each other. This research has not been actively studied in the literature. In this work, we develop a team formation (TF) algorithm in the context of OSNs that can maximize team performance and preserve team members’ privacy under different types of tasks. Our proposed <underline>PR</underline>iv<underline>A</underline>cy-<underline>D</underline>iversity-<underline>A</underline>ware <underline>T</underline>eam <underline>F</underline>ormation framework, called <monospace>PRADA-TF</monospace>, is based on trust relationships between users in OSNs where trust is measured based on a user’s expertise and privacy preference levels. The PRADA-TF algorithm considers the team members’ domain expertise, privacy preferences, and the team’s expertise diversity in the process of team formation. Our approach employs game-theoretic principles <i>Mechanism Design</i> to motivate self-interested individuals within a team formation context, positioning the mechanism designer as the pivotal team leader responsible for assembling the team. We use two real-world datasets (i.e., Netscience and IMDb) to generate different semi-synthetic datasets for constructing trust networks using a belief model (i.e., Subjective Logic) and identifying trustworthy users as candidate team members. We evaluate the effectiveness of our proposed <monospace>PRADA-TF</monospace> scheme in four variants against three baseline methods in the literature. Our analysis focuses on three performance metrics for studying OSNs: social welfare, privacy loss, and team diversity.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"8 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141256078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ranking the Transferability of Adversarial Examples","authors":"Moshe Levy, Guy Amit, Yuval Elovici, Yisroel Mirsky","doi":"10.1145/3670409","DOIUrl":"https://doi.org/10.1145/3670409","url":null,"abstract":"<p>Adversarial transferability in blackbox scenarios presents a unique challenge: while attackers can employ surrogate models to craft adversarial examples, they lack assurance on whether these examples will successfully compromise the target model. Until now, the prevalent method to ascertain success has been trial and error—testing crafted samples directly on the victim model. This approach, however, risks detection with every attempt, forcing attackers to either perfect their first try or face exposure.</p><p>Our paper introduces a ranking strategy that refines the transfer attack process, enabling the attacker to estimate the likelihood of success without repeated trials on the victim’s system. By leveraging a set of diverse surrogate models, our method can predict transferability of adversarial examples. This strategy can be used to either select the best sample to use in an attack or the best perturbation to apply to a specific sample.</p><p>Using our strategy, we were able to raise the transferability of adversarial examples from a mere 20%—akin to random selection—up to near upper-bound levels, with some scenarios even witnessing a 100% success rate. This substantial improvement not only sheds light on the shared susceptibilities across diverse architectures but also demonstrates that attackers can forego the detectable trial-and-error tactics raising increasing the threat of surrogate-based attacks.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"71 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141256351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DNSRF: Deep Network-based Semi-NMF Representation Framework","authors":"Dexian Wang, Tianrui Li, Ping Deng, Zhipeng Luo, Pengfei Zhang, Keyu Liu, Wei Huang","doi":"10.1145/3670408","DOIUrl":"https://doi.org/10.1145/3670408","url":null,"abstract":"<p>Representation learning is an important topic in machine learning, pattern recognition, and data mining research. Among many representation learning approaches, semi-nonnegative matrix factorization (SNMF) is a frequently-used one. However, a typical problem of SNMF is that usually there is no learning rate guidance during the optimization process, which often leads to a poor representation ability. To overcome this limitation, we propose a very general representation learning framework (DNSRF) that is based on deep neural net. Essentially, the parameters of the deep net used to construct the DNSRF algorithms are obtained by matrix element update. In combination with different activation functions, DNSRF can be implemented in various ways. In our experiments, we tested nine instances of our DNSRF framework on six benchmark datasets. In comparison with other state-of-the-art methods, the results demonstrate superior performance of our framework, which is thus shown to have a great representation ability.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"53 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141256071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Faithfulness and Factuality with Contrastive Learning in Explainable Recommendation","authors":"Haojie Zhuang, Wei Zhang, Weitong Chen, Jian Yang, Quan Z. Sheng","doi":"10.1145/3653984","DOIUrl":"https://doi.org/10.1145/3653984","url":null,"abstract":"<p>Recommender systems have become increasingly important in navigating the vast amount of information and options available in various domains. By tailoring and personalizing recommendations to user preferences and interests, these systems improve the user experience, efficiency and satisfaction. With a growing demand for transparency and understanding of recommendation outputs, explainable recommender systems have gained growing attention in recent years. Additionally, as user reviews could be considered the rationales behind why the user likes (or dislikes) the products, generating informative and reliable reviews alongside recommendations has thus emerged as a research focus in explainable recommendation. However, the model-generated reviews might contain factual inconsistent contents (i.e., the hallucination issue), which would thus compromise the recommendation rationales. To address this issue, we propose a contrastive learning framework to improve the faithfulness and factuality in explainable recommendation in this paper. We further develop different strategies of generating positive and negative examples for contrastive learning, such as back-translation or synonym substitution for positive examples, and editing positive examples or utilizing model-generated texts for negative examples. Our proposed method optimizes the model to distinguish faithful explanations (i.e., positive examples) and unfaithful ones with factual errors (i.e., negative examples), which thus drives the model to generate faithful reviews as explanations while avoiding inconsistent contents. Extensive experiments and analysis on three benchmark datasets show that our proposed model outperforms other review generation baselines in faithfulness and factuality. In addition, the proposed contrastive learning component could be easily incorporated into other explainable recommender systems in a plug-and-play manner.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"24 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141146457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Federated Social Recommendation Approach with Enhanced Hypergraph Neural Network","authors":"Hongliang Sun, Zhiying Tu, Dianbo Sui, Bolin Zhang, Xiaofei Xu","doi":"10.1145/3665931","DOIUrl":"https://doi.org/10.1145/3665931","url":null,"abstract":"<p>In recent years, the development of online social network platforms has led to increased research efforts in social recommendation systems. Unlike traditional recommendation systems, social recommendation systems utilize both user-item interactions and user-user social relations to recommend relevant items, taking into account social homophily and social influence. Graph neural network (GNN) based social recommendation methods have been proposed to model these item interactions and social relations effectively. However, existing GNN-based methods rely on centralized training, which raises privacy concerns and faces challenges in data collection due to regulations and privacy restrictions. Federated learning has emerged as a privacy-preserving alternative. Combining federated learning with GNN-based methods for social recommendation can leverage their respective advantages, but it also introduces new challenges: 1) existing federated recommendation systems often lack the capability to process heterogeneous data, such as user-item interactions and social relations; 2) due to the sparsity of data distributed across different clients, capturing the higher-order relationship information among users becomes challenging and is often overlooked by most federated recommendation systems. To overcome these challenges, we propose a federated social recommendation approach with enhanced hypergraph neural network. We introduce hypergraph graph neural networks (HGNN) to learn user and item embeddings in federated recommendation systems, leveraging the hypergraph structure to address the heterogeneity of data. Based on carefully crafted triangular motifs, we merge user and item nodes to construct hypergraphs on local clients, capturing specific triangular relations. Multiple HGNN channels are used to encode different categories of high-order relations, and an attention mechanism is applied to aggregate the embedded information from these channels. Our experiments on real-world social recommendation datasets demonstrate the effectiveness of the proposed approach. Extensive experiment results on three publicly available datasets validate the effectiveness of the proposed method.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"30 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141146469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fairness and Diversity in Recommender Systems: A Survey","authors":"Yuying Zhao, Yu Wang, Yunchao Liu, Xueqi Cheng, Charu C. Aggarwal, Tyler Derr","doi":"10.1145/3664928","DOIUrl":"https://doi.org/10.1145/3664928","url":null,"abstract":"<p>Recommender systems (RS) are effective tools for mitigating information overload and have seen extensive applications across various domains. However, the single focus on utility goals proves to be inadequate in addressing real-world concerns, leading to increasing attention to fairness-aware and diversity-aware RS. While most existing studies explore fairness and diversity independently, we identify strong connections between these two domains. In this survey, we first discuss each of them individually and then dive into their connections. Additionally, motivated by the concepts of user-level and item-level fairness, we broaden the understanding of diversity to encompass not only the item level but also the user level. With this expanded perspective on user and item-level diversity, we re-interpret fairness studies from the viewpoint of diversity. This fresh perspective enhances our understanding of fairness-related work and paves the way for potential future research directions. Papers discussed in this survey along with public code links are available at: https://github.com/YuyingZhao/Awesome-Fairness-and-Diversity-Papers-in-Recommender-Systems.</p>","PeriodicalId":48967,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology","volume":"32 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141146512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}