{"title":"Personalized Explainable Recommendations for Self-Attention Collaboration","authors":"Yongfu Zha, Xuanxuan Che, Lina Sun, Yumin Dong","doi":"10.1145/3657636","DOIUrl":null,"url":null,"abstract":"<p>In recommender systems, providing reasonable explanations can enhance users’ comprehension of recommended results. Template-based explainable recommendation heavily relies on pre-defined templates, constraining the expressiveness of generated sentences and resulting in low-quality explanations. Recently, a novel approach was introduced, utilizing embedding representations of items and comments to address the issue of user IDs and item IDs not residing in the same semantic space as words, thus attributing linguistic meaning to IDs. However, these models often fail to fully exploit collaborative information within the data. In personalized recommendation and explanation processes, understanding the user’s emotional feedback and feature preferences is paramount. To address this, we propose a personalized explainable recommendation model based on self-attention collaboration. Initially, the model employs an attention network to amalgamate the user’s historical interaction feature preferences with their user ID information, while simultaneously integrating all feature information of the item with its item ID to enhance semantic ID representation. Subsequently, the model incorporates the user’s comment feature rhetoric and sentiment feedback to generate more personalized recommendation explanations utilizing a self-attention network. Experimental evaluations conducted on two datasets of varying scales demonstrate the superiority of our model over current state-of-the-art approaches, validating its effectiveness.</p>","PeriodicalId":54312,"journal":{"name":"ACM Transactions on Asian and Low-Resource Language Information Processing","volume":"58 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Asian and Low-Resource Language Information Processing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3657636","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
In recommender systems, providing reasonable explanations can enhance users’ comprehension of recommended results. Template-based explainable recommendation heavily relies on pre-defined templates, constraining the expressiveness of generated sentences and resulting in low-quality explanations. Recently, a novel approach was introduced, utilizing embedding representations of items and comments to address the issue of user IDs and item IDs not residing in the same semantic space as words, thus attributing linguistic meaning to IDs. However, these models often fail to fully exploit collaborative information within the data. In personalized recommendation and explanation processes, understanding the user’s emotional feedback and feature preferences is paramount. To address this, we propose a personalized explainable recommendation model based on self-attention collaboration. Initially, the model employs an attention network to amalgamate the user’s historical interaction feature preferences with their user ID information, while simultaneously integrating all feature information of the item with its item ID to enhance semantic ID representation. Subsequently, the model incorporates the user’s comment feature rhetoric and sentiment feedback to generate more personalized recommendation explanations utilizing a self-attention network. Experimental evaluations conducted on two datasets of varying scales demonstrate the superiority of our model over current state-of-the-art approaches, validating its effectiveness.
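The abstract describes two components: an attention network that fuses a user or item ID embedding with its associated feature embeddings, and a self-attention (Transformer-style) generator that produces the explanation conditioned on the fused user and item representations. The following is a minimal PyTorch sketch of that pipeline; the class names, dimensions, residual fusion step, and decoder configuration are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of ID-feature fusion plus self-attention explanation generation,
# loosely following the abstract; hyperparameters and structure are assumptions.
import torch
import torch.nn as nn


class IDFeatureFusion(nn.Module):
    """Fuse an ID embedding with a set of feature embeddings via attention,
    yielding a semantically enriched ID representation (user or item side)."""

    def __init__(self, num_ids: int, num_features: int, d_model: int = 64):
        super().__init__()
        self.id_emb = nn.Embedding(num_ids, d_model)
        self.feat_emb = nn.Embedding(num_features, d_model)
        # Single-head attention: the ID embedding attends over its features.
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)

    def forward(self, ids: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # ids: (batch,); feats: (batch, n_feats) indices of historical/item features
        q = self.id_emb(ids).unsqueeze(1)   # (batch, 1, d)
        kv = self.feat_emb(feats)           # (batch, n_feats, d)
        fused, _ = self.attn(q, kv, kv)     # ID attends over its feature set
        return (q + fused).squeeze(1)       # residual fusion, (batch, d)


class ExplanationDecoder(nn.Module):
    """Transformer decoder generating explanation tokens conditioned on the
    fused user and item vectors (stand-in for the self-attention module)."""

    def __init__(self, vocab_size: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, user_vec, item_vec):
        # tokens: (batch, seq) previously generated explanation tokens
        memory = torch.stack([user_vec, item_vec], dim=1)  # (batch, 2, d)
        tgt = self.tok_emb(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(h)  # next-token logits over the explanation vocabulary


if __name__ == "__main__":
    user_fuse = IDFeatureFusion(num_ids=1000, num_features=500)
    item_fuse = IDFeatureFusion(num_ids=2000, num_features=500)
    dec = ExplanationDecoder(vocab_size=5000)
    u = user_fuse(torch.tensor([3]), torch.tensor([[1, 7, 9]]))
    i = item_fuse(torch.tensor([42]), torch.tensor([[2, 5, 8]]))
    logits = dec(torch.tensor([[1, 4, 6]]), u, i)
    print(logits.shape)  # torch.Size([1, 3, 5000])
```

In this sketch the sentiment and rhetorical features mentioned in the abstract would simply be additional indices in the feature set passed to the fusion module; how the paper actually encodes them is not specified here.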
About the Journal
The ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) publishes high-quality original archival papers and technical notes in the areas of computation and processing of information in Asian languages, low-resource languages of Africa, Australasia, Oceania and the Americas, as well as related disciplines. The subject areas covered by TALLIP include, but are not limited to:
- Computational Linguistics: including computational phonology, computational morphology, computational syntax (e.g. parsing), computational semantics, computational pragmatics, etc.
- Linguistic Resources: including computational lexicography, terminology, electronic dictionaries, cross-lingual dictionaries, electronic thesauri, etc.
- Hardware and software algorithms and tools for Asian or low-resource language processing, e.g., handwritten character recognition.
- Information Understanding: including text understanding, speech understanding, character recognition, discourse processing, dialogue systems, etc.
- Machine Translation involving Asian or low-resource languages.
- Information Retrieval: including natural language processing (NLP) for concept-based indexing, natural language query interfaces, semantic relevance judgments, etc.
- Information Extraction and Filtering: including automatic abstraction, user profiling, etc.
- Speech processing: including text-to-speech synthesis and automatic speech recognition.
- Multimedia Asian Information Processing: including speech, image, video, image/text translation, etc.
- Cross-lingual information processing involving Asian or low-resource languages.

Papers that deal in theory, systems design, evaluation and applications in the aforesaid subjects are appropriate for TALLIP. Emphasis will be placed on the originality and the practical significance of the reported research.