World Wide Web: Latest Publications

Community aware graph embedding learning for item recommendation
World Wide Web Pub Date: 2023-12-07 DOI: 10.1007/s11280-023-01224-5
Pengyi Hao, Zhaojie Qian, Shuang Wang, Cong Bai
{"title":"Community aware graph embedding learning for item recommendation","authors":"Pengyi Hao, Zhaojie Qian, Shuang Wang, Cong Bai","doi":"10.1007/s11280-023-01224-5","DOIUrl":"https://doi.org/10.1007/s11280-023-01224-5","url":null,"abstract":"<p>Due to the heterogeneity of a large amount of real-world data, meta-paths are widely used in recommendation. Such recommendation methods can represent composite relationships between entities, but cannot explore reliable relations between nodes and influence among meta-paths. For solving this problem, a <b>C</b>ommunity <b>A</b>ware Graph <b>E</b>mbedding Learning method for <b>I</b>tem <b>Rec</b>ommendation(<b>CAEIRec</b>) is proposed. By adaptively constructing communities for nodes in the graph of entities, the correlations of nodes are embedded in graph learning from the aspect of community structure. Semantic information of users and items are jointly learnt in the embedding. Finally, the embeddings of users and items are fed to extend matrix factorization for getting the top recommendations. A series of comprehensive experiments are conducted on two different public datasets. The empirical results show that CAEIRec is an encouraging recommendation method by the comarison with the state-of-the-art methods. Source code of CAEIRec is available at https://github.com/a545187002/CAEIRec-tensorflow.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138580742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
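The final step described in the abstract, scoring items against a user embedding learned from the community-aware graph and keeping the top-N, can be illustrated with a minimal sketch. The random embeddings below are stand-ins for the learned ones, and the function name and shapes are illustrative, not CAEIRec's actual interface.

```python
import numpy as np

def top_n_recommendations(user_emb, item_embs, seen_items, n=10):
    """Rank unseen items for one user by dot-product score.

    user_emb:   (d,) embedding of the target user (assumed output of the
                community-aware graph-learning stage).
    item_embs:  (num_items, d) matrix of item embeddings.
    seen_items: set of item indices the user has already interacted with.
    """
    scores = item_embs @ user_emb            # inner-product preference scores
    scores[list(seen_items)] = -np.inf       # never recommend already-seen items
    return np.argsort(-scores)[:n]           # indices of the top-n items

# Toy usage with random embeddings standing in for the learned ones.
rng = np.random.default_rng(0)
user = rng.normal(size=64)
items = rng.normal(size=(1000, 64))
print(top_n_recommendations(user, items, seen_items={3, 17}, n=5))
```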
Entity alignment via graph neural networks: a component-level study
World Wide Web Pub Date: 2023-11-29 DOI: 10.1007/s11280-023-01221-8
Yanfeng Shu, Ji Zhang, Guangyan Huang, Chi-Hung Chi, Jing He
{"title":"Entity alignment via graph neural networks: a component-level study","authors":"Yanfeng Shu, Ji Zhang, Guangyan Huang, Chi-Hung Chi, Jing He","doi":"10.1007/s11280-023-01221-8","DOIUrl":"https://doi.org/10.1007/s11280-023-01221-8","url":null,"abstract":"<p>Entity alignment plays an essential role in the integration of knowledge graphs (KGs) as it seeks to identify entities that refer to the same real-world objects across different KGs. Recent research has primarily centred on embedding-based approaches. Among these approaches, there is a growing interest in graph neural networks (GNNs) due to their ability to capture complex relationships and incorporate node attributes within KGs. Despite the presence of several surveys in this area, they often lack comprehensive investigations specifically targeting GNN-based approaches. Moreover, they tend to evaluate overall performance without analysing the impact of individual components and methods. To bridge these gaps, this paper presents a framework for GNN-based entity alignment that captures the key characteristics of these approaches. We conduct a fine-grained analysis of individual components and assess their influences on alignment results. Our findings highlight specific module options that significantly affect the alignment outcomes. By carefully selecting suitable methods for combination, even basic GNN networks can achieve competitive alignment results.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
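Most GNN-based aligners studied in this kind of framework share the same final step: once entities from both KGs are embedded into a common space, alignment reduces to nearest-neighbour search over embedding similarity. The sketch below shows only that step, with random embeddings standing in for GNN outputs; it is not the paper's evaluation framework.

```python
import numpy as np

def align_entities(src_embs, tgt_embs):
    """Greedy nearest-neighbour alignment between two embedded KGs.

    src_embs: (n_src, d) entity embeddings from the source KG.
    tgt_embs: (n_tgt, d) entity embeddings from the target KG.
    Both are assumed to live in a shared space produced by a GNN encoder.
    Returns, for each source entity, the index of its most similar target entity.
    """
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    sim = src @ tgt.T                  # cosine similarity matrix
    return sim.argmax(axis=1)          # nearest target per source entity

rng = np.random.default_rng(1)
print(align_entities(rng.normal(size=(5, 32)), rng.normal(size=(8, 32))))
```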
Cross-domain aspect-based sentiment analysis using domain adversarial training
World Wide Web Pub Date: 2023-11-22 DOI: 10.1007/s11280-023-01217-4
Joris Knoester, Flavius Frasincar, Maria Mihaela Truşcǎ
{"title":"Cross-domain aspect-based sentiment analysis using domain adversarial training","authors":"Joris Knoester, Flavius Frasincar, Maria Mihaela Truşcǎ","doi":"10.1007/s11280-023-01217-4","DOIUrl":"https://doi.org/10.1007/s11280-023-01217-4","url":null,"abstract":"<p>Over the last decades, the increasing popularity of the Web came together with an extremely large volume of reviews on products and services useful for both companies and customers to adjust their behaviour with respect to the expressed opinions. Given this growth, Aspect-Based Sentiment Analysis (ABSA) has turned out to be an important tool required to understand people’s preferences. However, despite the large volume of data, the lack of data annotations restricts the supervised ABSA analysis to only a limited number of domains. To tackle this problem a transfer learning strategy is implemented by extending the state-of-the-art LCR-Rot-hop++ model for ABSA with the methodology of Domain Adversarial Training (DAT). The output is a cross-domain deep learning structure, called DAT-LCR-Rot-hop++. The major advantage of DAT-LCR-Rot-hop++ is the fact that it does not require any labeled target domain data. The results are obtained for six different domain combinations with testing accuracies ranging from 35% up until 74%, showing both the limitations and benefits of this approach. Once DAT-LCR-Rot-hop++ is able to find the similarities between domains, it produces good results. However, if the domains are too distant, it is not capable of generating domain-invariant features. This result is amplified by our additional analysis to add the neutral aspects to the positive or negative class. The performance of DAT-LCR-Rot-hop++ is very dependent on the similarity between distributions of source and target domain and the presence of a dominant sentiment class in the training set.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
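The transfer mechanism named in the abstract, Domain Adversarial Training, is conventionally implemented with a gradient reversal layer between a shared feature extractor and a domain discriminator. The PyTorch sketch below illustrates that generic pattern only; it is not the LCR-Rot-hop++ architecture, and all layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DATHead(nn.Module):
    """Shared feature extractor feeding a sentiment classifier and a domain
    discriminator. The gradient reversal pushes features towards domain invariance."""
    def __init__(self, in_dim, hidden=64, n_classes=3, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.sentiment = nn.Linear(hidden, n_classes)   # trained on labelled source data
        self.domain = nn.Linear(hidden, n_domains)      # trained on source + target data

    def forward(self, x, lam=1.0):
        h = self.features(x)
        return self.sentiment(h), self.domain(GradReverse.apply(h, lam))

model = DATHead(in_dim=128)
sent_logits, dom_logits = model(torch.randn(4, 128), lam=0.5)
print(sent_logits.shape, dom_logits.shape)
```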
Death comes but why: A multi-task memory-fused prediction for accurate and explainable illness severity in ICUs
World Wide Web Pub Date: 2023-11-16 DOI: 10.1007/s11280-023-01211-w
Weitong Chen, Wei Emma Zhang, Lin Yue
{"title":"Death comes but why: A multi-task memory-fused prediction for accurate and explainable illness severity in ICUs","authors":"Weitong Chen, Wei Emma Zhang, Lin Yue","doi":"10.1007/s11280-023-01211-w","DOIUrl":"https://doi.org/10.1007/s11280-023-01211-w","url":null,"abstract":"<p>Predicting the severity of an illness is crucial in intensive care units (ICUs) if a patient‘s life is to be saved. The existing prediction methods often fail to provide sufficient evidence for time-critical decisions required in dynamic and changing ICU environments. In this research, a new method called MM-RNN (multi-task memory-fused recurrent neural network) was developed to predict the severity of illnesses in intensive care units (ICUs). MM-RNN aims to address this issue by not only predicting illness severity but also generating an evidence-based explanation of how the prediction was made. The architecture of MM-RNN consists of task-specific phased LSTMs and a delta memory network that captures asynchronous feature correlations within and between multiple organ systems. The multi-task nature of MM-RNN allows it to provide an evidence-based explanation of its predictions, along with illness severity scores and a heatmap of the patient’s changing condition. The results of comparison with state-of-the-art methods on real-world clinical data show that MM-RNN delivers more accurate predictions of illness severity with the added benefit of providing evidence-based justifications.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
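The multi-task idea, one shared sequence encoder feeding both a severity-score head and a classification head, can be sketched as follows. This uses a plain LSTM rather than the phased LSTMs and delta memory network described in the abstract, and all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskSeverityModel(nn.Module):
    """Shared LSTM encoder over vital-sign sequences with two heads:
    a regression head for an illness-severity score and a classification
    head for mortality risk. Loosely mirrors the multi-task setup in the
    abstract; the real MM-RNN uses phased LSTMs and a delta memory network."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.severity_head = nn.Linear(hidden, 1)    # severity score (regression)
        self.mortality_head = nn.Linear(hidden, 2)   # survive / die (classification)

    def forward(self, x):                # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)
        h = h[-1]                        # final hidden state of the last layer
        return self.severity_head(h).squeeze(-1), self.mortality_head(h)

model = MultiTaskSeverityModel(n_features=20)
severity, mortality_logits = model(torch.randn(8, 48, 20))   # e.g. 48 hourly measurements
print(severity.shape, mortality_logits.shape)
```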
Intrinsically motivated reinforcement learning based recommendation with counterfactual data augmentation
World Wide Web Pub Date: 2023-07-15 DOI: 10.1007/s11280-023-01187-7
Xiaocong Chen, Siyu Wang, Lianyong Qi, Yong Li, Lina Yao
{"title":"Intrinsically motivated reinforcement learning based recommendation with counterfactual data augmentation","authors":"Xiaocong Chen, Siyu Wang, Lianyong Qi, Yong Li, Lina Yao","doi":"10.1007/s11280-023-01187-7","DOIUrl":"https://doi.org/10.1007/s11280-023-01187-7","url":null,"abstract":"<p>Deep reinforcement learning (DRL) has shown promising results in modeling dynamic user preferences in RS in recent literature. However, training a DRL agent in the sparse RS environment poses a significant challenge. This is because the agent must balance between exploring informative user-item interaction trajectories and using existing trajectories for policy learning, a known exploration and exploitation trade-off. This trade-off greatly affects the recommendation performance when the environment is sparse. In DRL-based RS, balancing exploration and exploitation is even more challenging as the agent needs to deeply explore informative trajectories and efficiently exploit them in the context of RS. To address this issue, we propose a novel intrinsically motivated reinforcement learning (IMRL) method that enhances the agent’s capability to explore informative interaction trajectories in the sparse environment. We further enrich these trajectories via an adaptive counterfactual augmentation strategy with a customised threshold to improve their efficiency in exploitation. Our approach is evaluated on six offline datasets and three online simulation platforms, demonstrating its superiority over existing state-of-the-art methods. The extensive experiments show that our IMRL method outperforms other methods in terms of recommendation performance in the sparse RS environment.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138510820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
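The intrinsic-motivation component can be pictured as reward shaping: an exploration bonus added to the environment reward that decays as a state becomes familiar. The sketch below uses a simple count-based bonus as a generic stand-in; the bonus form and the `beta` weight are illustrative assumptions, not IMRL's actual formulation.

```python
import math
from collections import Counter

class IntrinsicRewardShaper:
    """Adds a count-based novelty bonus to the extrinsic environment reward.
    A generic stand-in for the intrinsic-motivation idea in the abstract,
    not the IMRL formulation itself; `beta` and the bonus form are illustrative."""
    def __init__(self, beta=0.1):
        self.beta = beta
        self.visit_counts = Counter()

    def shape(self, state_key, extrinsic_reward):
        self.visit_counts[state_key] += 1
        bonus = self.beta / math.sqrt(self.visit_counts[state_key])  # decays with familiarity
        return extrinsic_reward + bonus

shaper = IntrinsicRewardShaper(beta=0.5)
print(shaper.shape("user42:item7", extrinsic_reward=0.0))  # first visit  -> larger bonus
print(shaper.shape("user42:item7", extrinsic_reward=0.0))  # repeat visit -> smaller bonus
```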