Neurocomputing: Latest Articles

SF-GPT: A training-free method to enhance capabilities for knowledge graph construction in LLMs
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-21 DOI: 10.1016/j.neucom.2024.128726
Lizhuang Sun, Peng Zhang, Fang Gao, Yuan An, Zhixing Li, Yuanwei Zhao
{"title":"SF-GPT: A training-free method to enhance capabilities for knowledge graph construction in LLMs","authors":"Lizhuang Sun,&nbsp;Peng Zhang,&nbsp;Fang Gao,&nbsp;Yuan An,&nbsp;Zhixing Li,&nbsp;Yuanwei Zhao","doi":"10.1016/j.neucom.2024.128726","DOIUrl":"10.1016/j.neucom.2024.128726","url":null,"abstract":"<div><div>Knowledge graphs (KGs) are constructed by extracting knowledge triples from text and fusing knowledge, enhancing information retrieval efficiency. Current methods for knowledge triple extraction include ”Pretrain and Fine-tuning” and Large Language Models (LLMs). The former shifts effort from manual extraction to dataset annotation and suffers from performance degradation with different test and training set distributions. LLMs-based methods face errors and incompleteness in extraction. We introduce SF-GPT, a training-free method to address these issues. Firstly, we propose the Entity Extraction Filter (EEF) module to filter triple generation results, addressing evaluation and cleansing challenges. Secondly, we introduce a training-free Entity Alignment Module based on Entity Alias Generation (EAG), tackling semantic richness and interpretability issues in LLM-based knowledge fusion. Finally, our Self-Fusion Subgraph strategy uses multi-response self-fusion and a common entity list to filter triple results, reducing noise from LLMs’ multi-responses. In experiments, SF-GPT showed a 55.5% increase in recall and a 32.6% increase in F1 score on the BDNC dataset compared to the UniRel model trained on the NYT dataset and achieved a 5% improvement in F1 score compared to GPT-4+EEF baseline on the WebNLG dataset in the case of a fusion round of three. SF-GPT offers a promising way to extract knowledge from unstructured information.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128726"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
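The Self-Fusion Subgraph idea described above, keeping only triples whose entities recur across several LLM responses, can be illustrated with a minimal sketch. The function name, triple format, and support threshold below are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter
from itertools import chain

def self_fusion(responses, min_support=2):
    """Fuse several LLM triple-extraction responses (hypothetical sketch).

    responses: list of triple lists, one list per LLM response,
               each triple being a (head, relation, tail) tuple.
    min_support: keep an entity only if it appears in at least this many
                 responses (a stand-in for the 'common entity list').
    """
    # Count in how many responses each entity occurs.
    entity_support = Counter()
    for triples in responses:
        entities = {e for h, _, t in triples for e in (h, t)}
        entity_support.update(entities)

    common_entities = {e for e, c in entity_support.items() if c >= min_support}

    # Keep a triple only if both of its entities are "common".
    fused = {
        triple
        for triple in chain.from_iterable(responses)
        if triple[0] in common_entities and triple[2] in common_entities
    }
    return sorted(fused)

# Example: three noisy responses for the same passage.
r1 = [("Ada Lovelace", "born_in", "London"), ("Ada Lovelace", "field", "Math")]
r2 = [("Ada Lovelace", "born_in", "London"), ("Babbage", "built", "Engine")]
r3 = [("Ada Lovelace", "born_in", "London")]
print(self_fusion([r1, r2, r3]))  # only the well-supported triple survives
```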
Global and local semantic enhancement of samples for cross-modal hashing
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-21 DOI: 10.1016/j.neucom.2024.128678
Shaohua Teng, Yongqi Chen, Zefeng Zheng, Wei Zhang, Peipei Kang, Naiqi Wu
{"title":"Global and local semantic enhancement of samples for cross-modal hashing","authors":"Shaohua Teng ,&nbsp;Yongqi Chen ,&nbsp;Zefeng Zheng ,&nbsp;Wei Zhang ,&nbsp;Peipei Kang ,&nbsp;Naiqi Wu","doi":"10.1016/j.neucom.2024.128678","DOIUrl":"10.1016/j.neucom.2024.128678","url":null,"abstract":"<div><div>Hashing becomes popular in cross-modal retrieval due to its exceptional performance in both search and storage. However, existing cross-modal hashing (CMH) methods may (a) neglect to learn sufficient modal-specific information, and (b) fail to fully exploit sample semantics. To address these issues, we propose a method called Semantic Enhancement of Sample Hashing (SESH). First, SESH employs a global modal-specific learning strategy to draw overall shared information and global modal-specific information by factoring the mapping matrix. Second, SESH introduces manifold learning and a local modal-specific learning strategy to extract additional local modal-specific and modal-shared data under label guidance. Meanwhile, local modal-specific information is integrated with global modal-specific details to add rich modal-specific information. Third, SESH uses discrete maximum similarity and orthogonal constraint transformation to enhance both global and local semantic information, embedding more discriminative information into the Hamming space. Finally, an efficient discrete optimization algorithm is proposed to generate the hash codes directly. Experiments on three datasets demonstrate the superior performance of SESH. The source code will be available at <span><span>https://github.com/kokorording/SESH</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128678"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
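For readers unfamiliar with the retrieval step that cross-modal hashing targets, the sketch below shows how real-valued embeddings from two modalities can be binarized and compared in Hamming space. It is a generic illustration with random stand-in features, not SESH's learning or discrete optimization algorithm.

```python
import numpy as np

def to_hash_codes(features):
    """Binarize real-valued embeddings into +/-1 hash codes via sign()."""
    return np.where(features >= 0, 1, -1)

def hamming_distance(query_code, db_codes):
    """Hamming distance between one code and a database of codes.

    With +/-1 codes of length k, distance = (k - dot(q, d)) / 2.
    """
    k = query_code.shape[0]
    return (k - db_codes @ query_code) // 2

# Toy example: a text query retrieving image codes (random stand-in features).
rng = np.random.default_rng(0)
image_codes = to_hash_codes(rng.normal(size=(1000, 64)))   # database modality
text_query = to_hash_codes(rng.normal(size=64))            # query modality
dists = hamming_distance(text_query, image_codes)
top5 = np.argsort(dists)[:5]
print(top5, dists[top5])
```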
Mamba-in-Mamba: Centralized Mamba-Cross-Scan in Tokenized Mamba Model for Hyperspectral image classification
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-21 DOI: 10.1016/j.neucom.2024.128751
Weilian Zhou, Sei-ichiro Kamata, Haipeng Wang, Man Sing Wong, Huiying (Cynthia) Hou
{"title":"Mamba-in-Mamba: Centralized Mamba-Cross-Scan in Tokenized Mamba Model for Hyperspectral image classification","authors":"Weilian Zhou ,&nbsp;Sei-ichiro Kamata ,&nbsp;Haipeng Wang ,&nbsp;Man Sing Wong ,&nbsp;Huiying (Cynthia) Hou","doi":"10.1016/j.neucom.2024.128751","DOIUrl":"10.1016/j.neucom.2024.128751","url":null,"abstract":"<div><div>Hyperspectral image (HSI) classification plays a crucial role in remote sensing (RS) applications, enabling the precise identification of materials and land cover based on spectral information. This supports tasks such as agricultural management and urban planning. While sequential neural models like Recurrent Neural Networks (RNNs) and Transformers have been adapted for this task, they present limitations: RNNs struggle with feature aggregation and are sensitive to noise from interfering pixels, whereas Transformers require extensive computational resources and tend to underperform when HSI datasets contain limited or unbalanced training samples. To address these challenges, Mamba architectures have emerged, offering a balance between RNNs and Transformers by leveraging lightweight, parallel scanning capabilities. Although models like Vision Mamba (ViM) and Visual Mamba (VMamba) have demonstrated improvements in visual tasks, their application to HSI classification remains underexplored, particularly in handling land-cover semantic tokens and multi-scale feature aggregation for patch-wise classifiers. In response, this study introduces the Mamba-in-Mamba (MiM) architecture for HSI classification, marking a pioneering effort in this domain. The MiM model features: (1) a novel centralized Mamba-Cross-Scan (MCS) mechanism for efficient image-to-sequence data transformation; (2) a Tokenized Mamba (T-Mamba) encoder that incorporates a Gaussian Decay Mask (GDM), Semantic Token Learner (STL), and Semantic Token Fuser (STF) for enhanced feature generation; and (3) a Weighted MCS Fusion (WMF) module with a Multi-Scale Loss Design for improved training efficiency. Experimental results on four public HSI datasets—Indian Pines, Pavia University, Houston2013, and WHU-Hi-Honghu—demonstrate that our method achieves an overall accuracy improvement of up to 3.3%, 2.7%, 1.5%, and 2.3% over state-of-the-art approaches (i.e., SSFTT, MAEST, etc.) under both fixed and disjoint training-testing settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128751"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
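The image-to-sequence step that the Mamba-Cross-Scan mechanism addresses can be sketched as unfolding an HSI patch into several 1-D scan orders. The four-direction scheme below is a generic cross-scan assumption; the paper's centralized MCS re-orders pixels differently.

```python
import numpy as np

def cross_scan_sequences(patch):
    """Unfold an H x W x C patch into four 1-D scan orders (generic cross-scan).

    Returns a list of (H*W, C) sequences: row-major, reverse row-major,
    column-major, reverse column-major. The paper's *centralized* MCS
    re-orders pixels around the centre pixel; this sketch only shows the
    basic image-to-sequence step that any Mamba-style scanner needs.
    """
    h, w, c = patch.shape
    row_major = patch.reshape(h * w, c)
    col_major = patch.transpose(1, 0, 2).reshape(h * w, c)
    return [row_major, row_major[::-1], col_major, col_major[::-1]]

# Toy 7x7 patch with 30 spectral bands (random stand-in for HSI data).
patch = np.random.rand(7, 7, 30)
for seq in cross_scan_sequences(patch):
    print(seq.shape)  # (49, 30) each, ready to feed a sequence model
```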
Symbolic equation solving via reinforcement learning
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-21 DOI: 10.1016/j.neucom.2024.128732
Lennart Dabelow, Masahito Ueda
{"title":"Symbolic equation solving via reinforcement learning","authors":"Lennart Dabelow ,&nbsp;Masahito Ueda","doi":"10.1016/j.neucom.2024.128732","DOIUrl":"10.1016/j.neucom.2024.128732","url":null,"abstract":"<div><div>Machine-learning methods are rapidly being adopted in a wide variety of social, economic, and scientific contexts, yet they are notorious for struggling with exact mathematics. A typical example is computer algebra, which includes tasks like simplifying mathematical terms, calculating formal derivatives, or finding exact solutions of algebraic equations. Traditional software packages for these purposes are commonly based on a huge database of rules for how a specific operation (e.g., differentiation) transforms a certain term (e.g., sine function) into another one (e.g., cosine function). These rules have usually needed to be discovered and subsequently programmed by humans. Efforts to automate this process by machine-learning approaches are faced with challenges like the singular nature of solutions to mathematical problems, when approximations are unacceptable, as well as hallucination effects leading to flawed reasoning. We propose a novel deep-learning interface involving a reinforcement-learning agent that operates a symbolic stack calculator to explore mathematical relations. By construction, this system is capable of exact transformations and immune to hallucination. Using the paradigmatic example of solving linear equations in symbolic form, we demonstrate how our reinforcement-learning agent autonomously discovers elementary transformation rules and step-by-step solutions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128732"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
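A toy environment in the spirit of the described setup: the state is a symbolic linear equation and the agent applies exact transformations until the unknown is isolated. The action set, reward values, and use of SymPy are illustrative assumptions, not the paper's stack-calculator interface.

```python
import sympy as sp

class LinearEquationEnv:
    """Toy environment: transform a symbolic linear equation a*x + b = 0
    until x stands alone. The action set and reward are illustrative
    assumptions, not the paper's stack-calculator interface."""

    def __init__(self):
        self.x, self.a, self.b = sp.symbols("x a b", nonzero=True)
        self.eq = sp.Eq(self.a * self.x + self.b, 0)

    def step(self, action):
        lhs, rhs = self.eq.lhs, self.eq.rhs
        if action == "sub_b":          # subtract b from both sides
            self.eq = sp.Eq(sp.expand(lhs - self.b), sp.expand(rhs - self.b))
        elif action == "div_a":        # divide both sides by a
            self.eq = sp.Eq(sp.cancel(lhs / self.a), sp.cancel(rhs / self.a))
        solved = self.eq.lhs == self.x   # is x isolated on the left?
        reward = 1.0 if solved else -0.01
        return self.eq, reward, solved

env = LinearEquationEnv()
for act in ["sub_b", "div_a"]:           # the sequence an agent should discover
    state, reward, done = env.step(act)
    print(act, state, reward, done)
# sub_b -> Eq(a*x, -b); div_a -> Eq(x, -b/a), reward 1.0, done True
```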
Word vector embedding and self-supplementing network for Generalized Few-shot Semantic Segmentation
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-21 DOI: 10.1016/j.neucom.2024.128737
Xiaowei Wang, Qiong Chen, Yong Yang
{"title":"Word vector embedding and self-supplementing network for Generalized Few-shot Semantic Segmentation","authors":"Xiaowei Wang,&nbsp;Qiong Chen,&nbsp;Yong Yang","doi":"10.1016/j.neucom.2024.128737","DOIUrl":"10.1016/j.neucom.2024.128737","url":null,"abstract":"<div><div>Under the condition of sufficient base class samples and a few novel class samples, Generalized Few-shot Semantic Segmentation (GFSS) classifies each pixel in the query image as base class, novel class, or background. A standard GFSS approach involves two training stages: base class learning and novel class updating. However, inter-class interference and information loss which contribute to the poor performance of GFSS, have not been synthetical considered. To address the problem, we propose an Embedded-Self-Supplementing Network (ESSNet), i.e., semantic word embedding and query set self-supplementing information to enhance segmentation accuracy. Specifically, the semantic word embedding module employs distance information between word vectors to assist the model in learning the distance between class prototypes. In order to transform the semantic word vector prototypes from the semantic space to the visual embedding space, we designed a triplet loss function to supervise the word vector embedding module, where the word vector prototype serves as an anchor and positive-negative samples are collected among the general features of the support image. To compensate for the information loss caused by using prototypes to represent classes, we propose a self-supplementing module to mine the information contained in the query image. Specifically, this module first makes a preliminary prediction on the query image, then selects high-confidence area to form pseudo labels, and finally uses pseudo labels to extract query prototypes to supplement the missing information. Extensive experiments on PASCAL-5<span><math><msup><mrow></mrow><mrow><mi>i</mi></mrow></msup></math></span> and COCO-20<span><math><msup><mrow></mrow><mrow><mi>i</mi></mrow></msup></math></span> show that ESSNet has superior performance and outperforms state-of-the-art methods in all settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128737"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
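The triplet supervision described above, with a word-vector prototype as the anchor, can be sketched as follows. The cosine distance, margin, and reduction over all positive-negative pairs are assumptions for illustration; the paper defines its own sampling and loss form.

```python
import torch
import torch.nn.functional as F

def word_vector_triplet_loss(word_proto, pos_feats, neg_feats, margin=0.5):
    """Triplet loss with a class word-vector prototype as anchor (sketch).

    word_proto: (d,) word embedding mapped into the visual feature space.
    pos_feats:  (Np, d) support-pixel features of the same class (positives).
    neg_feats:  (Nn, d) features of other classes / background (negatives).
    The margin and the mean-over-pairs reduction are illustrative assumptions.
    """
    d_pos = 1 - F.cosine_similarity(word_proto.unsqueeze(0), pos_feats, dim=1)
    d_neg = 1 - F.cosine_similarity(word_proto.unsqueeze(0), neg_feats, dim=1)
    # Push the anchor closer to every positive than to every negative by `margin`.
    loss = F.relu(d_pos.unsqueeze(1) - d_neg.unsqueeze(0) + margin)
    return loss.mean()

# Toy usage with random stand-in features.
proto = torch.randn(256)
loss = word_vector_triplet_loss(proto, torch.randn(10, 256), torch.randn(40, 256))
print(loss.item())
```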
A comprehensive review of deep learning for medical image segmentation
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-20 DOI: 10.1016/j.neucom.2024.128740
Qingling Xia, Hong Zheng, Haonan Zou, Dinghao Luo, Hongan Tang, Lingxiao Li, Bin Jiang
{"title":"A comprehensive review of deep learning for medical image segmentation","authors":"Qingling Xia ,&nbsp;Hong Zheng ,&nbsp;Haonan Zou ,&nbsp;Dinghao Luo ,&nbsp;Hongan Tang ,&nbsp;Lingxiao Li ,&nbsp;Bin Jiang","doi":"10.1016/j.neucom.2024.128740","DOIUrl":"10.1016/j.neucom.2024.128740","url":null,"abstract":"<div><div>Medical image segmentation provides detailed mappings of regions of interest, facilitating precise identification of critical areas and greatly aiding in the diagnosis, treatment, and understanding of diverse medical conditions. However, conventional techniques frequently rely on hand-crafted feature-based approaches, posing challenges when dealing with complex medical images, leading to issues such as low accuracy and sensitivity to noise. Recent years have seen substantial research focused on the effectiveness of deep learning models for segmenting medical images. In this study, we present a comprehensive review of the various deep learning-based approaches for medical image segmentation and provide a detailed analysis of their contributions to the domain. These methods can be broadly categorized into five groups: CNN-based methods, Transformer-based methods, Mamba-based methods, semi-supervised learning methods, and weakly supervised learning methods. Convolutional Neural Networks (CNNs), with their efficient feature self-learning, have driven major advances in medical image segmentation. Subsequently, Transformers, leveraging self-attention mechanisms, have achieved performance on par with or surpassing Convolutional Neural Networks. Mamba-based methods, as a novel selective state-space model, are emerging as a promising direction. Furthermore, due to the limited availability of annotated medical images, research in weakly supervised and semi-supervised learning continues to evolve. This review covers common evaluation methods, datasets, and deep learning applications in diagnosing and treating skin lesions, hippocampus, tumors, and polyps. Finally, we identify key challenges such as limited data, diverse modalities, noise, and clinical applicability, and propose future research in zero-shot segmentation, transfer learning, and multi-modal techniques to advance the development of medical image segmentation technology.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128740"},"PeriodicalIF":5.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-attribute balanced dataset generation framework AutoSyn and KinFace Channel-Spatial Feature Extractor for kinship recognition
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-20 DOI: 10.1016/j.neucom.2024.128750
Jia-Xuan Jiang, Hongsheng Jing, Ling Zhou, Yuee Li, Zhong Wang
{"title":"Multi-attribute balanced dataset generation framework AutoSyn and KinFace Channel-Spatial Feature Extractor for kinship recognition","authors":"Jia-Xuan Jiang ,&nbsp;Hongsheng Jing ,&nbsp;Ling Zhou ,&nbsp;Yuee Li ,&nbsp;Zhong Wang","doi":"10.1016/j.neucom.2024.128750","DOIUrl":"10.1016/j.neucom.2024.128750","url":null,"abstract":"<div><div>In the field of kinship verification, facial recognition technology is becoming increasingly vital due to privacy concerns, ethical disputes, and the high costs associated with DNA testing. We have developed a novel method, the AutoSyn framework, to synthesize facial images and enhance kinship image datasets, effectively addressing the challenges of scale and quality in existing datasets. By employing a strategy that mixes ages and genders in the synthesized images, we minimize the impact of these factors on kinship recognition tasks. We have enhanced the original KinFaceW-I dataset by integrating ten distinct styles, including diverse combinations of gender, ethnicity, and age. This enrichment significantly improves both the quality and quantity of the images. Furthermore, this paper introduces an efficient feature extractor for kinship tasks, KinFace-CSFE, within a siamese neural network framework. This model not only utilizes meticulously designed channel feature extraction but also incorporates mixed kernel size spatial attention mechanisms to better focus on local features. We have also integrated YOCO data augmentation techniques to simulate complex imaging conditions, enhancing the model’s robustness and accuracy. The effectiveness of these innovations has been validated through experiments on the KinFaceW-I, KinFaceW-II, and synthesized Syn-KinFaceW-I datasets, achieving accuracy rates of 82.7%, 94.1%, and 83.2% respectively. These results significantly surpass both traditional models and current advanced models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128750"},"PeriodicalIF":5.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
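A minimal siamese verification skeleton with channel and spatial attention, in the spirit of the described KinFace-CSFE. The layer sizes, attention design, and pairwise head below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention block (an assumption standing in
    for the paper's KinFace-CSFE; layer sizes are illustrative)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(), nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=7, padding=3),
                                     nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)        # re-weight channels
        return x * self.spatial(x)     # re-weight spatial locations

class SiameseKinshipNet(nn.Module):
    """Siamese verification sketch: shared backbone, attention, pairwise head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            ChannelSpatialAttention(64), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(64 * 2, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        return self.head(torch.cat([fa, fb], dim=1))  # kinship logit

net = SiameseKinshipNet()
logit = net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(logit.shape)  # torch.Size([2, 1])
```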
3D skeleton aware driver behavior recognition framework for autonomous driving system
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-19 DOI: 10.1016/j.neucom.2024.128743
Rongtian Huo, Junkang Chen, Ye Zhang, Qing Gao
{"title":"3D skeleton aware driver behavior recognition framework for autonomous driving system","authors":"Rongtian Huo ,&nbsp;Junkang Chen ,&nbsp;Ye Zhang ,&nbsp;Qing Gao","doi":"10.1016/j.neucom.2024.128743","DOIUrl":"10.1016/j.neucom.2024.128743","url":null,"abstract":"<div><div>The recognition of the driver’s behaviors inside an autonomous vehicle can effectively address emergency handling in autonomous driving and is crucial for ensuring the driver’s safety. Driver behavior recognition is a challenging task due to factors such as variations, diversities, complexities, and strong interferences in behaviors. In this paper, to realize the application in the autonomous driving scenes, a novel 3D skeleton aware behavior recognition framework is proposed to recognize various driver behaviors in autonomous driving systems. First, a 3D human pose estimation network (Pose-GTFNet) with temporal Transformer and spatial graph convolutional network (GCN) is designed to infer 3D human poses from 2D pose sequences. Second, based on the obtained 3D human pose sequences, a behavior recognition network (Beh-MSFNet) with multi-skeleton feature fusion is designed to recognize driver behaviors. In the experiments, the Pose-GTFNet and Beh-MSFNet can get the best performance compared with most state-of-the-art (SOTA) methods on the Human3.6M human pose dataset, JHMDB and SHREC action recognition dataset, respectively. In addition, the proposed driver behavior recognition framework can achieve SOTA performance on the Drive&amp;Act and Driver-Skeleton driver behavior datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128743"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
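The two-stage pipeline described above, lifting 2D pose sequences to 3D and then classifying behavior from the 3D skeletons, can be sketched as below. The per-frame MLP lifter and GRU classifier are simple stand-ins for Pose-GTFNet and Beh-MSFNet; the joint count and class count are assumptions.

```python
import torch
import torch.nn as nn

JOINTS = 17  # COCO-style keypoint count (an illustrative assumption)

class PoseLifter(nn.Module):
    """Stage 1 sketch: lift a 2D pose sequence (T, J, 2) to 3D (T, J, 3).
    A per-frame MLP stands in for the paper's temporal Transformer + GCN."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(JOINTS * 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, JOINTS * 3))

    def forward(self, pose2d):                       # (B, T, J, 2)
        b, t, j, _ = pose2d.shape
        out = self.mlp(pose2d.reshape(b, t, j * 2))  # (B, T, J*3)
        return out.reshape(b, t, j, 3)

class BehaviorClassifier(nn.Module):
    """Stage 2 sketch: classify driver behavior from the 3D skeleton sequence."""
    def __init__(self, num_classes=10, hidden=128):
        super().__init__()
        self.gru = nn.GRU(JOINTS * 3, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, pose3d):                       # (B, T, J, 3)
        b, t, j, c = pose3d.shape
        _, h = self.gru(pose3d.reshape(b, t, j * c))
        return self.fc(h[-1])                        # behavior logits

lifter, classifier = PoseLifter(), BehaviorClassifier()
logits = classifier(lifter(torch.randn(4, 30, JOINTS, 2)))  # 30-frame clip
print(logits.shape)  # torch.Size([4, 10])
```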
H3NI: Non-target-specific node injection attacks on hypergraph neural networks via genetic algorithm
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-19 DOI: 10.1016/j.neucom.2024.128746
Heyuan Shi, Binqi Zeng, Ruishi Yu, Yulin Yang, Zijian Zouxia, Chao Hu, Ronghua Shi
{"title":"H3NI: Non-target-specific node injection attacks on hypergraph neural networks via genetic algorithm","authors":"Heyuan Shi ,&nbsp;Binqi Zeng ,&nbsp;Ruishi Yu ,&nbsp;Yulin Yang ,&nbsp;Zijian Zouxia ,&nbsp;Chao Hu ,&nbsp;Ronghua Shi","doi":"10.1016/j.neucom.2024.128746","DOIUrl":"10.1016/j.neucom.2024.128746","url":null,"abstract":"<div><div>Node injection attack is widely used in graph neural networks (GNNs) attacks, which misleads GNNs by injecting nodes. Though hypergraph neural networks (HNNs) are an extension of GNNs, node injection attacks have not yet been studied in HNNs. Since each edge of a hypergraph can connect more than two nodes, existing node injection methods designed for GNNs cannot effectively select the hyperedges connected to the injected nodes when applied to hypergraphs. In this paper, we propose a <u>H</u>ypergraph <u>N</u>eural <u>N</u>etwork <u>N</u>ode <u>I</u>njection attack method called H3NI, which utilizes a genetic algorithm and a predefined budget model to implement the first black-box node injection framework designed for HNNs attacks. We conducted experiments on the datasets of Cora co-authorship and co-citation. Experimental results show the effectiveness and superior performance of H3NI in attacking HNNs, which reduces the model accuracy by 18%–20% within 5% of the total injected nodes and achieves a 2-4X improvement compared to existing gradient-based node injection methods.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128746"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
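The core of the described attack, using a genetic algorithm to choose which hyperedges an injected node joins under a budget, can be sketched as follows. The fitness function, selection scheme, and all hyperparameters are illustrative assumptions, not the paper's budget model.

```python
import random

def ga_select_hyperedges(num_hyperedges, budget, fitness,
                         pop_size=20, generations=30, mutation_rate=0.1):
    """Genetic-algorithm sketch for picking the hyperedges an injected node joins.

    fitness(edge_subset) is assumed to be a black-box score, e.g. the accuracy
    drop of the victim HNN when the injected node is wired into that subset.
    All hyperparameters here are illustrative, not the paper's settings.
    """
    def random_individual():
        return random.sample(range(num_hyperedges), budget)

    def mutate(ind):
        ind = ind[:]
        if random.random() < mutation_rate:
            ind[random.randrange(budget)] = random.randrange(num_hyperedges)
        # re-sample if the mutation created a duplicate hyperedge
        return ind if len(set(ind)) == budget else random_individual()

    def crossover(a, b):
        # keep half of parent a, fill the rest from parent b, drop duplicates
        return list(dict.fromkeys(a[: budget // 2] + b))[:budget]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                 # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: pretend hyperedges with small indices hurt the model most.
best = ga_select_hyperedges(num_hyperedges=200, budget=5,
                            fitness=lambda s: -sum(s))
print(sorted(best))
```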
Learning accurate neighborhood- and self-information for higher-order relation prediction in Heterogeneous Information Networks
IF 5.5 | CAS Tier 2, Computer Science
Neurocomputing Pub Date: 2024-10-19 DOI: 10.1016/j.neucom.2024.128739
Jie Li, Xuan Guo, Pengfei Jiao, Wenjun Wang
{"title":"Learning accurate neighborhood- and self-information for higher-order relation prediction in Heterogeneous Information Networks","authors":"Jie Li ,&nbsp;Xuan Guo ,&nbsp;Pengfei Jiao ,&nbsp;Wenjun Wang","doi":"10.1016/j.neucom.2024.128739","DOIUrl":"10.1016/j.neucom.2024.128739","url":null,"abstract":"<div><div>Heterogeneous Information Networks (HINs) are commonly employed to model complex real-world scenarios with diverse node and edge types. However, due to constraints in data collection and processing, constructed networks often lack certain relations. Consequently, various methods have emerged, particularly recently, leveraging heterogeneous graph neural networks (HGNNs) to predict missing relations. Nevertheless, these methods primarily focus on pairwise relations between two nodes. Real-world interactions, however, often involve multiple nodes and diverse types, extending beyond simple pairwise relations. For instance, academic collaboration networks may entail interactions among authors, papers, and conferences simultaneously. Despite their prevalence, higher-order relations are often overlooked. While HGNNs are effective at learning network structures, they may suffer from over-smoothing, resulting in similar representations for nodes and their neighbors. The learned inaccurate proximity among nodes impedes the discernment of higher-order relations. Furthermore, observed edges among a target group of nodes can provide valuable evidence for predicting higher-order relations. To address these challenges, we propose a novel model called Accurate Neighborhood- and Self-information Enhanced Heterogeneous Graph Neural Network (ANSE-HGN). Building upon HGNNs to encode network structure and attributes, we introduce a relation-based neighborhood encoder to capture information within multi-hop neighborhoods in heterogeneous higher-order relations. This enables the calculation of accurate proximity among target groups of nodes, thereby enhancing prediction accuracy. Additionally, we leverage self-information from observed higher-order relations as an auxiliary loss to reinforce the learning process. Extensive experiments on four real-world datasets demonstrate the superiority of our proposed method in higher-order relation prediction tasks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128739"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
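A rough illustration of the two signals the abstract combines when scoring a candidate higher-order relation: proximity among a group of nodes and evidence from already-observed edges. The scoring formula and weighting below are assumptions, not ANSE-HGN's actual encoder or loss.

```python
import numpy as np
from itertools import combinations

def score_higher_order_relation(embeddings, group, observed_edges, alpha=0.7):
    """Score a candidate higher-order relation (a set of nodes) -- a sketch.

    Combines (a) mean pairwise cosine proximity of the nodes' learned
    embeddings and (b) the fraction of node pairs in the group that already
    share an observed edge, echoing the paper's use of observed edges as
    auxiliary evidence. The weighting `alpha` is an illustrative assumption.
    """
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    pairs = list(combinations(group, 2))
    proximity = np.mean([cosine(embeddings[i], embeddings[j]) for i, j in pairs])
    evidence = np.mean([(i, j) in observed_edges or (j, i) in observed_edges
                        for i, j in pairs])
    return alpha * proximity + (1 - alpha) * evidence

# Toy usage: 5 nodes with 16-d embeddings and one observed edge.
emb = {i: np.random.rand(16) for i in range(5)}
print(score_higher_order_relation(emb, group=[0, 1, 3], observed_edges={(0, 1)}))
```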