{"title":"SF-GPT: A training-free method to enhance capabilities for knowledge graph construction in LLMs","authors":"Lizhuang Sun, Peng Zhang, Fang Gao, Yuan An, Zhixing Li, Yuanwei Zhao","doi":"10.1016/j.neucom.2024.128726","DOIUrl":"10.1016/j.neucom.2024.128726","url":null,"abstract":"<div><div>Knowledge graphs (KGs) are constructed by extracting knowledge triples from text and fusing knowledge, enhancing information retrieval efficiency. Current methods for knowledge triple extraction include “Pretrain and Fine-tuning” and Large Language Models (LLMs). The former shifts effort from manual extraction to dataset annotation and suffers performance degradation when the test and training set distributions differ. LLM-based methods face errors and incompleteness in extraction. We introduce SF-GPT, a training-free method to address these issues. Firstly, we propose the Entity Extraction Filter (EEF) module to filter triple generation results, addressing evaluation and cleansing challenges. Secondly, we introduce a training-free Entity Alignment Module based on Entity Alias Generation (EAG), tackling semantic richness and interpretability issues in LLM-based knowledge fusion. Finally, our Self-Fusion Subgraph strategy uses multi-response self-fusion and a common entity list to filter triple results, reducing noise from LLMs’ multiple responses. In experiments with three fusion rounds, SF-GPT showed a 55.5% increase in recall and a 32.6% increase in F1 score on the BDNC dataset compared to the UniRel model trained on the NYT dataset, and achieved a 5% improvement in F1 score over the GPT-4+EEF baseline on the WebNLG dataset. 
SF-GPT offers a promising way to extract knowledge from unstructured information.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128726"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
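The multi-response self-fusion idea in the SF-GPT abstract — keep only triples whose entities recur across several LLM responses — can be pictured with a small sketch (our own illustration, not the authors' code; the function name, tuple encoding, and `min_votes` threshold are assumptions):

```python
from collections import Counter

def self_fusion_filter(responses, min_votes=2):
    """Fuse several LLM extraction runs: build a common entity list from
    entities that appear in at least `min_votes` responses, then keep only
    triples whose head and tail are both on that list.
    `responses` is a list of triple lists: [(head, relation, tail), ...]."""
    entity_counts = Counter()
    for triples in responses:
        # count each entity once per response, not once per triple
        seen = {e for (h, _, t) in triples for e in (h, t)}
        entity_counts.update(seen)
    common = {e for e, c in entity_counts.items() if c >= min_votes}

    fused = []
    for triples in responses:
        for h, r, t in triples:
            if h in common and t in common and (h, r, t) not in fused:
                fused.append((h, r, t))
    return fused
```

Entities hallucinated in a single response are filtered out because they never reach the vote threshold.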
{"title":"Global and local semantic enhancement of samples for cross-modal hashing","authors":"Shaohua Teng , Yongqi Chen , Zefeng Zheng , Wei Zhang , Peipei Kang , Naiqi Wu","doi":"10.1016/j.neucom.2024.128678","DOIUrl":"10.1016/j.neucom.2024.128678","url":null,"abstract":"<div><div>Hashing has become popular in cross-modal retrieval due to its exceptional performance in both search and storage. However, existing cross-modal hashing (CMH) methods may (a) neglect to learn sufficient modal-specific information, and (b) fail to fully exploit sample semantics. To address these issues, we propose a method called Semantic Enhancement of Sample Hashing (SESH). First, SESH employs a global modal-specific learning strategy to extract overall shared information and global modal-specific information by factorizing the mapping matrix. Second, SESH introduces manifold learning and a local modal-specific learning strategy to extract additional local modal-specific and modal-shared information under label guidance. Meanwhile, local modal-specific information is integrated with its global counterpart to enrich the modal-specific representation. Third, SESH uses discrete maximum similarity and orthogonal constraint transformation to enhance both global and local semantic information, embedding more discriminative information into the Hamming space. Finally, an efficient discrete optimization algorithm is proposed to generate the hash codes directly. Experiments on three datasets demonstrate the superior performance of SESH. 
The source code will be available at <span>https://github.com/kokorording/SESH</span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128678"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mamba-in-Mamba: Centralized Mamba-Cross-Scan in Tokenized Mamba Model for Hyperspectral image classification","authors":"Weilian Zhou , Sei-ichiro Kamata , Haipeng Wang , Man Sing Wong , Huiying (Cynthia) Hou","doi":"10.1016/j.neucom.2024.128751","DOIUrl":"10.1016/j.neucom.2024.128751","url":null,"abstract":"<div><div>Hyperspectral image (HSI) classification plays a crucial role in remote sensing (RS) applications, enabling the precise identification of materials and land cover based on spectral information. This supports tasks such as agricultural management and urban planning. While sequential neural models like Recurrent Neural Networks (RNNs) and Transformers have been adapted for this task, they present limitations: RNNs struggle with feature aggregation and are sensitive to noise from interfering pixels, whereas Transformers require extensive computational resources and tend to underperform when HSI datasets contain limited or unbalanced training samples. To address these challenges, Mamba architectures have emerged, offering a balance between RNNs and Transformers by leveraging lightweight, parallel scanning capabilities. Although models like Vision Mamba (ViM) and Visual Mamba (VMamba) have demonstrated improvements in visual tasks, their application to HSI classification remains underexplored, particularly in handling land-cover semantic tokens and multi-scale feature aggregation for patch-wise classifiers. In response, this study introduces the Mamba-in-Mamba (MiM) architecture for HSI classification, marking a pioneering effort in this domain. 
The MiM model features: (1) a novel centralized Mamba-Cross-Scan (MCS) mechanism for efficient image-to-sequence data transformation; (2) a Tokenized Mamba (T-Mamba) encoder that incorporates a Gaussian Decay Mask (GDM), Semantic Token Learner (STL), and Semantic Token Fuser (STF) for enhanced feature generation; and (3) a Weighted MCS Fusion (WMF) module with a Multi-Scale Loss Design for improved training efficiency. Experimental results on four public HSI datasets—Indian Pines, Pavia University, Houston2013, and WHU-Hi-Honghu—demonstrate that our method achieves an overall accuracy improvement of up to 3.3%, 2.7%, 1.5%, and 2.3% over state-of-the-art approaches (i.e., SSFTT, MAEST, etc.) under both fixed and disjoint training-testing settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128751"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symbolic equation solving via reinforcement learning","authors":"Lennart Dabelow , Masahito Ueda","doi":"10.1016/j.neucom.2024.128732","DOIUrl":"10.1016/j.neucom.2024.128732","url":null,"abstract":"<div><div>Machine-learning methods are rapidly being adopted in a wide variety of social, economic, and scientific contexts, yet they are notorious for struggling with exact mathematics. A typical example is computer algebra, which includes tasks like simplifying mathematical terms, calculating formal derivatives, or finding exact solutions of algebraic equations. Traditional software packages for these purposes are commonly based on a huge database of rules for how a specific operation (e.g., differentiation) transforms a certain term (e.g., sine function) into another one (e.g., cosine function). These rules have usually needed to be discovered and subsequently programmed by humans. Efforts to automate this process by machine-learning approaches are faced with challenges like the singular nature of solutions to mathematical problems, when approximations are unacceptable, as well as hallucination effects leading to flawed reasoning. We propose a novel deep-learning interface involving a reinforcement-learning agent that operates a symbolic stack calculator to explore mathematical relations. By construction, this system is capable of exact transformations and immune to hallucination. 
Using the paradigmatic example of solving linear equations in symbolic form, we demonstrate how our reinforcement-learning agent autonomously discovers elementary transformation rules and step-by-step solutions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128732"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
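The linear-equation setting above can be pictured with a toy exact-arithmetic environment (our own illustration of the setup, not the paper's code: the state encoding, action names, and the greedy "policy" standing in for the learned agent are all assumptions):

```python
from fractions import Fraction

# State is the equation a*x + b = c over exact rationals; each action
# applies the same exact transformation to both sides, so no
# floating-point approximation (and no hallucination) can occur.

def step(state, action):
    a, b, c = state
    if action == "sub_b":            # subtract b from both sides
        return (a, Fraction(0), c - b)
    if action == "div_a":            # divide both sides by a (a != 0)
        return (Fraction(1), b / a, c / a)
    raise ValueError(action)

def solved(state):
    a, b, c = state
    return a == 1 and b == 0         # equation reduced to x = c

def solve(a, b, c):
    """Greedy stand-in for the policy an RL agent would have to discover:
    returns the exact solution and the action trace."""
    state = (Fraction(a), Fraction(b), Fraction(c))
    trace = []
    for action in ("sub_b", "div_a"):
        if not solved(state):
            state = step(state, action)
            trace.append(action)
    return state[2], trace
```

The agent's task is precisely to discover such action sequences by trial and error, with reward tied to reaching a solved state.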
{"title":"Word vector embedding and self-supplementing network for Generalized Few-shot Semantic Segmentation","authors":"Xiaowei Wang, Qiong Chen, Yong Yang","doi":"10.1016/j.neucom.2024.128737","DOIUrl":"10.1016/j.neucom.2024.128737","url":null,"abstract":"<div><div>Given sufficient base class samples and a few novel class samples, Generalized Few-shot Semantic Segmentation (GFSS) classifies each pixel in the query image as base class, novel class, or background. A standard GFSS approach involves two training stages: base class learning and novel class updating. However, inter-class interference and information loss, which contribute to the poor performance of GFSS, have not been jointly considered. To address the problem, we propose an Embedded-Self-Supplementing Network (ESSNet), which combines semantic word embedding with self-supplementing information from the query set to enhance segmentation accuracy. Specifically, the semantic word embedding module employs distance information between word vectors to assist the model in learning the distance between class prototypes. To transform the semantic word vector prototypes from the semantic space to the visual embedding space, we designed a triplet loss function to supervise the word vector embedding module, where the word vector prototype serves as the anchor and positive and negative samples are collected among the general features of the support image. To compensate for the information loss caused by using prototypes to represent classes, we propose a self-supplementing module to mine the information contained in the query image. Specifically, this module first makes a preliminary prediction on the query image, then selects high-confidence areas to form pseudo labels, and finally uses the pseudo labels to extract query prototypes that supplement the missing information. 
Extensive experiments on PASCAL-5<span><math><msup><mrow></mrow><mrow><mi>i</mi></mrow></msup></math></span> and COCO-20<span><math><msup><mrow></mrow><mrow><mi>i</mi></mrow></msup></math></span> show that ESSNet has superior performance and outperforms state-of-the-art methods in all settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128737"},"PeriodicalIF":5.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
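The triplet supervision described in the ESSNet abstract — word-vector prototype as anchor, positives and negatives drawn from support features — reduces to the standard margin-based triplet loss. A minimal sketch (our illustration; the margin value and the squared-Euclidean distance are assumptions, not details from the abstract):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss,
    L = max(0, ||a - p||^2 - ||a - n||^2 + margin):
    pulls the anchor toward the positive sample and pushes it
    away from the negative sample until the margin is satisfied."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))
```

Here the anchor would be the embedded word-vector prototype, so minimizing the loss drags it toward the visual features of its own class.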
{"title":"A comprehensive review of deep learning for medical image segmentation","authors":"Qingling Xia , Hong Zheng , Haonan Zou , Dinghao Luo , Hongan Tang , Lingxiao Li , Bin Jiang","doi":"10.1016/j.neucom.2024.128740","DOIUrl":"10.1016/j.neucom.2024.128740","url":null,"abstract":"<div><div>Medical image segmentation provides detailed mappings of regions of interest, facilitating precise identification of critical areas and greatly aiding in the diagnosis, treatment, and understanding of diverse medical conditions. However, conventional techniques frequently rely on hand-crafted feature-based approaches, posing challenges when dealing with complex medical images, leading to issues such as low accuracy and sensitivity to noise. Recent years have seen substantial research focused on the effectiveness of deep learning models for segmenting medical images. In this study, we present a comprehensive review of the various deep learning-based approaches for medical image segmentation and provide a detailed analysis of their contributions to the domain. These methods can be broadly categorized into five groups: CNN-based methods, Transformer-based methods, Mamba-based methods, semi-supervised learning methods, and weakly supervised learning methods. Convolutional Neural Networks (CNNs), with their efficient feature self-learning, have driven major advances in medical image segmentation. Subsequently, Transformers, leveraging self-attention mechanisms, have achieved performance on par with or surpassing Convolutional Neural Networks. Mamba-based methods, as a novel selective state-space model, are emerging as a promising direction. Furthermore, due to the limited availability of annotated medical images, research in weakly supervised and semi-supervised learning continues to evolve. This review covers common evaluation methods, datasets, and deep learning applications in diagnosing and treating skin lesions, hippocampus, tumors, and polyps. 
Finally, we identify key challenges such as limited data, diverse modalities, noise, and clinical applicability, and propose future research in zero-shot segmentation, transfer learning, and multi-modal techniques to advance the development of medical image segmentation technology.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128740"},"PeriodicalIF":5.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-attribute balanced dataset generation framework AutoSyn and KinFace Channel-Spatial Feature Extractor for kinship recognition","authors":"Jia-Xuan Jiang , Hongsheng Jing , Ling Zhou , Yuee Li , Zhong Wang","doi":"10.1016/j.neucom.2024.128750","DOIUrl":"10.1016/j.neucom.2024.128750","url":null,"abstract":"<div><div>In the field of kinship verification, facial recognition technology is becoming increasingly vital due to privacy concerns, ethical disputes, and the high costs associated with DNA testing. We have developed a novel method, the AutoSyn framework, to synthesize facial images and enhance kinship image datasets, effectively addressing the challenges of scale and quality in existing datasets. By employing a strategy that mixes ages and genders in the synthesized images, we minimize the impact of these factors on kinship recognition tasks. We have enhanced the original KinFaceW-I dataset by integrating ten distinct styles, including diverse combinations of gender, ethnicity, and age. This enrichment significantly improves both the quality and quantity of the images. Furthermore, this paper introduces an efficient feature extractor for kinship tasks, KinFace-CSFE, within a siamese neural network framework. This model not only utilizes meticulously designed channel feature extraction but also incorporates mixed kernel size spatial attention mechanisms to better focus on local features. We have also integrated YOCO data augmentation techniques to simulate complex imaging conditions, enhancing the model’s robustness and accuracy. The effectiveness of these innovations has been validated through experiments on the KinFaceW-I, KinFaceW-II, and synthesized Syn-KinFaceW-I datasets, achieving accuracy rates of 82.7%, 94.1%, and 83.2% respectively. 
These results significantly surpass both traditional models and current advanced models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128750"},"PeriodicalIF":5.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D skeleton aware driver behavior recognition framework for autonomous driving system","authors":"Rongtian Huo , Junkang Chen , Ye Zhang , Qing Gao","doi":"10.1016/j.neucom.2024.128743","DOIUrl":"10.1016/j.neucom.2024.128743","url":null,"abstract":"<div><div>Recognizing the driver’s behaviors inside an autonomous vehicle supports emergency handling in autonomous driving and is crucial for ensuring the driver’s safety. Driver behavior recognition is a challenging task due to factors such as variation, diversity, complexity, and strong interference in behaviors. In this paper, to support applications in autonomous driving scenes, a novel 3D skeleton-aware behavior recognition framework is proposed to recognize various driver behaviors in autonomous driving systems. First, a 3D human pose estimation network (Pose-GTFNet) with a temporal Transformer and a spatial graph convolutional network (GCN) is designed to infer 3D human poses from 2D pose sequences. Second, based on the obtained 3D human pose sequences, a behavior recognition network (Beh-MSFNet) with multi-skeleton feature fusion is designed to recognize driver behaviors. In the experiments, Pose-GTFNet and Beh-MSFNet outperform most state-of-the-art (SOTA) methods on the Human3.6M human pose dataset and the JHMDB and SHREC action recognition datasets, respectively. 
In addition, the proposed driver behavior recognition framework can achieve SOTA performance on the Drive&Act and Driver-Skeleton driver behavior datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128743"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"H3NI: Non-target-specific node injection attacks on hypergraph neural networks via genetic algorithm","authors":"Heyuan Shi , Binqi Zeng , Ruishi Yu , Yulin Yang , Zijian Zouxia , Chao Hu , Ronghua Shi","doi":"10.1016/j.neucom.2024.128746","DOIUrl":"10.1016/j.neucom.2024.128746","url":null,"abstract":"<div><div>Node injection attacks are widely used against graph neural networks (GNNs), misleading them by injecting nodes. Although hypergraph neural networks (HNNs) are an extension of GNNs, node injection attacks on HNNs have not yet been studied. Since each edge of a hypergraph can connect more than two nodes, existing node injection methods designed for GNNs cannot effectively select the hyperedges connected to the injected nodes when applied to hypergraphs. In this paper, we propose a <u>H</u>ypergraph <u>N</u>eural <u>N</u>etwork <u>N</u>ode <u>I</u>njection attack method called H3NI, which utilizes a genetic algorithm and a predefined budget model to implement the first black-box node injection framework designed for attacking HNNs. We conducted experiments on the Cora co-authorship and co-citation datasets. 
Experimental results show the effectiveness and superior performance of H3NI in attacking HNNs, which reduces the model accuracy by 18%–20% within 5% of the total injected nodes and achieves a 2-4X improvement compared to existing gradient-based node injection methods.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128746"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
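The genetic-algorithm side of H3NI can be sketched as a generic search over budget-constrained hyperedge selections (a skeleton under our own assumptions — in the paper, fitness would be the victim HNN's accuracy drop on queried predictions, and the exact encoding and operators are not given in this abstract):

```python
import random

def genetic_search(n_hyperedges, budget, fitness, pop_size=20,
                   generations=50, mut_rate=0.1, seed=0):
    """Each candidate is a sorted tuple of `budget` distinct hyperedge
    indices that the injected node joins; evolve the population toward
    higher `fitness` via truncation selection plus point mutation."""
    rng = random.Random(seed)

    def random_candidate():
        return tuple(sorted(rng.sample(range(n_hyperedges), budget)))

    def mutate(cand):
        cand = list(cand)
        if rng.random() < mut_rate:
            # swap one chosen hyperedge for an unused one (budget preserved)
            i = rng.randrange(len(cand))
            cand[i] = rng.choice([e for e in range(n_hyperedges) if e not in cand])
        return tuple(sorted(cand))

    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = [mutate(rng.choice(parents)) for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Because the fittest half survives each generation, the best fitness found never decreases, which makes the search well-suited to the black-box setting where only model outputs are observable.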
{"title":"Learning accurate neighborhood- and self-information for higher-order relation prediction in Heterogeneous Information Networks","authors":"Jie Li , Xuan Guo , Pengfei Jiao , Wenjun Wang","doi":"10.1016/j.neucom.2024.128739","DOIUrl":"10.1016/j.neucom.2024.128739","url":null,"abstract":"<div><div>Heterogeneous Information Networks (HINs) are commonly employed to model complex real-world scenarios with diverse node and edge types. However, due to constraints in data collection and processing, constructed networks often lack certain relations. Consequently, various methods have emerged, particularly recently, leveraging heterogeneous graph neural networks (HGNNs) to predict missing relations. Nevertheless, these methods primarily focus on pairwise relations between two nodes. Real-world interactions, however, often involve multiple nodes and diverse types, extending beyond simple pairwise relations. For instance, academic collaboration networks may entail interactions among authors, papers, and conferences simultaneously. Despite their prevalence, higher-order relations are often overlooked. While HGNNs are effective at learning network structures, they may suffer from over-smoothing, resulting in similar representations for nodes and their neighbors. The learned inaccurate proximity among nodes impedes the discernment of higher-order relations. Furthermore, observed edges among a target group of nodes can provide valuable evidence for predicting higher-order relations. To address these challenges, we propose a novel model called Accurate Neighborhood- and Self-information Enhanced Heterogeneous Graph Neural Network (ANSE-HGN). Building upon HGNNs to encode network structure and attributes, we introduce a relation-based neighborhood encoder to capture information within multi-hop neighborhoods in heterogeneous higher-order relations. This enables the calculation of accurate proximity among target groups of nodes, thereby enhancing prediction accuracy. 
Additionally, we leverage self-information from observed higher-order relations as an auxiliary loss to reinforce the learning process. Extensive experiments on four real-world datasets demonstrate the superiority of our proposed method in higher-order relation prediction tasks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"613 ","pages":"Article 128739"},"PeriodicalIF":5.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}