Visual Computing for Industry, Biomedicine and Art: Latest Publications

PlaqueNet: deep learning enabled coronary artery plaque segmentation from coronary computed tomography angiography.
IF 3.2 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-03-22 · DOI: 10.1186/s42492-024-00157-8
Linyuan Wang, Xiaofeng Zhang, Congyu Tian, Shu Chen, Yongzhi Deng, Xiangyun Liao, Qiong Wang, Weixin Si
Cardiovascular disease, primarily caused by atherosclerotic plaque formation, is a significant health concern. Early detection of these plaques is crucial for targeted therapy and for reducing the risk of cardiovascular events. This study presents PlaqueNet, a solution for segmenting coronary artery plaques from coronary computed tomography angiography (CCTA) images. For feature extraction, an advanced residual net module was used that integrates a depthwise residual optimization module into the network branches, enhancing feature extraction, avoiding information loss, and mitigating gradient issues during training. To improve segmentation accuracy, a depthwise atrous spatial pyramid pooling module based on bicubic efficient channel attention (DASPP-BICECA) is introduced. The BICECA component amplifies local feature sensitivity, whereas the DASPP component expands the network's information-gathering scope, yielding higher segmentation accuracy. Additionally, BINet, a module for joint network loss evaluation, is proposed; it optimizes the segmentation model without affecting the segmentation results and, combined with the DASPP-BICECA module, improves overall efficiency. The proposed CCTA segmentation algorithm outperformed the three comparative algorithms, achieving an intersection over union (IoU) of 87.37%, Dice of 93.26%, accuracy of 93.12%, mean IoU of 93.68%, mean Dice of 96.63%, and mean pixel accuracy of 96.55%.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349722/pdf/
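The IoU and Dice figures reported above follow the standard overlap definitions for binary segmentation masks; a minimal sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Compute intersection over union and Dice for binary masks.

    pred, gt: numpy arrays of the same shape, interpreted as boolean masks.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 4x4 example: the prediction covers 2 of the 3 ground-truth pixels.
pred = np.zeros((4, 4), bool); pred[0, :2] = True   # 2 pixels
gt = np.zeros((4, 4), bool); gt[0, :3] = True       # 3 pixels
iou, dice = iou_and_dice(pred, gt)
print(round(iou, 3), round(dice, 3))  # 0.667 0.8
```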
Citations: 0
Flipover outperforms dropout in deep learning
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-02-22 · DOI: 10.1186/s42492-024-00153-y
Yuxuan Liang, Chuang Niu, Pingkun Yan, Ge Wang
Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which involves randomly removing certain neurons and their connections, flipover randomly selects neurons and reverts their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
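The abstract's description is concrete enough to sketch: instead of zeroing selected units as dropout does, flipover multiplies them by a negative constant during training. A hypothetical numpy sketch of that idea (the rate and multiplier values are assumptions, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def flipover(x, rate=0.1, multiplier=-1.0, training=True):
    """Flipover as described in the abstract: with probability `rate`,
    a unit's activation is multiplied by a negative constant instead of
    being zeroed out as in dropout.  Sketch, not the authors' code."""
    if not training:
        return x
    mask = rng.random(x.shape) < rate        # units selected for flipping
    return np.where(mask, multiplier * x, x)

x = np.ones((2, 5))
y = flipover(x, rate=0.5)
# Selected entries are flipped to -1; the rest stay at +1.
print(set(np.unique(y)))
```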
Citations: 0
Correction: Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images.
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-02-09 · DOI: 10.1186/s42492-024-00156-9
Jaouad Tagnamas, Hiba Ramadan, Ali Yahyaouy, Hamid Tairi
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10858012/pdf/
Citations: 0
Convolutional neural network based data interpretable framework for Alzheimer's treatment planning.
IF 3.2 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-02-01 · DOI: 10.1186/s42492-024-00154-x
Sazia Parvin, Sonia Farhana Nimmy, Md Sarwar Kamal
Alzheimer's disease (AD) is a neurological disorder that predominantly affects the brain. In the coming years, its prevalence is expected to grow rapidly, while progress in diagnostic techniques remains limited. Various machine learning (ML) and artificial intelligence (AI) algorithms have been employed to detect AD using single-modality data. However, recent developments in ML have enabled these methods to be applied to multiple data sources and input modalities for AD prediction. In this study, we developed a framework that utilizes multimodal data (tabular data, magnetic resonance imaging (MRI) images, and genetic information) to classify AD. As part of the pre-processing phase, we generated a knowledge graph from the tabular data and MRI images, employing graph neural networks for knowledge graph creation and a region-based convolutional neural network approach for image-to-knowledge-graph generation. Additionally, we integrated various explainable AI (XAI) techniques to interpret the prediction outcomes derived from the multimodal data: layer-wise relevance propagation was used to explain the layer-wise outcomes in the MRI images, and submodular pick local interpretable model-agnostic explanations were incorporated to interpret the decision-making process based on the tabular data. Because genetic expression values play a crucial role in AD analysis, we used a graphical gene tree to identify genes associated with the disease. Moreover, a dashboard was designed to display the XAI outcomes, enabling experts and medical professionals to easily comprehend the prediction results.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10830981/pdf/
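The pre-processing step of turning tabular records into a knowledge graph can be illustrated minimally as triple extraction; the patient fields below are invented for illustration, and the real pipeline uses graph neural networks rather than this direct mapping:

```python
# Hypothetical sketch: tabular patient records become knowledge-graph
# triples of the form (patient, attribute, value).
records = [
    {"id": "p1", "age_group": "70-79", "apoe4": "carrier", "mmse": "low"},
    {"id": "p2", "age_group": "60-69", "apoe4": "non-carrier", "mmse": "normal"},
]

def tabular_to_triples(rows):
    triples = []
    for row in rows:
        subject = row["id"]
        for attr, value in row.items():
            if attr != "id":                  # every non-key column is an edge
                triples.append((subject, attr, value))
    return triples

kg = tabular_to_triples(records)
print(len(kg))  # 6 triples: 3 attributes per patient
```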
Citations: 0
Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images.
IF 3.2 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-01-26 · DOI: 10.1186/s42492-024-00155-w
Jaouad Tagnamas, Hiba Ramadan, Ali Yahyaouy, Hamid Tairi
Accurate segmentation of breast ultrasound (BUS) images is crucial for the early diagnosis and treatment of breast cancer. Segmenting lesions in BUS images remains challenging because convolutional neural networks (CNNs) are limited in capturing long-range dependencies and global context, and existing methods relying solely on CNNs have struggled with these issues. Recently, ConvNeXts have emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including the analysis of medical images. In this paper, we propose CS-Net, a novel breast lesion segmentation network that combines the strengths of ConvNeXt and Swin Transformer models to enhance the U-Net architecture. The network operates on BUS images and performs segmentation end to end. To address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and Swin Transformer blocks, and we add a Coordinate Attention Module to better capture spatial and channel attention in the feature maps. We also design an Encoder-Decoder Features Fusion Module that fuses low-level features from the encoder with high-level semantic features from the decoder during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10811315/pdf/
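The Coordinate Attention Module referenced above comes from prior work: attention weights are derived from direction-aware pooling along the height and width axes, then used to reweight the feature map. A heavily simplified numpy sketch of that core idea, with the module's learned 1x1 transforms replaced by identity (an assumption for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Simplified coordinate attention.  x: (C, H, W) feature map.
    Pools along each spatial axis separately, so the attention weights
    retain positional information in the other axis."""
    pool_h = x.mean(axis=2)            # (C, H): pooled along width
    pool_w = x.mean(axis=1)            # (C, W): pooled along height
    a_h = sigmoid(pool_h)[:, :, None]  # (C, H, 1) height-wise weights
    a_w = sigmoid(pool_w)[:, None, :]  # (C, 1, W) width-wise weights
    return x * a_h * a_w               # reweight every position

x = np.random.default_rng(1).normal(size=(4, 8, 8))
y = coordinate_attention(x)
print(y.shape)  # (4, 8, 8)
```

Since both weight factors lie in (0, 1), the module attenuates rather than amplifies activations in this stripped-down form; the real module learns the transforms that decide where attenuation happens.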
Citations: 0
CT-based radiomics: predicting early outcomes after percutaneous transluminal renal angioplasty in patients with severe atherosclerotic renal artery stenosis.
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2024-01-12 · DOI: 10.1186/s42492-023-00152-5
Jia Fu, Mengjie Fang, Zhiyong Lin, Jianxing Qiu, Min Yang, Jie Tian, Di Dong, Yinghua Zou
This study comprehensively evaluated non-contrast computed tomography (CT)-based radiomics for predicting early outcomes in patients with severe atherosclerotic renal artery stenosis (ARAS) after percutaneous transluminal renal angioplasty (PTRA). A total of 52 patients were retrospectively recruited, and their clinical characteristics and pretreatment CT images were collected. During a median follow-up of 3.7 months, 18 patients were confirmed to have benefited from the treatment, defined as a 20% improvement from baseline in the estimated glomerular filtration rate. A deep learning network trained via self-supervised learning was used to enhance the imaging phenotype characteristics. Radiomics features, comprising 116 handcrafted features and 78 deep learning features, were extracted from the affected renal and perirenal adipose regions. More features from the latter correlated with early outcomes in univariate analysis, as visualized in radiomics heatmaps and volcano plots. After feature selection with consensus clustering and the least absolute shrinkage and selection operator method, five machine learning models were evaluated: logistic regression yielded the highest leave-one-out cross-validation accuracy of 0.780 (95% CI: 0.660-0.880) for the renal signature, while a support vector machine achieved 0.865 (95% CI: 0.769-0.942) for the perirenal adipose signature. SHapley Additive exPlanations was used to visually interpret the prediction mechanism; a histogram feature and a deep learning feature were identified as the most influential factors for the renal and perirenal adipose signatures, respectively. Multivariate analysis revealed that both signatures were independent predictive factors, and their combination achieved an area under the receiver operating characteristic curve of 0.888 (95% CI: 0.784-0.992), indicating that the imaging phenotypes from the two regions complement each other. In conclusion, non-contrast CT-based radiomics can be leveraged to predict the early outcomes of PTRA, helping to identify patients with ARAS suitable for this treatment, with perirenal adipose tissue providing added predictive value.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10784441/pdf/
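Leave-one-out cross-validation, the evaluation scheme used for both signatures, holds out each patient once and predicts them with a model fit on the rest. A self-contained numpy sketch on synthetic data, with a nearest-class-centroid rule standing in for the paper's logistic regression and SVM (a deliberate simplification):

```python
import numpy as np

rng = np.random.default_rng(42)

def loo_accuracy(X, y):
    """Leave-one-out cross-validation accuracy with a nearest-centroid
    classifier: each sample is predicted by the class whose mean (computed
    without that sample) is closest."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / n

# Two well-separated synthetic classes: LOO accuracy should be near 1.0.
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(5, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
print(loo_accuracy(X, y))
```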
Citations: 0
Adaptive feature extraction method for capsule endoscopy images
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2023-12-11 · DOI: 10.1186/s42492-023-00151-6
Dingchang Wu, Yinghui Wang, Haomiao Ma, Lingyu Ai, Jinlong Yang, Shaojie Zhang, Wei Li
The traditional oriented FAST and rotated BRIEF (ORB) feature-extraction method detects image features with a fixed threshold, and ORB descriptors do not distinguish features well in capsule endoscopy images. Therefore, a new feature detector with a new threshold-setting method, called adaptive threshold FAST and FREAK in capsule endoscopy images (AFFCEI), is proposed. The method first constructs an image pyramid and then calculates per-pixel thresholds from the gray-value contrast of all pixels in the local neighborhood, achieving adaptive feature extraction in each pyramid layer. The features are then expressed by the FREAK descriptor, which enhances the discrimination of features extracted from stomach images. Finally, refined matching is obtained by applying the grid-based motion statistics algorithm to the Hamming-distance matching results, with mismatches rejected by the RANSAC algorithm. Compared with the ASIFT method, which previously had the best performance, AFFCEI reduced the average running time to four-fifths that of ASIFT and improved the average matching score by 5% when tracking features in a moving capsule endoscope.
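The adaptive-threshold idea, a per-pixel FAST threshold derived from local gray-value contrast rather than one global constant, can be sketched as follows; the abstract does not give the exact formula, so the contrast measure and scale factor here are assumptions:

```python
import numpy as np

def adaptive_thresholds(img, radius=3, scale=0.5):
    """Per-pixel detection thresholds from local gray-value contrast:
    high thresholds near strong edges, low thresholds in flat regions.
    Hypothetical sketch of the AFFCEI thresholding idea."""
    h, w = img.shape
    t = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            t[y, x] = scale * (patch.max() - patch.min())  # local contrast
    return t

img = np.zeros((16, 16), dtype=np.uint8)
img[8:, :] = 200                 # a horizontal step edge
t = adaptive_thresholds(img)
# Thresholds are large near the edge and zero in the flat corner.
print(t[0, 0], t[8, 0])
```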
Citations: 0
Comprehensive integrated analysis of MR and DCE-MR radiomics models for prognostic prediction in nasopharyngeal carcinoma.
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2023-12-01 · DOI: 10.1186/s42492-023-00149-0
Hailin Li, Weiyuan Huang, Siwen Wang, Priya S Balasubramanian, Gang Wu, Mengjie Fang, Xuebin Xie, Jie Zhang, Di Dong, Jie Tian, Feng Chen
Although prognostic prediction of nasopharyngeal carcinoma (NPC) remains a pivotal research area, the role of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging has been less explored. This study investigated the role of DCE-MR in predicting progression-free survival (PFS) in patients with NPC using magnetic resonance (MR)- and DCE-MR-based radiomics models. A total of 434 patients with two MR scanning sequences were included. The MR- and DCE-MR-based radiomics models were developed from 289 patients with only MR scanning sequences and 145 patients with four additional DCE-MR pharmacokinetic parameters (volume fraction of extravascular extracellular space (v_e), volume fraction of plasma space (v_p), volume transfer constant (K^trans), and reverse reflux rate constant (k_ep)); a combined model integrating MR and DCE-MR was also constructed. The radiomics models were built using correlation analysis, least absolute shrinkage and selection operator regression, and multivariate Cox proportional hazards regression, and their prognostic performance was evaluated and compared using the net reclassification index and the C-index. Kaplan-Meier survival curve analysis was performed to investigate the models' ability to stratify risk in patients with NPC. Integrating MR and DCE-MR radiomic features significantly enhanced prognostic performance compared with the MR- and DCE-MR-based models, evidenced by a test-set C-index of 0.808 vs 0.729 and 0.731, respectively. The combined radiomics model improved net reclassification by 22.9%-52.6% and significantly stratified the risk levels of patients with NPC (p = 0.036). Furthermore, the MR-based radiomic feature maps reflected the underlying angiogenesis information in NPC similarly to the DCE-MR pharmacokinetic parameters. Compared with conventional MR-based radiomics models, the combined model delivered more accurate prognostic predictions and provided more clinical benefit in quantifying and monitoring phenotypic changes associated with NPC prognosis.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10689317/pdf/
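The C-index compared above is Harrell's concordance index: among comparable patient pairs, the fraction in which the higher predicted risk corresponds to the shorter observed survival time. A self-contained sketch of its computation (toy data, ties scored as 0.5):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when i had the event and time[i] < time[j];
    it is concordant when risk[i] > risk[j]."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 1, 1])
risk = np.array([0.9, 0.7, 0.8, 0.1])  # one discordant pair (0.7 vs 0.8)
print(c_index(risk, time, event))      # 5 of 6 comparable pairs concordant
```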
Citations: 0
Local imperceptible adversarial attacks against human pose estimation networks.
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2023-11-21 · DOI: 10.1186/s42492-023-00148-1
Fuchang Liu, Shen Zhang, Hao Wang, Caiping Yan, Yongwei Miao
Deep neural networks are vulnerable to attacks from adversarial inputs. Corresponding attack research on human pose estimation (HPE), particularly body joint detection, has been largely unexplored: transferring classification-based attack methods to body joint regression tasks is not straightforward, and attack effectiveness and imperceptibility contradict each other. To solve these issues, we propose local imperceptible attacks on HPE networks. In particular, we reformulate imperceptible attacks on body joint regression as a constrained maximum-allowable attack and approximate the solution using iterative gradient-based strength refinement and greedy pixel selection. Our method crafts effective perceptual adversarial attacks that consider both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels: perturbing approximately 4% of the pixels suffices to attack HPE.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10661673/pdf/
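The combination of gradient-based strength refinement and greedy pixel selection can be illustrated on a toy model; here a linear "joint regressor" y = w·x stands in for an HPE network (an assumption for illustration, not the paper's setup), and only the k pixels with the largest gradient magnitude are perturbed:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_attack(x, w, eps=0.1, k=4):
    """Sparse adversarial perturbation sketch: greedily select the k pixels
    with the largest gradient magnitude and take a maximum-allowable step
    of size eps on those pixels only, following the gradient sign."""
    grad = w                               # d(w.x)/dx for the linear model
    idx = np.argsort(np.abs(grad))[-k:]    # greedy pixel selection
    delta = np.zeros_like(x)
    delta[idx] = eps * np.sign(grad[idx])  # perturb only the chosen pixels
    return x + delta, idx

x = rng.normal(size=100)                   # a flattened "image"
w = rng.normal(size=100)
x_adv, idx = sparse_attack(x, w)
shift = w @ x_adv - w @ x                  # induced change in the prediction
print(len(idx), int((x_adv != x).sum()), shift > 0)
```

Only 4 of 100 pixels change, yet the prediction always shifts in the attack direction, which is the effectiveness-versus-sparsity trade-off the paper studies.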
Citations: 0
Reliable knowledge graph fact prediction via reinforcement learning.
IF 2.8 · CAS Tier 4 · Computer Science
Visual Computing for Industry, Biomedicine and Art · Pub Date: 2023-11-20 · DOI: 10.1186/s42492-023-00150-7
Fangfang Zhou, Jiapeng Mi, Beiwen Zhang, Jingcheng Shi, Ran Zhang, Xiaohui Chen, Ying Zhao, Jian Zhang
Knowledge graph (KG) fact prediction aims to complete a KG by determining the truthfulness of predicted triples. Reinforcement learning (RL)-based approaches have been widely used for fact prediction, but existing approaches largely suffer from unreliable calculations of rule confidences owing to the limited number of reasoning paths obtained, resulting in unreliable decisions on prediction triples. Hence, we propose a new RL-based approach named EvoPath. EvoPath features a new reward mechanism based on entity heterogeneity, which helps an agent obtain effective reasoning paths during random walks, and it incorporates a new postwalking mechanism that leverages easily overlooked but valuable reasoning paths during RL. Both mechanisms provide sufficient reasoning paths for reliable calculation of rule confidences, enabling EvoPath to make precise judgments about the truthfulness of prediction triples. Experiments demonstrate that EvoPath achieves more accurate fact predictions than existing approaches.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10657918/pdf/
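The rule confidences whose reliable calculation EvoPath targets are commonly estimated as the fraction of a rule's grounding paths whose predicted triple actually holds in the KG; with few paths, this estimate is noisy, which is the failure mode described above. A toy sketch (the example KG and rule are invented):

```python
# Toy KG as a set of (subject, relation, object) triples.
kg = {
    ("alice", "born_in", "paris"), ("paris", "capital_of", "france"),
    ("alice", "citizen_of", "france"),
    ("bob", "born_in", "berlin"), ("berlin", "capital_of", "germany"),
    # bob's citizenship is absent, so his path does not support the rule
}

def rule_confidence(kg, rel1, rel2, target):
    """Confidence of the rule rel1(x,y) & rel2(y,z) => target(x,z):
    supporting groundings divided by all groundings of the rule body."""
    support = total = 0
    for (x, r1, y) in kg:
        if r1 != rel1:
            continue
        for (y2, r2, z) in kg:
            if r2 == rel2 and y2 == y:
                total += 1
                support += (x, target, z) in kg
    return support / total if total else 0.0

print(rule_confidence(kg, "born_in", "capital_of", "citizen_of"))  # 0.5
```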
Citations: 0