Journal of Biomedical Informatics: Latest Articles

Promoting smartphone-based keratitis screening using meta-learning: A multicenter study
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104722
Objective: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit-lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, a widely available device, have recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs.

Methods: We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. Accuracy and the macro-average area under the curve (macro-AUC) were used to evaluate model performance.

Results: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models.

Conclusions: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
Citations: 0
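The cosine nearest-centroid classification at the core of CNCML can be illustrated with a short sketch: class prototypes are the mean embeddings of the few labeled support photographs, and each query is assigned to the class whose centroid is most similar under cosine similarity. The toy embeddings and class layout below are hypothetical stand-ins; in the paper the embeddings come from a network pre-trained on slit-lamp images.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # project embeddings onto the unit sphere so dot products become cosine similarities
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def class_centroids(support_emb, support_labels, n_classes):
    # one prototype per class: the mean embedding of its few labeled support photos
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def predict(query_emb, centroids):
    # assign each query to the class whose centroid is closest in cosine similarity
    sims = l2_normalize(query_emb) @ l2_normalize(centroids).T
    return sims.argmax(axis=1)

# toy 2-D "embeddings" for two hypothetical classes (e.g., keratitis vs. normal)
support = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
labels = np.array([0, 0, 1, 1])
centroids = class_centroids(support, labels, n_classes=2)
print(predict(np.array([[0.8, 0.2], [0.1, 0.7]]), centroids))  # → [0 1]
```

Because only centroids need to be computed from new data, such a classifier can adapt to a handful of smartphone photographs per class without retraining the embedding network, which is what makes the few-shot setting tractable.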
Integrating graph convolutional networks to enhance prompt learning for biomedical relation extraction
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104717
Background and Objective: Biomedical relation extraction aims to reveal the relations between entities in medical texts. Current relation extraction models mainly fine-tune pre-trained language models (PLMs) or add template-based prompt learning, which limits the models' ability to handle grammatical dependencies. Graph convolutional networks (GCNs) can play an important role in processing syntactic dependencies in biomedical texts.

Methods: We propose a biomedical relation extraction model that fuses GCN-enhanced prompt learning to address these limitations in handling syntactic dependencies. Specifically, the model combines prompt learning with GCNs by integrating the syntactic dependency information analyzed by the GCNs into the prompt learning model, and performs relation extraction by predicting the labels corresponding to [MASK] tokens.

Results: Our model achieved F1 scores of 85.57%, 80.15%, 95.10%, and 84.11% on the biomedical relation extraction datasets GAD, ChemProt, PGR, and DDI, respectively, outperforming several existing baseline models.

Conclusions: We propose enhancing prompt learning through GCNs, integrating syntactic information into biomedical relation extraction tasks. Experimental results show that the proposed method achieves excellent performance on the biomedical relation extraction task.
Citations: 0
MAFT-SO: A novel multi-atlas fusion template based on spatial overlap for ASD diagnosis
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104714
Autism spectrum disorder (ASD) is a common neurological condition. Early diagnosis and treatment are essential for enhancing the quality of life of individuals with ASD. However, most existing studies either focus solely on the brain networks of subjects within a single atlas or merely employ simple matrix concatenation to represent the fusion of multiple atlases. These approaches neglect the natural spatial overlap that exists between brain regions across atlases and do not fully capture the comprehensive information of brain regions under different atlases. To address this weakness, we propose a novel multi-atlas fusion template based on the spatial overlap degree of brain regions, which aims to obtain a comprehensive representation of brain networks. Specifically, we formally define a measurement of the spatial overlap among brain regions across different atlases, named the spatial overlap degree. We then fuse the multiple atlases to obtain the brain network of each subject based on spatial overlap, and a GCN performs the final classification. Experimental results on the Autism Brain Imaging Data Exchange (ABIDE) dataset demonstrate that the proposed method achieves an accuracy of 0.757. Overall, our method outperforms state-of-the-art methods in ASD/TC classification.
Citations: 0
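As a concrete illustration of measuring spatial overlap between brain regions from different atlases: one common choice is a Dice-style coefficient over the regions' binary voxel masks. The paper defines its own spatial overlap degree, so the function below is only a hypothetical instantiation of the idea, on toy 1-D masks rather than real 3-D parcellations.

```python
import numpy as np

def spatial_overlap(region_a, region_b):
    # Dice-style overlap of two binary voxel masks: 0 = disjoint, 1 = identical
    a = region_a.astype(bool)
    b = region_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 0.0 if denom == 0 else 2.0 * inter / denom

# toy 1-D "masks" for a region from atlas A and a region from atlas B
mask_a = np.array([1, 1, 1, 0, 0, 0])
mask_b = np.array([0, 1, 1, 1, 0, 0])
print(spatial_overlap(mask_a, mask_b))  # 2*2/(3+3) ≈ 0.667
```

A matrix of such pairwise overlap scores between all regions of two atlases is one natural way to weight how strongly corresponding regions should be fused when building a combined brain-network representation.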
A conditional multi-label model to improve prediction of a rare outcome: An illustration predicting autism diagnosis
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104711
Objective: This study aimed to develop a novel approach using routinely collected electronic health record (EHR) data to improve the prediction of a rare event. We illustrate this with an example of improving early prediction of an autism diagnosis, given its low prevalence, by leveraging correlations between autism and other neurodevelopmental conditions (NDCs).

Methods: We introduce a conditional multi-label model that merges conditional learning and multi-label methodologies. The conditional learning approach breaks a hard task into more manageable pieces at each stage, and the multi-label approach utilizes information from related neurodevelopmental conditions to learn predictive latent features. The study involved forecasting autism diagnosis by age 5.5 years using data from the first 18 months of life, and analyzing feature-importance correlations to explore alignment within the feature space across different conditions.

Results: Analyzing health records from 18,156 children, we generated a model that predicts a future autism diagnosis with moderate performance (AUROC = 0.76). The proposed conditional multi-label method significantly improves predictive performance, with an AUROC of 0.80 (p < 0.001). Further examination shows that the conditional and multi-label approaches alone each provided only a marginal lift in model performance compared to a one-stage, one-label approach. We also demonstrate the generalizability and applicability of this method using simulated data with high correlation between feature vectors for different labels.

Conclusion: Our findings underscore the effectiveness of the developed conditional multi-label model for early prediction of an autism diagnosis. The study introduces a versatile strategy applicable to prediction tasks involving limited target populations that share underlying features or etiology with related groups.
Citations: 0
Assessing gait dysfunction severity in Parkinson's Disease using 2-Stream Spatial–Temporal Neural Network
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104679
Parkinson's Disease (PD), a neurodegenerative disorder, significantly impacts the quality of life of millions of people worldwide. PD primarily affects dopaminergic neurons in the brain's substantia nigra, resulting in dopamine deficiency and gait impairments such as bradykinesia and rigidity. Currently, several well-established tools, such as the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and the Hoehn and Yahr (H&Y) Scale, are used for evaluating gait dysfunction in PD. While insightful, these methods are subjective, time-consuming, and often ineffective for early-stage diagnosis. Other methods that use specialized sensors and equipment to measure movement disorders are cumbersome and expensive, limiting their accessibility. This study introduces a hierarchical approach to evaluating gait dysfunction in PD through videos. The novel 2-Stream Spatial–Temporal Neural Network (2S-STNN) leverages spatial–temporal features from the skeleton and silhouette streams for PD classification. This approach achieves an accuracy of 89% and outperforms other state-of-the-art models. The study also employs saliency values to highlight critical body regions that significantly influence model decisions and are severely affected by the disease. For a more detailed analysis, the study investigates 21 specific gait attributes for a nuanced quantification of gait disorders. Parameters such as walking pace, step length, and neck forward angle are found to be strongly correlated with PD gait severity categories. This approach offers a comprehensive and convenient solution for PD management in clinical settings, enabling patients to receive more precise evaluation and monitoring of their gait impairments.
Citations: 0
Interactive dual-stream contrastive learning for radiology report generation
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104718
Radiology report generation automates diagnostic narrative synthesis from medical imaging data. Current report generation methods primarily employ knowledge graphs for image enhancement, neglecting the interpretability and guiding function of the knowledge graphs themselves. Additionally, few approaches leverage the stable modal-alignment information from multimodal pre-trained models to facilitate the generation of radiology reports. We propose Terms-Guided Radiology Report Generation (TGR), a simple and practical model for generating reports guided primarily by anatomical terms. Specifically, we utilize a dual-stream visual feature extraction module, comprising a detail extraction module and a frozen multimodal pre-trained model, to separately extract visual detail features and semantic features. Furthermore, a Visual Enhancement Module (VEM) is proposed to further enrich the visual features, thereby facilitating the generation of a list of anatomical terms. We integrate anatomical terms with image features and engage in contrastive learning with frozen text embeddings, utilizing the stable feature space of these embeddings to further boost modal alignment. Our model also accepts manual input, enabling it to generate a list of organs for specifically focused abnormal areas or to produce more accurate single-sentence descriptions based on selected anatomical terms. Comprehensive experiments demonstrate the effectiveness of our method in report generation tasks: our TGR-S model reduces training parameters by 38.9% while performing comparably to current state-of-the-art models, and our TGR-B model exceeds the best baseline models across multiple metrics.
Citations: 0
SSGU-CD: A combined semantic and structural information graph U-shaped network for document-level Chemical-Disease interaction extraction
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104719
Document-level Chemical-Disease interaction extraction aims to infer the interaction relations between chemical entities and disease entities across multiple sentences. Compared with sentence-level relation extraction, document-level relation extraction can capture the associations between different entities throughout an entire document, which is more practical for biomedical text. However, current biomedical extraction methods mainly concentrate on sentence-level relation extraction, making it difficult to access the rich structural information contained in documents in practical application scenarios. We put forward SSGU-CD, a combined Semantic and Structural information Graph U-shaped network for document-level Chemical-Disease interaction extraction. The framework effectively stores document semantic and structural information as graphs and can fuse the original context information of documents. Within the framework, we propose a balanced combination of cross-entropy loss functions to facilitate collaborative optimization among models, with the aim of enhancing the ability to extract Chemical-Disease interaction relations. We evaluated SSGU-CD on the document-level relation extraction datasets CDR and BioRED, and the results demonstrate that the framework significantly improves extraction performance.
Citations: 0
MolCFL: A personalized and privacy-preserving drug discovery framework based on generative clustered federated learning
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104712
In today's era of rapid development of large models, the traditional drug development process is undergoing a profound transformation. The vast demand for data and the consumption of computational resources are making independent drug discovery increasingly difficult. By integrating federated learning into drug discovery, we have found a solution that both protects privacy and shares computational power. However, differences in the data held by various pharmaceutical institutions and the diversity of drug design objectives exacerbate data heterogeneity, so traditional federated learning consensus models cannot meet the personalized needs of all parties. In this study, we introduce and evaluate an innovative drug discovery framework, MolCFL, which utilizes a multi-layer perceptron (MLP) as the generator and a graph convolutional network (GCN) as the discriminator in a generative adversarial network (GAN). By learning the graph structure of molecules, it generates new molecules in a highly personalized manner and then optimizes the learning process through clustered federated learning, grouping compound data with high similarity. MolCFL not only enhances the model's ability to protect privacy but also significantly improves the efficiency and personalization of molecular design. Compared to traditional models, MolCFL exhibits superior performance when handling non-independently and identically distributed data. Experimental results show that the framework demonstrates outstanding performance on two benchmark datasets, with the generated new molecules achieving over 90% in Uniqueness and close to 100% in Novelty. MolCFL not only improves the quality and efficiency of drug molecule design but also, through its highly customized clustered federated learning environment, promotes collaboration and specialization in the drug discovery process while ensuring data privacy. These features make MolCFL a powerful tool for addressing the challenges faced in modern drug research and development.
Citations: 0
Advancing Chinese biomedical text mining with community challenges
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-09-01 · DOI: 10.1016/j.jbi.2024.104716
Objective: This study aims to review recent advances in community challenges for biomedical text mining in China.

Methods: We collected information on evaluation tasks released in community challenges for biomedical text mining, including task description, dataset description, data source, task type, and related links. A systematic summary and comparative analysis were conducted across various biomedical natural language processing tasks, such as named entity recognition, entity normalization, attribute extraction, relation extraction, event extraction, text classification, text similarity, knowledge graph construction, question answering, text generation, and large language model evaluation.

Results: We identified 39 evaluation tasks from 6 community challenges spanning 2017 to 2023. Our analysis revealed the diverse range of evaluation task types and data sources in biomedical text mining. We explored the potential clinical applications of these community challenge tasks from a translational biomedical informatics perspective, compared them with their English counterparts, and discussed the contributions, limitations, lessons, and guidelines of these community challenges, while highlighting future directions in the era of large language models.

Conclusion: Community challenge evaluation competitions have played a crucial role in promoting technology innovation and fostering interdisciplinary collaboration in the field of biomedical text mining. These challenges provide valuable platforms for researchers to develop state-of-the-art solutions.
Citations: 0
BGformer: An improved Informer model to enhance blood glucose prediction
IF 4.0 · CAS Q2 (Medicine)
Journal of Biomedical Informatics · Pub Date: 2024-08-26 · DOI: 10.1016/j.jbi.2024.104715
Accurately predicting blood glucose levels is crucial in diabetes management to mitigate patients' risk of complications. However, blood glucose values are unstable, and existing prediction methods often struggle to capture their volatile nature, leading to inaccurate trend forecasts. To address these challenges, we propose BGformer, a novel blood glucose level prediction model based on the Informer architecture. Our model introduces a feature enhancement module and a microscale overlapping concerns mechanism. The feature enhancement module integrates periodic and trend feature extractors, enhancing the model's ability to capture relevant information from the data; by extending the feature extraction capacity for time series data, it provides richer feature representations for analysis. Meanwhile, the microscale overlapping concerns mechanism adopts a window-based strategy, computing attention scores only within specific windows. This approach reduces computational complexity while enhancing the model's capacity to capture local temporal dependencies. Furthermore, we introduce a dual attention enhancement module to augment the model's expressive capability. In prediction experiments on blood glucose values from sixteen diabetic patients, our model outperformed eight benchmark models on both MAE and RMSE for 60-minute and 90-minute horizons. The proposed scheme significantly improves the model's dependency-capturing ability, resulting in more accurate blood glucose level predictions.
Citations: 0
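The window-based attention strategy described above can be sketched in a few lines of NumPy. This is a minimal, non-overlapping-window version intended only to show why restricting scores to windows reduces cost; the paper's mechanism uses overlapping windows and learned query/key/value projections, which are omitted here.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def windowed_self_attention(x, window=4):
    # attention scores are computed only inside each window, cutting the cost
    # from O(L^2) to O(L * window) for a sequence of length L
    seq_len, dim = x.shape
    out = np.zeros_like(x)
    for start in range(0, seq_len, window):
        block = x[start:start + window]
        scores = block @ block.T / np.sqrt(dim)   # pairwise scores within the window
        out[start:start + window] = softmax(scores) @ block
    return out

# toy glucose-like sequence: 12 time steps, 8 features per step
rng = np.random.default_rng(0)
x = rng.standard_normal((12, 8))
print(windowed_self_attention(x, window=4).shape)  # → (12, 8)
```

Each output position is a convex combination of the inputs in its own window only, which is exactly the locality that helps a model focus on short-range glucose dynamics while keeping computation cheap.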