MediAlbertina: An European Portuguese medical language model.

IF 7.0 | CAS Zone 2 (Medicine) | JCR Q1 (BIOLOGY)
Computers in Biology and Medicine. Pub Date: 2024-11-01 (Epub: 2024-10-02). DOI: 10.1016/j.compbiomed.2024.109233
Miguel Nunes, João Boné, João C Ferreira, Pedro Chaves, Luis B Elvas
{"title":"MediAlbertina: An European Portuguese medical language model.","authors":"Miguel Nunes, João Boné, João C Ferreira, Pedro Chaves, Luis B Elvas","doi":"10.1016/j.compbiomed.2024.109233","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Patient medical information often exists in unstructured text containing abbreviations and acronyms deemed essential to conserve time and space but posing challenges for automated interpretation. Leveraging the efficacy of Transformers in natural language processing, our objective was to use the knowledge acquired by a language model and continue its pre-training to develop an European Portuguese (PT-PT) healthcare-domain language model.</p><p><strong>Methods: </strong>After carrying out a filtering process, Albertina PT-PT 900M was selected as our base language model, and we continued its pre-training using more than 2.6 million electronic medical records from Portugal's largest public hospital. MediAlbertina 900M has been created through domain adaptation on this data using masked language modelling.</p><p><strong>Results: </strong>The comparison with our baseline was made through the usage of both perplexity, which decreased from about 20 to 1.6 values, and the fine-tuning and evaluation of information extraction models such as Named Entity Recognition and Assertion Status. MediAlbertina PT-PT outperformed Albertina PT-PT in both tasks by 4-6% on recall and f1-score.</p><p><strong>Conclusions: </strong>This study contributes with the first publicly available medical language model trained with PT-PT data. It underscores the efficacy of domain adaptation and offers a contribution to the scientific community in overcoming obstacles of non-English languages. With MediAlbertina, further steps can be taken to assist physicians, in creating decision support systems or building medical timelines in order to perform profiling, by fine-tuning MediAlbertina for PT- PT medical tasks.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"182 ","pages":"109233"},"PeriodicalIF":7.0000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in biology and medicine","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.compbiomed.2024.109233","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/10/2 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Background: Patient medical information often exists in unstructured text containing abbreviations and acronyms that conserve time and space but pose challenges for automated interpretation. Leveraging the efficacy of Transformers in natural language processing, our objective was to take the knowledge already acquired by a language model and continue its pre-training to develop a European Portuguese (PT-PT) healthcare-domain language model.

Methods: After a filtering process, Albertina PT-PT 900M was selected as the base language model, and its pre-training was continued on more than 2.6 million electronic medical records from Portugal's largest public hospital. MediAlbertina 900M was created through domain adaptation on these data using masked language modelling.
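
The domain-adaptation step described here can be reproduced in outline with the Hugging Face `transformers` library. The following is a minimal sketch, not the authors' actual pipeline: the hub id `PORTULAN/albertina-ptpt`, the corpus file `emr_ptpt.txt`, and the hyperparameters are illustrative assumptions.

```python
# Minimal continued pre-training (domain adaptation) sketch via masked
# language modelling. Hub id, data file, and hyperparameters are assumed.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "PORTULAN/albertina-ptpt"  # assumed id for Albertina PT-PT 900M
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical de-identified EMR corpus, one record per line.
records = load_dataset("text", data_files={"train": "emr_ptpt.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = records.map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medialbertina-900m",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()  # continued pre-training on the clinical corpus
```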

Results: Comparison with the baseline was made using both perplexity, which decreased from about 20 to 1.6, and the fine-tuning and evaluation of information extraction models for Named Entity Recognition and Assertion Status. MediAlbertina PT-PT outperformed Albertina PT-PT on both tasks by 4-6% in recall and F1-score.
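
Since these are masked language models, perplexity here is naturally computed from the MLM loss on held-out clinical text, i.e. exp of the mean cross-entropy over masked positions. A minimal sketch of that comparison under this assumption; the model ids and the sample sentence are placeholders, not the paper's evaluation set:

```python
# Perplexity sketch for an MLM: exp(mean masked-token cross-entropy)
# over held-out text. Model ids and sample text are placeholders.
import math

import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

def mlm_perplexity(model_id: str, texts: list[str]) -> float:
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id).eval()
    collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tok(text, truncation=True, max_length=512)
            batch = collator([enc])      # randomly masks 15% of the tokens
            loss = model(**batch).loss   # cross-entropy on masked tokens only
            losses.append(loss.item())
    return math.exp(sum(losses) / len(losses))

held_out = ["O doente apresenta dispneia e tosse produtiva."]  # placeholder
for mid in ["PORTULAN/albertina-ptpt", "medialbertina-900m"]:  # assumed ids
    print(mid, mlm_perplexity(mid, held_out))
```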

Conclusions: This study contributes with the first publicly available medical language model trained with PT-PT data. It underscores the efficacy of domain adaptation and offers a contribution to the scientific community in overcoming obstacles of non-English languages. With MediAlbertina, further steps can be taken to assist physicians, in creating decision support systems or building medical timelines in order to perform profiling, by fine-tuning MediAlbertina for PT- PT medical tasks.
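
As a pointer in that direction, here is a minimal sketch of fine-tuning the released encoder for a PT-PT medical NER task with the Hugging Face `Trainer`. The checkpoint id, the label scheme, and the two toy sentences are illustrative assumptions, not the authors' published configuration:

```python
# Sketch of fine-tuning MediAlbertina for token classification (NER).
# Checkpoint id, label scheme, and toy data are assumptions.
from datasets import Dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

model_id = "medialbertina-900m"  # placeholder for the published checkpoint
labels = ["O", "B-DIAG", "I-DIAG", "B-MED", "I-MED"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

# Two toy annotated sentences, just so the sketch runs end to end.
raw = Dataset.from_dict({
    "tokens": [["Doente", "com", "pneumonia"], ["Prescrito", "paracetamol"]],
    "ner_tags": [[0, 0, 1], [0, 3]],
})

def encode(batch):
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, row = None, []
        for wid in enc.word_ids(batch_index=i):
            # Label only the first sub-token of each word; -100 is ignored.
            row.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        enc["labels"].append(row)
    return enc

train = raw.map(encode, batched=True, remove_columns=raw.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="medialbertina-ner", num_train_epochs=3),
    data_collator=DataCollatorForTokenClassification(tokenizer),
    train_dataset=train,
).train()
```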

Source journal
Computers in Biology and Medicine (Engineering & Technology - Engineering: Biomedical)
CiteScore: 11.70
Self-citation rate: 10.40%
Annual publications: 1086
Review time: 74 days
Journal description: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.