BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition

Elisa Terumi Rubel Schneider, João Vitor Andrioli de Souza, J. Knafou, Lucas E. S. Oliveira, J. Copara, Yohan Bonescki Gumiel, L. A. F. D. Oliveira, E. Paraiso, D. Teodoro, C. M. Barra
DOI: 10.18653/v1/2020.clinicalnlp-1.7
Venue: Clinical Natural Language Processing Workshop
Published: 2020-11-01
Citations: 50

Abstract

With the growing amount of electronic health record data, clinical NLP tasks have become increasingly relevant for unlocking valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), on English corpora has recently been improved by contextualised language models, less research is available for clinical texts in low-resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, called BioBERTpt, to support clinical and biomedical NER. We transfer the learned information encoded in a multilingual BERT model to a corpus of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72%, achieving higher performance on 11 of the 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the need for labeled data and the demand for retraining a whole new model.
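The abstract reports an entity-level F1-score improvement of 2.72% over the baseline. As an illustration of how such scores are typically computed for NER (a minimal sketch, not the authors' evaluation code; the entity types shown are hypothetical), exact-match entity spans can be extracted from BIO-tagged sequences and scored with micro-averaged F1:

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from one BIO-tagged sequence."""
    entities, etype, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        continues_span = tag.startswith("I-") and tag[2:] == etype
        if etype is not None and not continues_span:
            entities.append((etype, start, i))
            etype, start = None, None
        if tag.startswith("B-"):
            etype, start = tag[2:], i
    return entities

def entity_f1(gold_sentences, pred_sentences):
    """Micro F1 over exact-match entity spans across all sentences."""
    gold, pred = set(), set()
    for idx, (g, p) in enumerate(zip(gold_sentences, pred_sentences)):
        gold.update((idx, e) for e in extract_entities(g))
        pred.update((idx, e) for e in extract_entities(p))
    tp = len(gold & pred)  # spans matching in type and boundaries
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: the predicted tags miss one of the two gold entities.
gold = [["B-Disease", "I-Disease", "O", "B-Drug"]]
pred = [["B-Disease", "I-Disease", "O", "O"]]
print(round(entity_f1(gold, pred), 4))  # precision 1.0, recall 0.5 -> F1 0.6667
```

Exact-match scoring like this is stricter than token-level accuracy: a predicted span only counts as correct if both its boundaries and its entity type match the gold annotation.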