Information Extraction from Swedish Medical Prescriptions with Sig-Transformer Encoder

John N. Pougue Biyong, Bo Wang, Terry Lyons, A. Nevado-Holgado
{"title":"Information Extraction from Swedish Medical Prescriptions with Sig-Transformer Encoder","authors":"John N. Pougue Biyong, Bo Wang, Terry Lyons, A. Nevado-Holgado","doi":"10.18653/v1/2020.clinicalnlp-1.5","DOIUrl":null,"url":null,"abstract":"Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding and adding a simple prediction layer has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture, by incorporating signature transform with the self-attention model. This architecture is added between embedding and prediction layers. Experiments on a new Swedish prescription data show the proposed architecture to be superior in two of the three information extraction tasks, comparing to baseline models. Finally, we evaluate two different embedding approaches between applying Multilingual BERT and translating the Swedish text to English then encode with a BERT model pretrained on clinical notes.","PeriodicalId":216954,"journal":{"name":"Clinical Natural Language Processing Workshop","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Natural Language Processing Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2020.clinicalnlp-1.5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding, and adding a simple prediction layer on top, has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture that incorporates the signature transform into the self-attention model. This architecture is added between the embedding and prediction layers. Experiments on new Swedish prescription data show the proposed architecture to be superior to baseline models on two of the three information extraction tasks. Finally, we compare two embedding approaches: applying Multilingual BERT, and translating the Swedish text to English and then encoding it with a BERT model pretrained on clinical notes.
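To make the described design concrete, below is a minimal, hypothetical sketch of such an encoder block: a self-attention stream fused with a path-signature stream, sitting between BERT embeddings and a token-level prediction layer. It uses the `signatory` package for the signature transform. The specific choices here (class name `SigTransformerEncoder`, reducing the embedding dimension before taking the signature, a whole-sequence signature broadcast to every token, concatenation followed by a linear fusion layer, and all hyperparameter values) are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn
import signatory  # pip install signatory


class SigTransformerEncoder(nn.Module):
    """Hypothetical sketch: fuses a self-attention stream with a
    path-signature stream, between embedding and prediction layers."""

    def __init__(self, d_model=768, n_heads=8, sig_channels=16,
                 sig_depth=3, num_labels=5):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Project embeddings to a low-dimensional path first: the size of a
        # depth-d signature grows geometrically in the channel count.
        self.reduce = nn.Linear(d_model, sig_channels)
        self.sig_depth = sig_depth
        sig_dim = signatory.signature_channels(sig_channels, sig_depth)
        self.fuse = nn.Linear(d_model + sig_dim, d_model)
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, x):
        # x: (batch, seq_len, d_model), e.g. Multilingual BERT embeddings.
        attn_out, _ = self.attn(x, x, x)
        # Treat the reduced embedding sequence as a path and take its
        # signature: shape (batch, sig_dim).
        sig = signatory.signature(self.reduce(x), self.sig_depth)
        # Broadcast the sequence-level signature to every token position
        # (an assumption; a windowed signature is another plausible choice).
        sig = sig.unsqueeze(1).expand(-1, x.size(1), -1)
        fused = torch.relu(self.fuse(torch.cat([attn_out, sig], dim=-1)))
        return self.classifier(fused)  # per-token logits


# Smoke test with random stand-in embeddings.
enc = SigTransformerEncoder()
logits = enc(torch.randn(2, 32, 768))
print(logits.shape)  # torch.Size([2, 32, 5])
```

The dimension reduction before the signature is the key practical detail: a depth-3 signature of a 768-channel path would have hundreds of millions of terms, whereas reducing to 16 channels keeps it to a few thousand.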