Relation Extraction from Texts Containing Pharmacologically Significant Information on base of Multilingual Language Models

A. Selivanov, A. Gryaznov, R. Rybka, A. Sboev, S. Sboeva, Yuliya Klueva
DOI: 10.22323/1.429.0014
Journal: Proceedings of The 6th International Workshop on Deep Learning in Computational Physics — PoS(DLCP2022)
Published: 2022-11-14
Citations: 1

Abstract

In this paper we estimate the accuracy of relation extraction from texts containing pharmacologically significant information, based on the expanded version of the RDRS corpus, which contains internet reviews of medications in Russian. The accuracy of relation extraction is estimated and compared for two multilingual language models: XLM-RoBERTa-large and XLM-RoBERTa-large-sag. Earlier research showed XLM-RoBERTa-large-sag to be the most efficient language model for relation extraction on the previous version of the RDRS dataset using ground-truth named-entity annotation. In the current work we use a two-step relation extraction approach: automated named entity recognition followed by extraction of relations between the predicted entities. The implemented approach makes it possible to estimate the accuracy of the proposed solution to the relation extraction problem as a whole, as well as the accuracy at each step of the analysis. As a result, it is shown that the multilingual XLM-RoBERTa-large-sag model achieves a relation extraction macro-averaged F1-score of 86.4% on ground-truth named entities and 60.1% on predicted named entities on the new version of the RDRS corpus, which contains more than 3800 annotated texts. Consequently, the implemented approach based on the
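The two-step pipeline described in the abstract — named entity recognition first, then relation classification over pairs of predicted entities — can be sketched as below. This is a minimal illustrative sketch only: the entity tagger and the pairwise relation classifier are trivial stand-ins (a dictionary lookup and hand-written rules), not the paper's XLM-RoBERTa models, and the entity and relation labels are hypothetical examples in the spirit of a drug-review corpus, not the actual RDRS annotation scheme.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Entity:
    text: str
    label: str  # hypothetical labels, e.g. "Drugname", "ADR", "Disease"

def recognize_entities(text):
    """Step 1: named entity recognition (toy dictionary lookup stand-in)."""
    lexicon = {"aspirin": "Drugname", "nausea": "ADR", "headache": "Disease"}
    return [Entity(tok, lexicon[tok.lower()])
            for tok in text.replace(".", "").split()
            if tok.lower() in lexicon]

def classify_relation(e1, e2):
    """Step 2: relation classification for one entity pair (toy rules)."""
    labels = {e1.label, e2.label}
    if labels == {"Drugname", "ADR"}:
        return "Drugname_ADR"
    if labels == {"Drugname", "Disease"}:
        return "Drugname_Diseasename"
    return None  # no relation predicted for this pair

def extract_relations(text):
    """Full pipeline: NER, then relations over all predicted entity pairs."""
    entities = recognize_entities(text)
    return [(e1.text, rel, e2.text)
            for e1, e2 in combinations(entities, 2)
            if (rel := classify_relation(e1, e2))]

print(extract_relations("I took aspirin for my headache but got nausea."))
# → [('aspirin', 'Drugname_Diseasename', 'headache'), ('aspirin', 'Drugname_ADR', 'nausea')]
```

Note how the design mirrors the paper's evaluation setup: because step 2 operates on whatever step 1 predicts, relation extraction accuracy can be measured either on ground-truth entities (isolating step 2) or end-to-end on predicted entities, which explains the gap between the reported 86.4% and 60.1% F1-scores.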