Enhancing Biomedical Text Summarization and Question-Answering: On the Utility of Domain-Specific Pre-Training

Dima Galat, Marian-Andrei Rizoiu
{"title":"Enhancing Biomedical Text Summarization and Question-Answering: On the Utility of Domain-Specific Pre-Training","authors":"Dima Galat, Marian-Andrei Rizoiu","doi":"10.48550/arXiv.2307.04412","DOIUrl":null,"url":null,"abstract":"Biomedical summarization requires large datasets to train for text generation. We show that while transfer learning offers a viable option for addressing this challenge, an in-domain pre-training does not always offer advantages in a BioASQ summarization task. We identify a suitable model architecture and use it to show a benefit of a general-domain pre-training followed by a task-specific fine-tuning in the context of a BioASQ summarization task, leading to a novel three-step fine-tuning approach that works with only a thousand in-domain examples. Our results indicate that a Large Language Model without domain-specific pre-training can have a significant edge in some domain-specific biomedical text generation tasks.","PeriodicalId":232729,"journal":{"name":"Conference and Labs of the Evaluation Forum","volume":"126 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference and Labs of the Evaluation Forum","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2307.04412","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Biomedical summarization requires large datasets for training text generation models. We show that while transfer learning offers a viable option for addressing this challenge, in-domain pre-training does not always offer advantages in a BioASQ summarization task. We identify a suitable model architecture and use it to show the benefit of general-domain pre-training followed by task-specific fine-tuning in the context of a BioASQ summarization task, leading to a novel three-step fine-tuning approach that works with only a thousand in-domain examples. Our results indicate that a Large Language Model without domain-specific pre-training can have a significant edge in some domain-specific biomedical text generation tasks.
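The abstract does not spell out what the three fine-tuning steps are, but a staged pipeline of this kind is straightforward to set up. The sketch below is a minimal illustration using HuggingFace Transformers, assuming a general-domain seq2seq checkpoint (facebook/bart-large here) and three progressively more task-specific datasets; the stage composition, dataset column names, and hyperparameters are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of staged ("three-step") fine-tuning starting from a
# general-domain seq2seq checkpoint. Model choice, stage datasets, and
# hyperparameters are assumptions made for illustration only.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/bart-large"  # assumption: any general-domain seq2seq LM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    """Tokenize source documents and target summaries."""
    inputs = tokenizer(batch["document"], truncation=True, max_length=1024)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=256)
    inputs["labels"] = labels["input_ids"]
    return inputs

def finetune(model, dataset, output_dir, epochs, lr):
    """Run one fine-tuning stage; the next stage continues from its output."""
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        num_train_epochs=epochs,
        learning_rate=lr,
        per_device_train_batch_size=8,
        save_strategy="no",
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=dataset.map(preprocess, batched=True),
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
        tokenizer=tokenizer,
    )
    trainer.train()
    return trainer.model

# Hypothetical stages: a large general summarization corpus, a broader
# biomedical corpus, then ~1,000 BioASQ examples. Each of stage1_data,
# stage2_data, stage3_data is assumed to be a datasets.Dataset with
# "document" and "summary" columns (placeholder names, not from the paper).
model = finetune(model, stage1_data, "stage1", epochs=1, lr=5e-5)
model = finetune(model, stage2_data, "stage2", epochs=2, lr=3e-5)
model = finetune(model, stage3_data, "stage3", epochs=3, lr=1e-5)
```

The key design point is that later stages reuse the weights produced by earlier ones, so the scarce in-domain examples are only needed for the final stage.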