Neural Rankers for Effective Screening Prioritisation in Medical Systematic Review Literature Search

Shuai Wang, Harrisen Scells, B. Koopman, G. Zuccon
Proceedings of the 26th Australasian Document Computing Symposium, 15 December 2022.
DOI: 10.1145/3572960.3572980
Citations: 7

Abstract

Medical systematic reviews typically require assessing all the documents retrieved by a search. The reason is two-fold: the task aims for “total recall”; and documents retrieved using Boolean search are an unordered set, and thus it is unclear how an assessor could examine only a subset. Screening prioritisation is the process of ranking the (unordered) set of retrieved documents, allowing assessors to begin the downstream processes of systematic review creation earlier, leading to earlier completion of the review, or even avoiding screening the documents ranked least relevant. Screening prioritisation requires highly effective ranking methods. Pre-trained language models are state-of-the-art on many IR tasks but have yet to be applied to systematic review screening prioritisation. In this paper, we apply several pre-trained language models to the systematic review document ranking task, both directly and fine-tuned. An empirical analysis compares the effectiveness of neural methods to that of traditional methods for this task. We also investigate different types of document representations for neural methods and their impact on ranking performance. Our results show that BERT-based rankers outperform the current state-of-the-art screening prioritisation methods. However, BERT rankers and existing methods can actually be complementary, and thus, further improvements may be achieved if used in conjunction.
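The core idea described above, turning the unordered Boolean result set into a ranked list so assessors screen likely-relevant documents first, can be sketched as follows. In the paper the scorer is a (possibly fine-tuned) BERT-based ranker; here a simple term-overlap function stands in for the neural model, and all document text and the `overlap_score`/`prioritise` names are hypothetical, for illustration only.

```python
def overlap_score(query: str, document: str) -> float:
    """Stand-in relevance scorer: fraction of query terms present in the
    document. A real system would replace this with a BERT cross-encoder
    score over the (query, document) pair."""
    q_terms = set(query.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0


def prioritise(query: str, retrieved: set) -> list:
    """Screening prioritisation: rank the unordered retrieved set so the
    most likely relevant documents appear first."""
    return sorted(retrieved, key=lambda d: overlap_score(query, d), reverse=True)


# Toy unordered result set, as produced by a Boolean search (hypothetical).
docs = {
    "statin therapy reduces cardiovascular events in adults",
    "a survey of deep learning architectures",
    "randomised trial of statin therapy for cardiovascular prevention",
}
ranking = prioritise("statin therapy cardiovascular events", docs)
```

Assessors would then work down `ranking` from the top, beginning downstream review steps as soon as relevant studies surface, rather than screening the set in arbitrary order.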