Décision

C. Pacific
DOI: 10.3917/arsi.forma.2012.01.0143
Journal: Les concepts en sciences infirmières
Published: 2012-11-01 (Journal Article)
Citations: 0

Abstract

Background: The practice of evidence-based medicine (EBM) requires clinicians to integrate their expertise with the latest scientific research, but this is becoming increasingly difficult as the number of published articles grows. There is a clear need for better tools to improve clinicians' ability to search the primary literature. Randomized clinical trials (RCTs) are the most reliable source of evidence documenting the efficacy of treatment options. This paper describes the retrieval of key sentences from RCT abstracts as a step towards helping users find relevant facts about the experimental design of clinical studies.

Method: Using Conditional Random Fields (CRFs), a popular and successful method for natural language processing problems, sentences referring to Intervention, Participants and Outcome Measures are automatically categorized. This extends a previous approach that labels the sentences of an abstract with general categories associated with scientific argumentation or rhetorical roles: Aim, Method, Results and Conclusion. The methods are tested on several corpora of RCT abstracts. First, structured abstracts with headings that specifically indicate Intervention, Participants and Outcome Measures are used. A manually annotated corpus of structured and unstructured abstracts is also prepared to test a classifier that identifies sentences belonging to each category.

Results: Using CRFs, sentences can be labeled for the four rhetorical roles with F-scores from 0.93 to 0.98, outperforming Support Vector Machines. Furthermore, sentences can be automatically labeled for Intervention, Participants and Outcome Measures in both unstructured and structured abstracts whose section headings do not specifically indicate these three topics. F-scores of up to 0.83 and 0.84 are obtained for Intervention and Outcome Measure sentences, respectively.

Conclusion: The results indicate that some of the methodological elements of RCTs are identifiable at the sentence level in both structured and unstructured abstracts. This is promising: automatically labeled sentences could form concise summaries and assist information retrieval and finer-grained extraction.
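The core idea behind CRF-based sentence labeling is that a linear-chain model scores an entire sequence of labels at once, combining per-sentence (emission) evidence with transition preferences between adjacent labels, and decodes the best sequence with the Viterbi algorithm. The sketch below is purely illustrative and is not the paper's code: the emission scores, the `transition` function, and the four-role ordering are hand-made assumptions standing in for a trained model.

```python
# Illustrative Viterbi decoding for a tiny linear-chain model that assigns
# rhetorical roles to the sentences of an abstract. All scores are invented
# for demonstration; a real CRF would learn them from annotated corpora.

ROLES = ["Aim", "Method", "Results", "Conclusion"]

# Assumed per-sentence (emission) scores, e.g. derived from lexical features.
emissions = [
    {"Aim": 2.0, "Method": 0.1, "Results": 0.1, "Conclusion": 0.1},
    {"Aim": 0.2, "Method": 1.8, "Results": 0.3, "Conclusion": 0.1},
    {"Aim": 0.1, "Method": 0.4, "Results": 2.1, "Conclusion": 0.2},
    {"Aim": 0.1, "Method": 0.1, "Results": 0.5, "Conclusion": 1.9},
]

def transition(prev, cur):
    """Assumed transition scores: roles tend to follow document order."""
    order = {r: i for i, r in enumerate(ROLES)}
    return 0.5 if order[cur] >= order[prev] else -1.0

def viterbi(emissions):
    """Return the highest-scoring role sequence for the sentence scores."""
    # best[i][r] = (score of best path ending in role r at sentence i, backpointer)
    best = [{r: (emissions[0][r], None) for r in ROLES}]
    for i in range(1, len(emissions)):
        row = {}
        for cur in ROLES:
            score, prev = max(
                (best[-1][p][0] + transition(p, cur) + emissions[i][cur], p)
                for p in ROLES
            )
            row[cur] = (score, prev)
        best.append(row)
    # Backtrack from the best final role.
    cur = max(ROLES, key=lambda r: best[-1][r][0])
    path = [cur]
    for row in reversed(best[1:]):
        cur = row[cur][1]
        path.append(cur)
    return path[::-1]

print(viterbi(emissions))  # → ['Aim', 'Method', 'Results', 'Conclusion']
```

Because the transition term rewards in-order labels, the decoder prefers the conventional Aim, Method, Results, Conclusion progression even when an individual sentence's emission evidence is weak, which is exactly the sequential advantage CRFs hold over per-sentence classifiers such as SVMs.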