Manual Evaluation of Record Linkage Algorithm Performance in Four Real-World Datasets.

IF 2.1 · CAS Region 2 (Medicine) · Q4 · MEDICAL INFORMATICS
Applied Clinical Informatics · Pub Date: 2024-05-01 · Epub Date: 2024-03-20 · DOI: 10.1055/a-2291-1391
Agrayan K Gupta, Huiping Xu, Xiaochun Li, Joshua R Vest, Shaun J Grannis
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11290950/pdf/ · Citations: 0

Abstract

Objectives:  Patient data are fragmented across multiple repositories, yielding suboptimal and costly care. Record linkage algorithms are widely accepted solutions for improving completeness of patient records. However, studies often fail to fully describe their linkage techniques. Further, while many frameworks evaluate record linkage methods, few focus on producing gold standard datasets. This highlights a need to assess these frameworks and their real-world performance. We use real-world datasets and expand upon previous frameworks to evaluate a consistent approach to the manual review of gold standard datasets and measure its impact on algorithm performance.

Methods:  We applied the framework, which includes elements for data description, reviewer training and adjudication, and software and reviewer descriptions, to four datasets. Record pairs were formed and between 15,000 and 16,500 records were randomly sampled from these pairs. After training, two reviewers determined match status for each record pair. If reviewers disagreed, a third reviewer was used for final adjudication.

Results:  Across the four datasets, the discordance rate ranged from 1.8% to 13.6%. While the reviewers' discordance rate typically fell between 1% and 5%, one dataset exhibited a 59% discordance rate, underscoring the importance of the third reviewer. The original analysis was compared with three sensitivity analyses and most often exhibited the highest predictive values.
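The quantities reported above can be computed from the adjudicated gold standard with straightforward formulas. The sketch below is an assumption about those standard definitions (disagreement fraction, positive/negative predictive value), not the paper's exact analysis code.

```python
# Hypothetical sketch of the Results metrics: reviewer discordance rate,
# and the positive/negative predictive values of a linkage algorithm
# measured against the adjudicated gold standard. Names are illustrative.

def discordance_rate(labels_a, labels_b):
    """Fraction of record pairs on which the two reviewers disagree."""
    disagree = sum(a != b for a, b in zip(labels_a, labels_b))
    return disagree / len(labels_a)

def predictive_values(predicted, gold):
    """PPV and NPV of algorithm predictions vs. the gold standard."""
    tp = sum(p and g for p, g in zip(predicted, gold))
    fp = sum(p and not g for p, g in zip(predicted, gold))
    tn = sum(not p and not g for p, g in zip(predicted, gold))
    fn = sum(not p and g for p, g in zip(predicted, gold))
    ppv = tp / (tp + fp) if tp + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0
    return ppv, npv

ppv, npv = predictive_values([1, 1, 0, 0], [1, 0, 0, 0])
print(f"PPV={ppv}, NPV={npv}")  # PPV=0.5, NPV=1.0
```

Because the gold standard itself depends on reviewer judgments, re-running `predictive_values` against alternative adjudications is one way to carry out the sensitivity analyses the abstract mentions.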

Conclusion:  Reviewers vary in their assessment of a gold standard, which can lead to variances in estimates for matching performance. Our analysis demonstrates how a multireviewer process can be applied to create gold standards, identify reviewer discrepancies, and evaluate algorithm performance.
