Inter-rater reliability in labeling quality and pathological features of retinal OCT scans: A customized annotation software approach.

IF 2.6 | CAS Zone 3, Multidisciplinary | Q1 MULTIDISCIPLINARY SCIENCES
PLoS ONE Pub Date : 2024-12-18 eCollection Date: 2024-01-01 DOI:10.1371/journal.pone.0314707
Katherine Du, Stavan Shah, Sandeep Chandra Bollepalli, Mohammed Nasar Ibrahim, Adarsh Gadari, Shan Sutharahan, José-Alain Sahel, Jay Chhablani, Kiran Kumar Vupparaboina
Citations: 0

Abstract


Inter-rater reliability in labeling quality and pathological features of retinal OCT scans: A customized annotation software approach.

Objectives: Various imaging features on optical coherence tomography (OCT) are crucial for identifying and defining disease progression. Establishing a consensus on these imaging features is essential, particularly for training deep learning models for disease classification. This study aims to analyze the inter-rater reliability in labeling the quality and common imaging signatures of retinal OCT scans.

Methods: 500 OCT scans obtained from CIRRUS HD-OCT 5000 devices were displayed at 512×1024×128 resolution using customizable, in-house annotation software. Each patient's eye was represented by 16 random scans. Two masked reviewers independently labeled the quality and specific pathological features of each scan. Evaluated features included overall image quality, presence of the fovea, and disease signatures including subretinal fluid (SRF), intraretinal fluid (IRF), drusen, pigment epithelial detachment (PED), and hyperreflective material. Raw percentage agreement and Cohen's kappa (κ) coefficient were used to evaluate concurrence between the two sets of labels.
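The two agreement statistics named above (raw percentage agreement and Cohen's κ) are standard and easy to reproduce. As a minimal sketch (an illustrative helper, not the authors' code), both can be computed from two raters' label lists:

```python
from collections import Counter

def agreement_stats(labels_a, labels_b):
    """Raw percentage agreement and Cohen's kappa for two raters who
    labeled the same items (illustrative helper; any hashable labels)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items on which the raters concur.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of each rater's
    # marginal frequency for that label.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    # Kappa corrects observed agreement for chance; it is undefined
    # when p_e == 1 (both raters always assign the same single label).
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```

For example, two raters labeling four scans as [1, 1, 0, 0] and [1, 0, 0, 0] agree on 75% of items, but reach only κ = 0.5 once chance agreement is discounted — which is why the abstract reports both measures.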

Results: Our analysis revealed κ = 0.60 for the inter-rater reliability of overall scan quality, indicating substantial agreement. In contrast, there was only slight agreement in determining the cause of poor image quality (κ = 0.18). The binary determination of the presence or absence of retinal disease signatures showed almost perfect agreement between reviewers (κ = 0.85). Specific retinal pathologies, such as the foveal location of the scan (κ = 0.78), IRF (κ = 0.63), drusen (κ = 0.73), and PED (κ = 0.87), exhibited substantial concordance. However, less agreement was found in identifying SRF (κ = 0.52), hyperreflective dots (κ = 0.41), and hyperreflective foci (κ = 0.33).

Conclusions: Our study demonstrates significant inter-rater reliability in labeling the quality and retinal pathologies on OCT scans. While some features show stronger agreement than others, these standardized labels can be utilized to create automated machine learning tools for diagnosing retinal diseases and capturing valuable pathological features in each scan. This standardization will aid in the consistency of medical diagnoses and enhance the accessibility of OCT diagnostic tools.

Source journal
PLoS ONE (Biology)
CiteScore: 6.20
Self-citation rate: 5.40%
Annual articles: 14242
Review time: 3.7 months
Journal introduction: PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage