Pure tone audiogram classification using deep learning techniques

IF 1.7 · CAS Region 4 (Medicine) · JCR Q2 · OTORHINOLARYNGOLOGY
Zhiyong Dou, Yingqiang Li, Dongzhou Deng, Yunxue Zhang, Anran Pang, Cong Fang, Xiang Bai, Dan Bing
Clinical Otolaryngology, vol. 49, no. 5, pp. 595–603. Published 2024-05-15. DOI: 10.1111/coa.14170 (https://onlinelibrary.wiley.com/doi/10.1111/coa.14170)
Citations: 0

Abstract

Objective

Pure tone audiometry has played a critical role in audiology as the initial diagnostic tool, offering vital insights for subsequent analyses. This study aims to develop a robust deep learning framework capable of accurately classifying audiograms across various commonly encountered tasks.

Design, Setting, and Participants

This single-centre retrospective study was conducted in accordance with the STROBE guidelines. A total of 12 518 audiograms were collected from 6259 patients aged between 4 and 96 years who underwent pure tone audiometry between February 2018 and April 2022 at Tongji Hospital, Tongji Medical College, Wuhan, China. Three experienced audiologists independently annotated the audiograms, labelling the degree, type and configuration of hearing loss for each audiogram.
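One of the three annotation labels, the degree of hearing loss, is conventionally derived from the pure tone average (PTA) of the thresholds at 0.5, 1, 2 and 4 kHz. The sketch below uses a common clinical banding as an illustration only; the abstract does not state the exact criteria the annotators applied, so the cut-offs here are an assumption.

```python
def hearing_loss_degree(thresholds_db):
    """Illustrative rule for the 'degree' label: classify hearing loss
    from the four-frequency pure tone average (0.5, 1, 2, 4 kHz).
    The bands below follow a widely used clinical convention and are
    NOT taken from the paper; its annotation criteria may differ."""
    pta = sum(thresholds_db) / len(thresholds_db)  # pure tone average in dB HL
    if pta <= 25:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 55:
        return "moderate"
    if pta <= 70:
        return "moderately severe"
    if pta <= 90:
        return "severe"
    return "profound"
```

For example, thresholds of 30, 35, 40 and 45 dB HL average to 37.5 dB HL, which this convention labels as mild hearing loss.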

Main Outcome Measures

A deep learning framework was developed and utilised to classify audiograms across three tasks: determining the degrees of hearing loss, identifying the types of hearing loss, and categorising the configurations of audiograms. The classification performance was evaluated using four commonly used metrics: accuracy, precision, recall and F1-score.
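The four evaluation metrics named above can be sketched in plain Python. Since the three tasks are multi-class, precision, recall and F1-score are shown macro-averaged below; the abstract does not state the averaging scheme, so that choice is an assumption.

```python
def classification_metrics(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision, recall and F1-score:
    a minimal sketch of the four metrics used to evaluate each task.
    Macro-averaging (unweighted mean over classes) is assumed."""
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    k = len(labels)
    return accuracy, sum(precisions) / k, sum(recalls) / k, sum(f1s) / k
```

With three of four predictions correct over the classes mild, normal and severe, this returns an accuracy of 0.75 alongside the per-class averages, matching what a library implementation with macro averaging would report.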

Results

The deep learning method consistently outperformed alternative methods, including K-Nearest Neighbors, ExtraTrees, Random Forest, XGBoost, LightGBM, CatBoost and FastAI Net, across all three tasks. It achieved the highest accuracy rates, ranging from 96.75% to 99.85%. Precision values fell within the range of 88.93% to 98.41%, while recall values spanned from 89.25% to 98.38%. The F1-score also exhibited strong performance, ranging from 88.99% to 98.39%.

Conclusions

This study demonstrated that a deep learning approach could accurately classify audiograms into their respective categories and could contribute to assisting doctors, particularly those lacking audiology expertise or experience, in better interpreting pure tone audiograms, enhancing diagnostic accuracy in primary care settings, and reducing the misdiagnosis rate of hearing conditions. In scenarios involving large-scale audiological data, the automated classification system could be used as a research tool to efficiently provide a comprehensive overview and statistical analysis. In the era of mobile audiometry, our deep learning framework can also help patients quickly and reliably understand their self-tested audiograms, potentially encouraging timely consultations with audiologists for further evaluation and intervention.

Source journal: Clinical Otolaryngology (Medicine – Otorhinolaryngology)
CiteScore: 4.00
Self-citation rate: 4.80%
Articles per year: 106
Review time: >12 weeks
About the journal: Clinical Otolaryngology is a bimonthly journal devoted to clinically-oriented research papers of the highest scientific standards dealing with:
- current otorhinolaryngological practice
- audiology, otology, balance, rhinology, larynx, voice and paediatric ORL
- head and neck oncology
- head and neck plastic and reconstructive surgery
- continuing medical education and ORL training

The emphasis is on high quality new work in the clinical field and on fresh, original research. Each issue begins with an editorial expressing the personal opinions of an individual with a particular knowledge of a chosen subject. The main body of each issue is then devoted to original papers carrying important results for those working in the field. In addition, topical review articles are published discussing a particular subject in depth, including not only the opinions of the author but also any controversies surrounding the subject.

Negative/null results: In order for research to advance, negative results, which often make a valuable contribution to the field, should be published. However, articles containing negative or null results are frequently not considered for publication or rejected by journals. We welcome papers of this kind, where appropriate and valid power calculations are included that give confidence that a negative result can be relied upon.