Pure tone audiometry plays a critical role in audiology as the initial diagnostic tool, providing essential information for subsequent assessment. This study aimed to develop a robust deep learning framework capable of accurately classifying audiograms across several commonly encountered classification tasks.
This single-centre retrospective study was conducted in accordance with the STROBE guidelines. A total of 12 518 audiograms were collected from 6259 patients aged between 4 and 96 years, who underwent pure tone audiometry testing between February 2018 and April 2022 at Tongji Hospital, Tongji Medical College, Wuhan, China. Three experienced audiologists independently annotated the audiograms, labelling the degree, type and configuration of hearing loss for each audiogram.
A deep learning framework was developed and utilised to classify audiograms across three tasks: determining the degrees of hearing loss, identifying the types of hearing loss, and categorising the configurations of audiograms. The classification performance was evaluated using four commonly used metrics: accuracy, precision, recall and F1-score.
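As an illustration of the evaluation step, the following is a minimal sketch, assuming scikit-learn and macro-averaged precision, recall and F1-score (the averaging scheme is not specified here); the labels and predictions shown are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the four-metric evaluation, assuming scikit-learn and
# macro averaging (an assumption; the abstract does not state the scheme).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate_task(y_true, y_pred):
    """Return the four metrics used to assess one classification task."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
    }

# Hypothetical degree-of-hearing-loss labels for illustration only
y_true = ["mild", "moderate", "severe", "mild", "normal"]
y_pred = ["mild", "moderate", "moderate", "mild", "normal"]
print(evaluate_task(y_true, y_pred))
```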
The deep learning method consistently outperformed alternative methods, including K-Nearest Neighbors, ExtraTrees, Random Forest, XGBoost, LightGBM, CatBoost and FastAI Net, across all three tasks. It achieved the highest accuracies, ranging from 96.75% to 99.85%, with precision of 88.93% to 98.41%, recall of 89.25% to 98.38% and F1-scores of 88.99% to 98.39%.
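To make the baseline comparison concrete, the sketch below trains several of the named alternative classifiers on tabular audiogram features. It is illustrative only: the feature layout (per-frequency thresholds), the synthetic data, the default hyperparameters and the single accuracy metric are all assumptions, and the FastAI neural network baseline is omitted.

```python
# Hedged sketch of the baseline comparison; data and settings are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

# Hypothetical data: thresholds at 8 frequencies, 5 illustrative classes
rng = np.random.default_rng(0)
X = rng.integers(-10, 120, size=(1000, 8)).astype(float)
y = rng.integers(0, 5, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

baselines = {
    "KNN": KNeighborsClassifier(),
    "ExtraTrees": ExtraTreesClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "LightGBM": LGBMClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```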
This study demonstrated that a deep learning approach can accurately classify audiograms into their respective categories. Such a system could assist doctors, particularly those without audiology expertise or experience, in interpreting pure tone audiograms, thereby enhancing diagnostic accuracy in primary care settings and reducing the misdiagnosis rate of hearing conditions. In scenarios involving large-scale audiological data, the automated classification system could serve as a research tool that efficiently provides a comprehensive overview and statistical analysis. In the era of mobile audiometry, our deep learning framework can also help patients quickly and reliably understand their self-tested audiograms, potentially encouraging timely consultation with audiologists for further evaluation and intervention.