Enhancing Oral Health Diagnostics With Hyperspectral Imaging and Computer Vision: Clinical Dataset Study.

Impact factor: 3.8 · CAS Tier 3 (Medicine) · JCR Q2 (Medical Informatics)
Paul Römer, Jean-Jacques Ponciano, Katharina Kloster, Fabia Siegberg, Bastian Plaß, Shankeeth Vinayahalingam, Bilal Al-Nawas, Peer W Kämmerer, Thomas Klauer, Daniel Thiem
{"title":"Enhancing Oral Health Diagnostics With Hyperspectral Imaging and Computer Vision: Clinical Dataset Study.","authors":"Paul Römer, Jean-Jacques Ponciano, Katharina Kloster, Fabia Siegberg, Bastian Plaß, Shankeeth Vinayahalingam, Bilal Al-Nawas, Peer W Kämmerer, Thomas Klauer, Daniel Thiem","doi":"10.2196/76148","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide due to their late diagnosis and complicated differentiation of oral tissues. The combination of endoscopic hyperspectral imaging (HSI) and deep learning (DL) models offers a promising approach to the demand for modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues.</p><p><strong>Objective: </strong>This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods.</p><p><strong>Methods: </strong>A total of 226 participants (166 women [73.5%], 60 men [26.5%], aged 24-87 years) were examined using an endoscopic HSI system, capturing spectral data in the range of 500 to 1000 nm. Oral structures in red, green, and blue and HSI scans were annotated using RectLabel Pro (by Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset, with 30% for evaluation. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types.</p><p><strong>Results: </strong>DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, particularly excelling in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and authenticity for realistic clinical conditions.</p><p><strong>Conclusions: </strong>The presented dataset addresses a key gap in oral health imaging by developing and validating robust DL algorithms for endoscopic HSI data. It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.</p>","PeriodicalId":56334,"journal":{"name":"JMIR Medical Informatics","volume":"13 ","pages":"e76148"},"PeriodicalIF":3.8000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12425605/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Medical Informatics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/76148","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract

Background: Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide because they are often diagnosed late and oral tissues are difficult to differentiate. Combining endoscopic hyperspectral imaging (HSI) with deep learning (DL) models is a promising way to meet the demand for modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues.

Objective: This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods.

Methods: A total of 226 participants (166 women [73.5%], 60 men [26.5%]; aged 24-87 years) were examined with an endoscopic HSI system capturing spectral data in the range of 500 to 1000 nm. Oral structures in the red, green, and blue (RGB) and HSI scans were annotated using RectLabel Pro (Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset, with the remaining 30% held out for evaluation. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types.
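
As a minimal illustration of the kind of backbone adaptation described above (a sketch, not the authors' implementation), the snippet below modifies torchvision's DeepLabv3 with a ResNet-50 backbone so its first convolution accepts a multi-band hyperspectral cube instead of a 3-channel RGB image; the band count and class count are assumptions chosen for the example.

```python
# Minimal sketch (torchvision >= 0.13), not the authors' code: adapt DeepLabv3
# with a ResNet-50 backbone to hyperspectral input by swapping the stem convolution.
# NUM_BANDS and NUM_CLASSES are illustrative assumptions, not values from the study.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_BANDS = 100    # hypothetical number of spectral bands sampled between 500 and 1000 nm
NUM_CLASSES = 6    # hypothetical label set, e.g. mucosa, retractor, tooth, palate, ...

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=NUM_CLASSES)

# Replace only the first convolution; everything downstream of conv1 is unchanged,
# so the standard ResNet-50 architecture is reused for the spectral input.
model.backbone.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = torch.randn(2, NUM_BANDS, 256, 256)   # batch of HSI cubes: (batch, bands, H, W)
logits = model(x)["out"]                  # per-pixel class logits: (batch, NUM_CLASSES, 256, 256)
print(logits.shape)
```

Keeping the change confined to the input layer is a common way to reuse a segmentation architecture designed for RGB images while letting the network learn how to weight the additional spectral bands.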

Results: DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, excelling particularly in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and its authenticity under realistic clinical conditions.
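
The F1-scores quoted above are the usual harmonic mean of pixel-wise precision and recall, computed per tissue class on the held-out scans. A minimal sketch of that computation, assuming integer-coded label maps and a hypothetical class index, is shown below.

```python
# Minimal sketch of the pixel-wise metrics behind per-class F1-scores.
# Class indices and array shapes are illustrative assumptions.
import numpy as np

def per_class_metrics(pred: np.ndarray, target: np.ndarray, cls: int):
    """Pixel-wise precision, recall, and F1-score for a single class label."""
    tp = np.sum((pred == cls) & (target == cls))   # true positives
    fp = np.sum((pred == cls) & (target != cls))   # false positives
    fn = np.sum((pred != cls) & (target == cls))   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical label maps, with class 1 standing in for "mucosa".
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(256, 256))
target = rng.integers(0, 3, size=(256, 256))
print(per_class_metrics(pred, target, cls=1))
```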

Conclusions: The presented dataset addresses a key gap in oral health imaging by developing and validating robust DL algorithms for endoscopic HSI data. It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.

Source journal
JMIR Medical Informatics (Medicine - Health Informatics)
CiteScore: 7.90
Self-citation rate: 3.10%
Articles per year: 173
Time to review: 12 weeks
About the journal: JMIR Medical Informatics (JMI, ISSN 2291-9694) is a top-rated, tier A journal that focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, ehealth infrastructures and implementation. It has a focus on applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals. Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2016: 5.175), JMIR Med Inform has a slightly different scope (emphasizing applications for clinicians and health professionals rather than consumers/citizens, which is the focus of JMIR), publishes even faster, and also allows papers that are more technical or more formative than what would be published in the Journal of Medical Internet Research.