Hikaru Kurosawa, Natalie J Won, Jack B Wunder, Sujit Patil, Mandolin Bartling, Esmat Najjar, Sharon Tzelnick, Brian C Wilson, Jonathan C Irish, Michael J Daly

Journal of Biomedical Optics, vol. 30 Suppl 3, S34109. DOI: 10.1117/1.JBO.30.S3.S34109. Published 2025-12-01 (Epub 2025-09-12). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12431673/pdf/
Deep learning-enabled fluorescence imaging for oral cancer margin classification in preclinical models.
Significance: Oral cancer surgery demands precise margin delineation to ensure complete tumor resection (healthy tissue margin > 5 mm) while preserving postoperative functionality. Inadequate margins most frequently occur at the deep surgical margins, where tumors are located beneath the tissue surface; however, current fluorescence optical imaging systems are limited by their inability to quantify subsurface structures. Combining structured light techniques with deep learning may enable intraoperative margin assessment of 3D surgical specimens.
Aim: A deep learning (DL)-enabled spatial frequency domain imaging (SFDI) system is investigated to provide subsurface depth quantification of fluorescent inclusions.
Approach: A diffusion theory-based numerical simulation of SFDI was used to generate synthetic images for DL training. ResNet and U-Net convolutional neural networks were developed to predict margin distance (subsurface depth) and fluorophore concentration from fluorescence images and optical property maps. Validation was conducted using in silico SFDI images of composite spherical harmonics, as well as simulated and phantom datasets of patient-derived tongue tumor shapes. Further testing was done in ex vivo animal tissue with fluorescent inclusions.
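To illustrate the physics underlying the approach, the following is a minimal, hypothetical sketch (not the authors' implementation) of how diffusion theory relates inclusion depth to demodulated SFDI fluorescence: under a homogeneous-medium assumption, fluorescence from depth d attenuates as exp(-μ_eff'(f)·d), where μ_eff'(f) grows with spatial frequency f, so the ratio of amplitudes at two frequencies encodes depth. All parameter values below (μa, μs', frequencies) are illustrative assumptions.

```python
import numpy as np

def mu_eff_prime(f, mua, musp):
    """Effective attenuation coefficient (1/mm) at spatial frequency f (1/mm),
    from diffusion theory: sqrt(3*mua*mu_tr + (2*pi*f)^2)."""
    mu_tr = mua + musp
    return np.sqrt(3.0 * mua * mu_tr + (2.0 * np.pi * f) ** 2)

def estimate_depth(F1, F2, f1, f2, mua, musp):
    """Estimate inclusion depth from demodulated fluorescence amplitudes F1, F2
    at spatial frequencies f1 > f2, assuming F(f) ~ exp(-mu_eff'(f) * d)."""
    m1 = mu_eff_prime(f1, mua, musp)
    m2 = mu_eff_prime(f2, mua, musp)
    return np.log(F1 / F2) / (m2 - m1)

# Synthetic check: forward-simulate the two-frequency signal, then invert.
mua, musp = 0.02, 1.0   # mm^-1; assumed representative tissue optical properties
d_true = 4.0            # mm; assumed inclusion depth
f1, f2 = 0.1, 0.0       # mm^-1; one AC frequency and the DC (planar) component
F1 = np.exp(-mu_eff_prime(f1, mua, musp) * d_true)
F2 = np.exp(-mu_eff_prime(f2, mua, musp) * d_true)
d_est = estimate_depth(F1, F2, f1, f2, mua, musp)
print(round(d_est, 3))  # recovers the 4.0 mm depth exactly for this toy model
```

In practice such analytic inversions degrade for heterogeneous tissue and irregular 3D inclusion shapes, which is the gap the trained ResNet/U-Net models are meant to address by learning the mapping from full images rather than per-pixel ratios.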
Results: For oral cancer optical properties, the U-Net DL model predicted the overall depth, concentration, and closest depth with errors of 1.43 ± 1.84 mm, 2.26 ± 1.63 μg/ml, and 0.33 ± 0.31 mm, respectively, using in silico patient-derived tongue shapes with closest depths below 10 mm. In PpIX fluorescent phantoms with inclusion depths up to 8 mm, the closest subsurface depth was predicted with an error of 0.57 ± 0.38 mm. For ex vivo tissue, the closest distance to fluorescent inclusions at depths up to 6 mm was predicted with an error of 0.59 ± 0.53 mm.
Conclusions: A DL-enabled SFDI system trained with in silico images demonstrates promise in providing margin assessment of oral cancer tumors.
Journal overview:
The Journal of Biomedical Optics publishes peer-reviewed papers on the use of modern optical technology for improved health care and biomedical research.