Latest Publications in Journal of Biomedical Optics

Validation of multispectral imaging-based tissue oxygen saturation detecting system for wound healing recognition on open wounds.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-08-01 | Epub Date: 2024-08-13 | DOI: 10.1117/1.JBO.29.8.086004
Yi-Syuan Shin, Kuo-Shu Hung, Chung-Te Tsai, Meng-Hsuan Wu, Chih-Lung Lin, Yuan-Yu Hsueh
Significance: The multispectral imaging-based tissue oxygen saturation detecting (TOSD) system offers deeper penetration (~2 to 3 mm) and comprehensive tissue oxygen saturation (StO2) assessment, and recognizes the wound healing phase at low cost and computational requirement. The potential for miniaturization and integration of TOSD into telemedicine platforms could revolutionize wound care in the challenging pandemic era.

Aim: We aim to validate TOSD's application in detecting StO2 by comparing it with wound closure rates and laser speckle contrast imaging (LSCI), demonstrating TOSD's ability to recognize the wound healing process.

Approach: Utilizing a murine model, we compared TOSD with digital photography and LSCI for comprehensive wound observation in five mice with 6-mm back wounds. Sequential biochemical analysis of wound discharge was investigated for the translational relevance of TOSD.

Results: TOSD demonstrated constant signals on unwounded skin with differential changes on open wounds. Compared with LSCI, TOSD provides indicative recognition of the proliferative phase during wound healing, with a higher correlation coefficient to wound closure rate (TOSD: 0.58; LSCI: 0.44). StO2 detected by TOSD was further correlated with proliferative-phase angiogenesis markers.

Conclusions: Our findings suggest TOSD's enhanced utility in wound management protocols, evaluating clinical staging and therapeutic outcomes. By offering a noncontact, convenient monitoring tool, TOSD can be applied to telemedicine, aiming to advance wound care and regeneration, potentially improving patient outcomes and reducing healthcare costs associated with chronic wounds.

Journal of Biomedical Optics, 29(8): 086004. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11321076/pdf/
Citations: 0
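The correlation coefficients reported above (TOSD: 0.58; LSCI: 0.44) are Pearson correlations between per-timepoint imaging readouts and wound closure rates. A minimal sketch of that computation; the `sto2` and `closure` values below are hypothetical, not the paper's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-timepoint values for illustration only:
sto2 = [0.42, 0.48, 0.55, 0.61, 0.63]      # TOSD StO2 readings
closure = [0.05, 0.22, 0.41, 0.68, 0.90]   # wound closure rate
print(round(pearson_r(sto2, closure), 2))
```

A higher coefficient for TOSD than for LSCI, as reported, would indicate that the StO2 signal tracks closure more closely across the healing timeline.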
Improving diffuse optical tomography imaging quality using APU-Net: an attention-based physical U-Net model.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-08-01 | Epub Date: 2024-07-25 | DOI: 10.1117/1.JBO.29.8.086001
Minghao Xue, Shuying Li, Quing Zhu
Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as the shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining robust lesion diagnosis.

Aim: We address the limitations of current DOT image reconstruction by introducing an attention-based physical U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy.

Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects. The model was then evaluated on clinical data.

Results: Transitioning from simulation and phantom data to clinical patients' data, our APU-Net model effectively reduced artifacts, with an average artifact contrast decrease of 26.83%, and improved image quality. In addition, statistical analyses revealed significant contrast improvements in the depth profile, with average contrast increases of 20.28% and 45.31% for the second and third target layers, respectively. These results highlight the efficacy of our approach in breast cancer diagnosis.

Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing DOT image artifacts and improving the target depth profile.

Journal of Biomedical Optics, 29(8): 086001. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11272096/pdf/
Citations: 0
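The "26.83% average artifact contrast decrease" above is a relative change in an artifact contrast measure before and after APU-Net refinement. A minimal sketch of such a percent-change computation; both the contrast definition (peak artifact intensity over mean background) and all intensity values here are illustrative assumptions, since the paper's exact metric is not reproduced in this listing:

```python
def contrast(region, background):
    """Assumed contrast measure: peak region intensity over mean background."""
    return max(region) / (sum(background) / len(background))

def percent_decrease(before, after):
    """Relative decrease, in percent, from a before-value to an after-value."""
    return 100.0 * (before - after) / before

# Hypothetical artifact/background intensities before and after refinement:
c_before = contrast([0.9, 0.8, 0.7], [0.20, 0.25, 0.30])
c_after = contrast([0.6, 0.5, 0.4], [0.20, 0.25, 0.30])
print(f"artifact contrast decrease: {percent_decrease(c_before, c_after):.1f}%")
```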
DermoGAN: multi-task cycle generative adversarial networks for unsupervised automatic cell identification on in-vivo reflectance confocal microscopy images of the human epidermis.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-08-01 | Epub Date: 2024-08-02 | DOI: 10.1117/1.JBO.29.8.086003
Imane Lboukili, Georgios Stamatas, Xavier Descombes
Significance: Accurate identification of epidermal cells on reflectance confocal microscopy (RCM) images is important in the study of epidermal architecture and topology of both healthy and diseased skin. However, analysis of these images is currently done manually and is therefore time-consuming and subject to human error and inter-expert interpretation. It is also hindered by low image quality due to noise and heterogeneity.

Aim: We aimed to design an automated pipeline for the analysis of the epidermal structure from RCM images.

Approach: Two attempts have been made at automatically localizing epidermal cells, called keratinocytes, on RCM images: the first based on a rotationally symmetric error function mask, and the second on cell morphological features. Here, we propose a dual-task network to automatically identify keratinocytes on RCM images. Each task consists of a cycle generative adversarial network. The first task translates real RCM images into binary images, learning the noise and texture model of RCM images, whereas the second task maps Gabor-filtered RCM images into binary images, learning the epidermal structure visible on RCM images. The combination of the two tasks allows one task to constrain the solution space of the other, improving overall results. We refine the cell identification by applying the pre-trained StarDist algorithm to detect star-convex shapes, closing any incomplete membranes and separating neighboring cells.

Results: The results are evaluated both on simulated data and on manually annotated real RCM data. Accuracy is measured using recall and precision metrics, summarized as the F1-score.

Conclusions: We demonstrate that the proposed fully unsupervised method successfully identifies keratinocytes on RCM images of the epidermis with an accuracy on par with experts' cell identification, is not constrained by limited available annotated data, and can be extended to images acquired using various imaging techniques without retraining.

Journal of Biomedical Optics, 29(8): 086003. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294601/pdf/
Citations: 0
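The F1-score that summarizes recall and precision in the evaluation above is their harmonic mean. A minimal sketch computing it from detection counts; the counts below are hypothetical, not the paper's results:

```python
def f1_score(tp, fp, fn):
    """F1-score (harmonic mean of precision and recall) from detection counts."""
    precision = tp / (tp + fp)  # fraction of detected cells that are real
    recall = tp / (tp + fn)     # fraction of real cells that were detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical cell-detection counts: true positives, false positives, false negatives
print(round(f1_score(tp=90, fp=10, fn=20), 3))
```

Because it is a harmonic mean, the F1-score is pulled toward the weaker of the two metrics, penalizing a detector that over- or under-segments cells.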
Tutorial on phantoms for photoacoustic imaging applications.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-08-01 | Epub Date: 2024-08-14 | DOI: 10.1117/1.JBO.29.8.080801
Lina Hacker, James Joseph, Ledia Lilaj, Srirang Manohar, Aoife M Ivory, Ran Tao, Sarah E Bohndiek
Significance: Photoacoustic imaging (PAI) is an emerging technology that holds high promise in a wide range of clinical applications, but standardized methods for system testing are lacking, impeding objective device performance evaluation, calibration, and inter-device comparisons. To address this shortfall, this tutorial offers readers structured guidance in developing tissue-mimicking phantoms for photoacoustic applications, with potential extensions to certain acoustic and optical imaging applications.

Aim: The tutorial review aims to summarize recommendations on phantom development for PAI applications to harmonize efforts in standardization and system calibration in the field.

Approach: The International Photoacoustic Standardization Consortium has conducted a consensus exercise to define recommendations for the development of tissue-mimicking phantoms in PAI.

Results: Recommendations on phantom development are summarized in seven defined steps, expanding from (1) general understanding of the imaging modality, definition of (2) relevant terminology and parameters and (3) phantom purposes, recommendation of (4) basic material properties, (5) material characterization methods, and (6) phantom design, to (7) reproducibility efforts.

Conclusions: The tutorial offers a comprehensive framework for the development of tissue-mimicking phantoms in PAI to streamline efforts in system testing and push forward the advancement and translation of the technology.

Journal of Biomedical Optics, 29(8): 080801. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11324153/pdf/
Citations: 0
Convolutional neural network-based regression analysis to predict subnuclear chromatin organization from two-dimensional optical scattering signals.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-08-01 | Epub Date: 2024-08-28 | DOI: 10.1117/1.JBO.29.8.080502
Yazdan Al-Kurdi, Cem Direkoǧlu, Meryem Erbilek, Dizem Arifler
Significance: Azimuth-resolved optical scattering signals obtained from cell nuclei are sensitive to changes in their internal refractive index profile. These two-dimensional signals can therefore offer significant insights into chromatin organization.

Aim: We aim to determine whether two-dimensional scattering signals can be used in an inverse scheme to extract the spatial correlation length ℓc and extent δn of subnuclear refractive index fluctuations, providing quantitative information on chromatin distribution.

Approach: Since an analytical formulation that links azimuth-resolved signals to ℓc and δn is not feasible, we set out to assess the potential of machine learning to predict these parameters via a data-driven approach. We carry out a convolutional neural network (CNN)-based regression analysis on 198 numerically computed signals for nuclear models constructed with ℓc varying in steps of 0.1 μm between 0.4 and 1.0 μm, and δn varying in steps of 0.005 between 0.005 and 0.035. We quantify the performance of our analysis using a five-fold cross-validation technique.

Results: The results show agreement between the true and predicted values for both ℓc and δn, with mean absolute percent errors of 8.5% and 13.5%, respectively. These errors are smaller than the minimum percent increment between successive values of the respective parameters characterizing the constructed models and thus signify extremely good prediction performance over the range of interest.

Conclusions: Our results reveal that CNN-based regression can be a powerful approach for exploiting the information content of two-dimensional optical scattering signals and hence monitoring chromatin organization in a quantitative manner.

Journal of Biomedical Optics, 29(8): 080502. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350520/pdf/
Citations: 0
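The 8.5% and 13.5% figures above are mean absolute percent errors (MAPE) between the true and CNN-predicted parameter values. A minimal sketch of the metric; the ℓc values below are hypothetical predictions for illustration, not the study's outputs:

```python
def mape(true_vals, pred_vals):
    """Mean absolute percent error between true and predicted values."""
    errors = [abs(p - t) / t for t, p in zip(true_vals, pred_vals)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical spatial correlation lengths (micrometers): truth vs. CNN output
lc_true = [0.4, 0.6, 0.8, 1.0]
lc_pred = [0.43, 0.57, 0.85, 0.96]
print(f"MAPE: {mape(lc_true, lc_pred):.1f}%")
```

A MAPE below the step size between successive model parameters (as reported) means the regression can, on average, resolve adjacent values on the sampling grid.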
NerveTracker: a Python-based software toolkit for visualizing and tracking groups of nerve fibers in serial block-face microscopy with ultraviolet surface excitation images.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-07-01 | Epub Date: 2024-06-18 | DOI: 10.1117/1.JBO.29.7.076501
Chaitanya Kolluru, Naomi Joseph, James Seckler, Farzad Fereidouni, Richard Levenson, Andrew Shoffstall, Michael Jenkins, David Wilson
Significance: Information about the spatial organization of fibers within a nerve is crucial to our understanding of nerve anatomy and its response to neuromodulation therapies. A serial block-face microscopy method, three-dimensional microscopy with ultraviolet surface excitation (3D-MUSE), has been developed to image nerves over extended depths ex vivo. To routinely visualize and track nerve fibers in these datasets, a dedicated and customizable software tool is required.

Aim: Our objective was to develop custom software that includes image processing and visualization methods to perform microscopic tractography along the length of a peripheral nerve sample.

Approach: We modified common computer vision algorithms (optic flow and structure tensor) to track groups of peripheral nerve fibers along the length of the nerve. Interactive streamline visualization and manual editing tools are provided. Optionally, deep learning segmentation of fascicles (fiber bundles) can be applied to constrain the tracts from inadvertently crossing into the epineurium. As an example, we performed tractography on vagus and tibial nerve datasets and assessed accuracy by comparing the resulting nerve tracts with segmentations of fascicles as they split and merge with each other in the nerve sample stack.

Results: We found that a normalized Dice overlap (Dice_norm) metric had a mean value above 0.75 across several millimeters along the nerve. We also found that the tractograms were robust to changes in certain image properties (e.g., downsampling in-plane and out-of-plane), which resulted in only a 2% to 9% change in the mean Dice_norm values. In a vagus nerve sample, tractography allowed us to readily identify that subsets of fibers from four distinct fascicles merge into a single fascicle over ~5 mm along the nerve's length.

Conclusions: Overall, we demonstrated the feasibility of performing automated microscopic tractography on 3D-MUSE datasets of peripheral nerves. The software should be applicable to other imaging approaches. The code is available at https://github.com/ckolluru/NerveTracker.

Journal of Biomedical Optics, 29(7): 076501. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11188586/pdf/
Citations: 0
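The Dice overlap used above to compare tracts against fascicle segmentations measures set agreement between two pixel masks. A minimal sketch of a plain Dice coefficient on sets of pixel coordinates; the paper's normalized variant (Dice_norm) may apply an additional normalization not detailed in this listing, and the pixel sets below are hypothetical:

```python
def dice(a, b):
    """Dice coefficient between two sets of pixel coordinates."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical tract-derived and segmentation-derived pixel sets:
tract = {(0, 0), (0, 1), (1, 0), (1, 1)}
segmentation = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(tract, segmentation))  # 3 shared pixels of 4+4 -> 0.75
```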
Optimization of handheld spectrally encoded coherence tomography and reflectometry for point-of-care ophthalmic diagnostic imaging.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-07-01 | Epub Date: 2024-07-24 | DOI: 10.1117/1.JBO.29.7.076006
Jacob J Watson, Rachel Hecht, Yuankai K Tao
Significance: Handheld optical coherence tomography (HH-OCT) systems enable point-of-care ophthalmic imaging in bedridden, uncooperative, and pediatric patients. Handheld spectrally encoded coherence tomography and reflectometry (HH-SECTR) combines OCT and spectrally encoded reflectometry (SER) to address critical clinical challenges in HH-OCT imaging, with real-time en face retinal aiming for OCT volume alignment and volumetric correction of motion artifacts that occur during HH-OCT imaging.

Aim: We aim to enable robust clinical translation of HH-SECTR and improve clinical ergonomics during point-of-care OCT imaging for ophthalmic diagnostics.

Approach: HH-SECTR is redesigned with (1) optimized SER optical imaging for en face retinal aiming and retinal tracking for motion correction, (2) a modular aluminum form factor for sustained alignment and probe stability in longitudinal clinical studies, and (3) one-handed, photographer-ergonomic motorized focus adjustment.

Results: We demonstrate an HH-SECTR imaging probe with micron-scale optical-optomechanical stability and use it for in vivo human retinal imaging and volumetric motion correction.

Conclusions: This research will benefit the clinical translation of HH-SECTR for point-of-care ophthalmic diagnostics.

Journal of Biomedical Optics, 29(7): 076006. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267400/pdf/
Citations: 0
Non-contact elasticity contrast imaging using photon counting.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-07-01 | Epub Date: 2024-07-10 | DOI: 10.1117/1.JBO.29.7.076003
Zipei Zheng, Yong Meng Sua, Shenyu Zhu, Patrick Rehain, Yu-Ping Huang
Significance: Tissues' biomechanical properties, such as elasticity, are related to tissue health. Optical coherence elastography produces images of tissues based on their elasticity, but its performance is constrained by the laser power used, working distance, and excitation methods.

Aim: We develop a new method to reconstruct elasticity contrast images over a long working distance, with only low-intensity illumination and non-contact acoustic wave excitation.

Approach: We combine single-photon vibrometry and quantum parametric mode sorting (QPMS) to measure oscillating backscattered signals at the single-photon level and derive the phantoms' relative elasticity.

Results: We test our system on tissue-mimicking phantoms consisting of contrast sections with different concentrations and thus stiffness. Our results show that, as the driving acoustic frequency is swept, the phantoms' vibrational responses are mapped onto photon-counting histograms from which their mechanical properties, including elasticity, can be derived. Through lateral and longitudinal laser scanning at a fixed frequency, a contrast image based on the samples' elasticity can be reliably reconstructed from photon-level signals.

Conclusions: We demonstrated the reliability of QPMS-based elasticity contrast imaging of agar phantoms at a long working distance in a low-intensity environment. This technique has the potential for in-depth imaging of real biological tissue and provides a new approach to elastography research and applications.

Journal of Biomedical Optics, 29(7): 076003. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11234449/pdf/
Citations: 0
Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-07-01 | Epub Date: 2024-06-18 | DOI: 10.1117/1.JBO.29.7.076001
Behrouz Ebrahimi, David Le, Mansour Abtahi, Albert K Dadzie, Alfa Rossi, Mojtaba Rahimi, Taeyoon Son, Susan Ostmo, J Peter Campbell, R V Paul Chan, Xincheng Yao
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capabilities.

Aim: This study aims to assess spectral effectiveness in color fundus photography for the deep learning classification of ROP.

Approach: A convolutional neural network end-to-end classifier was utilized for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance with individual-color-channel inputs (red, green, and blue) and multi-color-channel fusion architectures (early-fusion, intermediate-fusion, and late-fusion) was quantitatively compared.

Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures showed almost the same performance as the green/red channel input, and both outperformed the late-fusion architecture.

Conclusions: This study reveals that the classification of ROP stages can be effectively achieved using either the green or red image alone. This finding enables the exclusion of blue images, acknowledged for their increased susceptibility to light toxicity.

Journal of Biomedical Optics, 29(7): 076001. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11188587/pdf/
Citations: 0
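The accuracy, sensitivity, and specificity figures above follow the standard confusion-matrix definitions. A minimal sketch with hypothetical counts (not the study's confusion matrices):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity from binary confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate: ROP cases caught
    specificity = tn / (tn + fp)  # true-negative rate: normals cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for a ROP-vs-normal decision on 200 images:
acc, sens, spec = metrics(tp=76, tn=92, fp=8, fn=24)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```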
Ultrasound and diffuse optical tomography-transformer model for assessing pathological complete response to neoadjuvant chemotherapy in breast cancer.
IF 3.0 | CAS Tier 3 | Medicine
Journal of Biomedical Optics | Pub Date: 2024-07-01 | Epub Date: 2024-07-24 | DOI: 10.1117/1.JBO.29.7.076007
Yun Zou, Minghao Xue, Md Iqbal Hossain, Quing Zhu
Significance: We evaluate the efficiency of integrating ultrasound (US) and diffuse optical tomography (DOT) images for predicting pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) in breast cancer patients. The ultrasound-diffuse optical tomography (USDOT)-Transformer model represents a significant step toward accurate prediction of pCR, which is critical for personalized treatment planning.

Aim: We aim to develop and assess the performance of the USDOT-Transformer model, which combines US and DOT images with tumor receptor biomarkers to predict pCR in breast cancer patients under NAC.

Approach: We developed the USDOT-Transformer model using a dual-input transformer to process co-registered US and DOT images along with tumor receptor biomarkers. Our dataset comprised imaging data from 60 patients at multiple time points during their chemotherapy treatment. We used fivefold cross-validation to assess the model's performance, comparing its results against a single modality of US or DOT.

Results: The USDOT-Transformer model demonstrated excellent predictive performance, with a mean area under the receiver operating characteristic curve of 0.96 (95% CI: 0.93 to 0.99) across the fivefold cross-validation. The integration of US and DOT images significantly enhanced the model's ability to predict pCR, outperforming models that relied on a single imaging modality (0.87 for US and 0.82 for DOT). This performance indicates the potential of advanced deep learning techniques and multimodal imaging data for improving the accuracy of pCR prediction.

Conclusion: The USDOT-Transformer model offers a promising non-invasive approach for predicting pCR to NAC in breast cancer patients. By leveraging the structural and functional information from US and DOT images, the model offers a faster and more reliable tool for personalized treatment planning. Future work will focus on expanding the dataset and refining the model to further improve its accuracy and generalizability.

Journal of Biomedical Optics, 29(7): 076007. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11268382/pdf/
Citations: 0
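The area under the ROC curve (AUC) reported above equals the probability that a randomly chosen pCR patient receives a higher predicted score than a randomly chosen non-pCR patient (the Mann-Whitney statistic). A minimal sketch of that rank-based computation; the score lists below are hypothetical model outputs, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted pCR probabilities:
pcr = [0.91, 0.84, 0.77, 0.95]      # patients with pathological complete response
non_pcr = [0.35, 0.52, 0.61, 0.80]  # patients without
print(auc(pcr, non_pcr))  # 15 of 16 pairs ranked correctly -> 0.9375
```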