Biomedical Optics Express: Latest Articles

Lightweight CycleGAN models for cross-modality image transformation and experimental quality assessment in fluorescence microscopy.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-18, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.578297
Mohammad Soltaninezhad, Yashar Rouzbahani, Jhonatan Contreras, Francisco Paez Larios, Paul M Jordan, Oliver Werz, Rohan Chippalkatti, Daniel Kwaku Abankwa, Christian Eggeling, Thomas Bocklitz
Abstract: With the growing integration of artificial intelligence in scientific and medical applications, lightweight deep learning models have become increasingly important. These models offer substantial reductions in memory usage and computational time. Given that GPU-based model training and inference contribute significantly to carbon emissions, lightweight architectures with performance comparable to parameter-rich models present a more environmentally friendly alternative. Specifically, we build upon CycleGAN with a fixed-channel lightweight U-Net generator for modality transfer from standard confocal to super-resolution STED and deconvolved STED images, and systematically compare it against Pix2Pix and standard CycleGAN baselines. Obtaining paired datasets in medical imaging and super-resolution microscopy is often infeasible due to the need for additional experiments and the intrinsic complexity of biological sample preparation. To address this, we investigate the performance of lightweight CycleGAN models, demonstrating their ability to achieve high-fidelity modality transfer despite reduced model complexity. We introduce a fixed-channel strategy within the U-Net-based generator, in contrast to the traditional channel-doubling approach. This modification significantly reduces the number of trainable parameters from 41.8 million to approximately 9 thousand, while achieving comparable or slightly improved performance. We also explore the utility of GAN models as a qualitative marker for assessing experimental and labeling quality. When trained on high-quality microscopy images, the GAN implicitly learns the characteristics of optimal imaging. Deviations between GAN-generated outputs trained on high-quality data and low-quality experimental images can highlight potential issues such as photobleaching, experimental artifacts, or inaccurate labeling. In this way, the model can support qualitative assessment of experimental consistency and image fidelity in fluorescence microscopy workflows.
Biomedical optics express 17(3): 1476-1498. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064623/pdf/
Citations: 0
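The parameter savings cited in the abstract (41.8 million down to roughly 9 thousand) follow directly from replacing channel doubling with a small fixed channel width. A back-of-the-envelope sketch in Python using standard 3×3 convolution parameter counts; the channel schedules below are illustrative, not the paper's exact architecture:

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k 2D convolution."""
    return c_in * c_out * k * k + c_out

def double_conv_stack(channels, c_in=1):
    """Total parameters of the two-conv blocks along a U-Net encoder path."""
    total = 0
    for c in channels:
        total += conv_params(c_in, c) + conv_params(c, c)
        c_in = c
    return total

# Classic channel-doubling encoder vs. a fixed-width encoder (illustrative widths)
doubling = double_conv_stack([64, 128, 256, 512, 1024])  # 18842048 parameters
fixed = double_conv_stack([8, 8, 8, 8, 8])               # 5336 parameters
```

With a matching decoder roughly doubling the encoder count, these illustrative widths land in the same orders of magnitude as the paper's figures, which is the point of the fixed-channel strategy: channel doubling makes the deepest layers dominate the parameter budget quadratically.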
Single wavelength, dual temporal offset module for AOSLO systems.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-17, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.579262
Marcelina Sobczak, Stephen Burns
Abstract: We designed a generalizable module enabling dual temporal and spatial offset imaging. Two optical channels are generated from a single input. At a single focus, these can be used for blood flow calculations and template-free imaging. The system is supported by custom APD detector systems. Each of the optical channels is imaged onto five detectors through a 1-to-5 fiber bundle to acquire images from one confocal and four offset apertures. The module can be added to existing AOSLO systems at pupil conjugate planes. Examples of customizing the module for the Indiana AOSLO and APAEROS systems are presented.
Biomedical optics express 17(3): 1393-1408. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064607/pdf/
Citations: 0
POF cardiorespiratory sensor combined with a thermal imaging temperature sensor for multimodal emotion recognition.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-17, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.586279
Yingshuo Bao, Shaobo Yan, Jiaqi Huang, Zhuo Wang, Kun Xiao, Rui Min
Abstract: Accurate emotion recognition is essential for driver-state assessment and mental-health monitoring. Expression-based methods are often unreliable across individuals, and attention has increasingly turned to physiological signals. However, single-modality sensing, despite its precision, cannot provide a comprehensive representation of emotional states. To address this limitation, a multimodal approach was proposed that combines a polymer optical fiber (POF) cardiorespiratory sensor with a thermal image temperature sensor. The POF cardiorespiratory sensor monitors thoracic expansion to derive cardiorespiratory signals, while the thermal image temperature sensor provides high-sensitivity infrared measurements of facial temperature. Under a video-based emotion-induction protocol, effective features were extracted from cardiorespiratory signals and facial thermal time series. These features were fused into a 42-dimensional vector to represent the physiological patterns during emotion fluctuations. Feature-level fusion was evaluated using support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF) classifiers within a nested cross-validation framework to obtain unbiased generalization estimates. Compared with single-modality baselines, multimodal fusion reduced classification error and achieved a peak accuracy of 93% (SVM) under feature selection. These results indicate that integrating portable POF cardiorespiratory sensing with thermal imaging offers a robust and generalizable approach to emotion recognition.
Biomedical optics express 17(3): 1377-1392. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064605/pdf/
Citations: 0
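The nested cross-validation used here to obtain unbiased generalization estimates separates hyperparameter selection (inner loop) from performance estimation (outer loop), so no hyperparameter is ever chosen on data it is scored against. A self-contained stdlib sketch with a toy threshold classifier; the data, parameter grid, and `fit` callable are illustrative, not the paper's SVM/KNN/RF pipeline:

```python
import random

def kfold(n, k, seed=0):
    """Yield (train, test) index lists for k shuffled folds over n samples."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def nested_cv(X, y, param_grid, fit, outer_k=5, inner_k=3):
    """Inner CV picks a hyperparameter per outer fold; outer folds score it."""
    outer_scores = []
    for train, test in kfold(len(y), outer_k):
        best_param, best_score = None, -1.0
        for p in param_grid:
            accs = []
            for itr, ite in kfold(len(train), inner_k, seed=1):
                model = fit([X[train[i]] for i in itr],
                            [y[train[i]] for i in itr], p)
                accs.append(sum(model(X[train[i]]) == y[train[i]]
                                for i in ite) / len(ite))
            score = sum(accs) / len(accs)
            if score > best_score:
                best_param, best_score = p, score
        model = fit([X[i] for i in train], [y[i] for i in train], best_param)
        outer_scores.append(sum(model(X[i]) == y[i] for i in test) / len(test))
    return sum(outer_scores) / len(outer_scores)

# Toy use: 1D feature, label = sign; inner CV selects the decision threshold
data = [(-1) ** i * (0.1 + 0.01 * i) for i in range(60)]
labels = [x > 0 for x in data]
fit = lambda X, y, thr: (lambda x: x > thr)
acc = nested_cv(data, labels, [0.0, -0.5, 0.5], fit)
```

The fused 42-dimensional feature vector in the paper would simply replace the scalar `x` here; the loop structure is unchanged.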
Combined autofluorescence and diffuse reflectance spectroscopy for rapid metabolic and vascular characterizations of orthotopic tongue tumors in vivo.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-13, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.589203
Pranto Soumik Saha, Jing Yan, Sumit Sarker, Md Zahid Hasan, Caigang Zhu
Abstract: Precise label-free quantification of tissue metabolic and vascular dynamics in vivo represents a critical challenge for cancer therapy prediction and longitudinal treatment assessment. In this study, we demonstrated a portable autofluorescence and diffuse reflectance spectroscopy device along with novel spectroscopic algorithms to quantify tissue vascular and metabolic parameters of orthotopic head and neck cancer models in vivo. Tissue-mimicking phantom studies were used to verify the dual-modal optical spectroscopy and easy-to-use spectroscopic algorithms for rapid and accurate estimation of tissue oxygen saturation, total hemoglobin contents, and intrinsic optical redox ratio. Animal studies were conducted to demonstrate the feasibility of our technique for rapid functional characterization of small tongue tumors in vivo. Our phantom studies demonstrated that our dual-modal optical spectroscopy, along with novel spectroscopic algorithms, can accurately quantify tissue vascular and metabolic parameters in near real-time. Our in vivo animal studies captured reduced total hemoglobin contents and lower oxygen saturation in orthotopic tongue tumors compared to normal tongue tissues. Our data also showed that mouse tongue tumors with different radiation sensitivities had significantly different intrinsic optical redox ratios. Additionally, we observed elevated Protoporphyrin IX levels in tongue tumors compared with normal tongue tissues. These results demonstrated the potential of our portable dual-modal optical spectroscopy to noninvasively evaluate tumor metabolism and its vascular microenvironment in tongue cancer models for future oral cancer research.
Biomedical optics express 17(3): 1359-1376. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064616/pdf/
Citations: 0
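Once the spectroscopic fits have extracted chromophore contributions, the three reported quantities have simple standard definitions: total hemoglobin is the sum of oxy- and deoxyhemoglobin, oxygen saturation is their ratio, and the optical redox ratio is conventionally FAD / (FAD + NADH). A minimal sketch of those textbook definitions; the paper's fitting algorithms are not reproduced, and the inputs are assumed to be already-fitted concentrations/intensities:

```python
def tissue_parameters(hbo2, hb, fad, nadh):
    """Standard vascular/metabolic summary values from fitted contributions.

    hbo2, hb: oxy-/deoxyhemoglobin concentrations (same units)
    fad, nadh: fitted autofluorescence intensities of the two cofactors
    """
    total_hb = hbo2 + hb                # total hemoglobin content
    sto2 = hbo2 / total_hb              # tissue oxygen saturation
    redox_ratio = fad / (fad + nadh)    # optical redox ratio
    return total_hb, sto2, redox_ratio

# Example: hypoxic tumor-like values give a lower StO2 than healthy tissue
params = tissue_parameters(40.0, 60.0, 1.0, 3.0)  # (100.0, 0.4, 0.25)
```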
Processing pipeline for large optical coherence elastography datasets with quasi-static air-jet excitation: application to human brain tumor tissue.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-12, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.584263
Nicolas Detrez, Sazgar Burhan, Jessica Kren, Jakob Matschke, Christian Hagel, Steffen Buschschlüter, Dirk Theisen-Kunde, Matteo Mario Bonsanto, Robert Huber, Ralf Brinkmann
Abstract: Optical coherence elastography (OCE) is a powerful imaging modality for assessing the mechanical properties of biological tissues. We employed an OCE system based on an Optores OMES 3.2 MHz OCT platform combined with an in-house developed air-jet excitation source to characterize healthy and tumorous (meningioma) human brain tissue. This paper presents a comprehensive software framework for processing large OCE datasets, enabling robust extraction of characteristic features from phase-derived displacement data and calculation of mechanical proxy parameters for detailed tissue characterization. Feature detection is achieved using a modified triangle threshold algorithm applied to the displacement curves from the OCE phase data. Extensive pre- and post-processing steps, including percentile-based filtering and adaptive histogram equalization, are applied to mitigate phase unwrapping errors and enhance visualization of the high dynamic range of OCE data. Exemplary measurements on human brain tumor samples demonstrate the framework's ability to differentiate between tissue types, highlighting its potential for future clinical and research applications.
Biomedical optics express 17(3): 1335-1358. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064600/pdf/
Citations: 0
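The "modified triangle threshold" mentioned above is a variant of Zack's triangle method: draw a line from the histogram peak to the far end of its longer tail and place the threshold at the bin farthest from that line. A plain-Python sketch of the unmodified textbook algorithm (the authors' specific modification for displacement curves is not described here):

```python
import math

def triangle_threshold(hist):
    """Zack's triangle threshold on a 1D histogram (unimodal, long tail)."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    nonzero = [i for i, v in enumerate(hist) if v > 0]
    lo, hi = nonzero[0], nonzero[-1]
    end = hi if hi - peak >= peak - lo else lo          # longer-tail side
    # Line from (peak, hist[peak]) to (end, hist[end]); threshold at the bin
    # with maximum perpendicular distance to it.
    x1, y1, x2, y2 = peak, hist[peak], end, hist[end]
    denom = math.hypot(y2 - y1, x2 - x1)
    best, best_d = peak, -1.0
    for i in range(min(x1, x2), max(x1, x2) + 1):
        d = abs((y2 - y1) * i - (x2 - x1) * hist[i] + x2 * y1 - y2 * x1) / denom
        if d > best_d:
            best, best_d = i, d
    return best
```

The method needs no parameters, which is why it suits batch processing of large datasets where per-curve tuning is impractical.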
Bessel beam side lobe suppression via non-degenerate two-photon excitation.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-11, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.579250
Stephen Tucker, Ezra Guralnik, Shy Shoham
Abstract: Bessel beams are commonly used in two-photon microscopy to extend the depth of field and thereby achieve functional volumetric imaging of the living brain. In practice, this approach suffers from background signals and limited lateral resolution due to the Bessel beam's strong side lobes. We introduce and demonstrate a new approach to side lobe suppression based on non-degenerate two-photon excitation, in which dual wavelength illumination produces an imaging point-spread function that is the product of the two coaxial Bessel beams. This technique can reduce the main side lobe intensity of a Bessel beam by 50% or more. We illustrate the approach conceptually with an analytical paraxial model and use detailed physical simulation to show that the approach is effective in the presence of the symmetry-breaking aberrations that amplify side lobes in high NA systems. We experimentally demonstrate the technique using a refractive axicon and the pump and tunable beams of a femtosecond laser. This work establishes non-degenerate two-photon excitation as a practical and broadly applicable strategy for improving point-spread-function quality in high-resolution volumetric microscopy.
Biomedical optics express 17(3): 1310-1318. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064602/pdf/
Citations: 0
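The suppression mechanism can be checked numerically: a Bessel beam's radial intensity goes as J0(k_r·r)², degenerate two-photon signal scales as its square, and non-degenerate excitation gives the product of two intensity profiles whose side lobes fall at different radii. A rough 1D stdlib check using the integral representation of J0; the 0.8 radial-frequency ratio is an arbitrary illustrative choice, not the paper's wavelength pair:

```python
import math

def j0(x, n=2000):
    """Bessel J0 via midpoint rule on J0(x) = (1/pi) * int_0^pi cos(x sin t) dt."""
    return sum(math.cos(x * math.sin(math.pi * (k + 0.5) / n))
               for k in range(n)) / n

def deg_signal(x):
    """Degenerate two-photon signal of one Bessel beam, x = k_r * r."""
    return j0(x) ** 4

def nondeg_signal(x, ratio=0.8):
    """Non-degenerate signal: product of two coaxial Bessel intensities."""
    return j0(x) ** 2 * j0(ratio * x) ** 2

# Scan the first side-lobe region (between the first two zeros of J0)
xs = [2.5 + 0.01 * i for i in range(300)]
deg_lobe = max(deg_signal(x) for x in xs)
nd_lobe = max(nondeg_signal(x) for x in xs)   # lower than deg_lobe
```

Because the second beam's side-lobe maxima sit at shifted radii, the product is suppressed wherever the two lobes fail to overlap; a larger wavelength separation shifts them further and suppresses more.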
Exploring the 3D architecture of brain tissue using digital holographic microscopy.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-11, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.578659
Dennis Scheidt, Alejandro V Arzola, Luisa Del Carmen García, Claudio Narciso Rámirez, Katrin Amunts, Markus Axer
Abstract: To understand the complexity of the brain, it is necessary to study its microscopic neuronal architecture and densely packed nerve fibre networks. Techniques based on histological sectioning and staining are often used for this purpose. However, they can obscure or destroy valuable information and often require extensive computational post-processing to analyze histological images. Digital holographic microscopy (DHM) enables phase and volumetric imaging and is a promising alternative for imaging transparent biological samples with minimal preparation and high resolution. This study introduces DHM to image the amplitude and phase of rat brain tissue using the double-sideband (DSB) filtering technique, while reducing phase artifacts through the incorporation of unfiltered holograms into the reconstruction formalism. Combining the reconstructed complex-valued hologram with digital processing and digitally synthesized dark-field and phase contrast filtering - including the computational evaluation of light propagation and autofocusing criteria - enhances two-dimensional structural visualisation and reveals volumetric features. This approach successfully resolves the three-dimensional arrangement of crossing fibre bundles from a single acquired hologram through indirect, depth-resolved localization, which is challenging in many imaging applications. Finally, the technique is shown to be scalable, enabling full brain section scanning while supporting a compact, intrinsically multimodal imaging setup.
Biomedical optics express 17(3): 1293-1309. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13068420/pdf/
Citations: 0
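Depth-resolved refocusing from a single hologram rests on numerically propagating the reconstructed complex field, typically with the angular spectrum method: Fourier-transform the field, multiply each spatial frequency by its propagation phase, and transform back. A 1D stdlib sketch with a naive O(N²) DFT; parameters are illustrative and the paper's DSB reconstruction formalism is not reproduced:

```python
import cmath
import math

def dft(u, sign=-1):
    """Naive O(N^2) discrete Fourier transform; sign=+1 is the inverse."""
    n = len(u)
    out = [sum(u[m] * cmath.exp(sign * 2j * math.pi * k * m / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if sign == +1 else out

def angular_spectrum(u0, dx, wavelength, z):
    """Propagate a 1D complex field u0 sampled at pitch dx over distance z."""
    n = len(u0)
    k = 2 * math.pi / wavelength
    spec = dft(u0, sign=-1)
    for i in range(n):
        fx = (i if i <= n // 2 else i - n) / (n * dx)  # cycles per unit length
        kx = 2 * math.pi * fx
        # Propagating components get a pure phase; evanescent ones decay
        spec[i] *= cmath.exp(1j * z * cmath.sqrt(k * k - kx * kx))
    return dft(spec, sign=+1)

# Example: a narrow Gaussian field refocused 20 µm away at 500 nm
dx, lam = 1.0e-6, 0.5e-6
u0 = [cmath.exp(-(((i - 16) * dx) ** 2) / (2 * (2 * dx) ** 2)) for i in range(32)]
u1 = angular_spectrum(u0, dx, lam, 20e-6)
```

Since every spatial frequency here is below the evanescent cutoff, the transfer function is phase-only and the propagated field conserves energy, which is a convenient sanity check for any refocusing implementation.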
Automated lipid detection in spectroscopic optical coherence tomography using a weakly supervised deep learning network.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-11, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.585222
Jin Hwan Hwang, Woojin Lee, Jin Hyuk Kim, Ryeong Hyun Kim, Dong Oh Kang, Jin Won Kim, Hongki Yoo, Hyeong Soo Nam
Abstract: Reliable identification of lipid distribution is critical for assessing coronary vulnerability, yet conventional optical coherence tomography (OCT) lacks compositional specificity. Expanding OCT into the spectral domain through spectroscopic OCT (S-OCT), in combination with deep learning, enables automated, composition-aware tissue characterization without requiring hardware modification. This study aims to develop a weakly supervised deep learning framework for lipid detection and localization from S-OCT data, minimizing the need for dense manual annotation. A ResNet-34 network incorporating convolutional block attention modules (CBAMs) was trained using frame-level binary labels of lipid presence. Gradient-weighted class activation mapping (Grad-CAM) was applied to generate interpretable activation maps highlighting lipid-associated regions. Model predictions were validated against Oil Red O-stained histology of rabbit aortas. The proposed model accurately localized lipid regions with strong spatial correspondence to histology, achieving an arc-level overlap agreement of 83.9%. Comparative analyses confirmed that incorporating spectroscopic information significantly improved lipid detection over conventional OCT. This framework demonstrates the feasibility of spectroscopically enhanced, weakly supervised deep learning for automated lipid detection in intravascular imaging. By enabling efficient lipid screening and spatial interpretation, it establishes a scalable foundation for downstream assessment of lipid burden and clinically relevant plaque characterization, with potential utility for automated risk stratification.
Biomedical optics express 17(3): 1279-1292. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064599/pdf/
Citations: 0
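Grad-CAM, which turns the frame-level labels above into spatial lipid maps, weights each convolutional feature-map channel by its spatially averaged gradient and keeps the positive part of the weighted sum. A framework-free sketch of that definition on plain nested lists; the shapes and toy values are illustrative, not the paper's ResNet-34/CBAM pipeline:

```python
def grad_cam(activations, gradients):
    """Grad-CAM map from one conv layer.

    activations, gradients: [K][H][W] nested lists (channel, row, col), where
    gradients holds d(class score)/d(activation) at the same layer.
    """
    K, H, W = len(activations), len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of the gradients
    weights = [sum(sum(row) for row in gradients[c]) / (H * W) for c in range(K)]
    # ReLU of the weighted combination of activation maps
    return [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(K)))
             for j in range(W)] for i in range(H)]

# Toy example: channel 0 fires on the left half with positive gradient,
# channel 1 fires on the right half but has negative gradient
act = [[[1.0, 1.0, 0.0, 0.0]] * 2, [[0.0, 0.0, 1.0, 1.0]] * 2]
grad = [[[0.5] * 4] * 2, [[-0.5] * 4] * 2]
cam = grad_cam(act, grad)  # highlights only the left half
```

Because only the classifier's gradients are needed, this localization comes for free from a network trained with weak (frame-level) labels, which is exactly what makes it attractive when dense annotation is impractical.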
CC-DenseSTORM: deep learning enables colorimetry camera-based simultaneous two-color single-molecule localization microscopy with dense emitters.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-11, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.587452
Yaolong Li, Weibing Kuang, Zhengxia Wang, Yingjun Zhang, Zhen-Li Huang
Abstract: Colorimetry camera-based single-molecule localization microscopy (CC-STORM) employs a simple optical setup to facilitate the simultaneous imaging of two or more targets at the nanoscale, but it suffers from a high data rejection rate. A recently reported deep learning-based algorithm (called CC-DeepSTORM) reduced the data rejection rate of two-color CC-STORM from 70% to 40%, while achieving crosstalk of 1%. However, when applied to regions with dense emitters, this algorithm faces challenges with structural artifacts and low detection rates. Here, we propose CC-DenseSTORM, featuring an attention-gated standard-convolution U-Net to eliminate structural artifacts and a dual-channel adaptive classification network for robust dye classification. Simulations demonstrate that, even at a high density of 5 emitters/µm², CC-DenseSTORM improves the detection rate by 2-fold compared to CC-DeepSTORM, while maintaining the data rejection rate below 30%. In experimental imaging of multiple myeloma cells, CC-DenseSTORM achieves <1% crosstalk (matching the state of the art), thus enabling simultaneous quantification of the densities of CD38 and BCMA, offering great potential for advancing dual-target immunotherapy.
Biomedical optics express 17(3): 1319-1334. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064594/pdf/
Citations: 0
Fluorescence lifetime imaging of 5-ALA-induced PpIX and autofluorescence for detecting infiltrating glioblastoma margins in patients.
IF 3.2 · Zone 2 · Medicine
Biomedical optics express, Pub Date: 2026-02-10, eCollection Date: 2026-03-01, DOI: 10.1364/BOE.581578
Alexandra C Adams, Alba Alfonso Garcia, Silvia Noble Anbunesan, Lisanne Kraft, Julien Bec, Han Sung Lee, Orin Bloch, Laura Marcu
Abstract: 5-aminolevulinic acid (5-ALA) is routinely used in neurosurgery to enhance visualization of glioblastoma multiforme (GBM) and enable more extensive tumor resection. However, intraoperative guidance based solely on 5-ALA-induced protoporphyrin IX (PpIX) fluorescence remains limited by qualitative interpretation, low sensitivity, and insufficient specificity for detecting tumor infiltration at surgical margins - regions critical for minimizing recurrence. Recent studies suggest that brain tissue autofluorescence (AF), which reflects biochemical composition and metabolic state, may provide complementary diagnostic information to PpIX that could improve margin detection. This study investigates whether fluorescence lifetime (FLT) measurements of PpIX, supplemented by FLT of brain tissue autofluorescence, improve detection of tumor infiltration at the resection margin in patients in vivo. Using a mesoscopic fiber-based multispectral fluorescence lifetime imaging (FLIm) device with 355 nm excitation, we simultaneously acquired time-resolved emission from AF channels (390/40 nm and 470/28 nm) and PpIX fluorescence (629/52 nm) in 15 patients with IDH-wild-type GBM, evaluating a total of 86 surgical margins during craniotomy procedures. Logistic regression analysis applied to all available FLIm-derived parameters identified FLT at 390 nm and FLT at 629 nm (PpIX) as the optimal features for tumor detection, achieving an area under the curve (AUC) of 0.85. These results demonstrate that FLT of combined brain AF and PpIX significantly improves sensitivity over conventional visual assessment, highlighting its potential as a quantitative approach to enhance surgical precision and patient outcomes.
Biomedical optics express 17(3): 1267-1278. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13064626/pdf/
Citations: 0