{"title":"Evaluating tissue mechanical properties using Mueller matrix polarimetry","authors":"Jiahao Fan, Honghui He, Hui Ma","doi":"10.1117/12.2691003","DOIUrl":"https://doi.org/10.1117/12.2691003","url":null,"abstract":"Evaluating tissue mechanical properties is an important issue in the biomedical field. While traditional in vitro tissue deformation experiments have been used to measure mechanical properties, optical methods are becoming increasingly popular due to their non-invasive and non-contact advantages. In this study, we utilized Mueller matrix polarimetry to quantify the mechanical properties of bovine tendon tissue. We acquired 3×3 Mueller matrix images of the tendon tissue samples under various stretching states using a backscattering measurement setup based on a polarization camera, enabling us to examine changes in both structural information and optical properties. Subsequently, we extracted frequency distribution histograms of Mueller matrix elements to elucidate the structural changes in the tendon tissue during the stretching process. We then calculated the Mueller matrix transformation parameters, namely the total anisotropy t1 and anisotropy direction α1 of the tendon tissue samples under different stretching processes, to characterize their structural changes quantitatively. For better discrimination of tendon tissues under different stretching states, we trained an image classification neural network using the derived MMT parameters as input. Ultimately, we obtained a highly accurate model with 90% precision. 
The results demonstrate the potential of Mueller matrix polarimetry as a tool for evaluating tissue mechanical properties.","PeriodicalId":164997,"journal":{"name":"Conference on Biomedical Photonics and Cross-Fusion","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122713728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multifunctional optical tomography system combining surface extraction and 3D fluorescence reconstruction","authors":"Yanan Wu, Shengyu Gao, Linlin Li, Jianru Zhang, Qian Hu, Xin Lou, Xinjun Zhu, Jiahua Jiang, Wuwei Ren","doi":"10.1117/12.2692026","DOIUrl":"https://doi.org/10.1117/12.2692026","url":null,"abstract":"Macroscopic-level diffuse optical imaging has been widely used in small animal imaging for preclinical research. Due to severe light scattering, 3D reconstruction in diffuse optics is highly ill-posed and sensitive to small noise in measurement. Bringing prior information such as the inner structural or surface information of the imaging object may largely reduce the ill-posed nature of the inverse problem and improve the reconstruction accuracy. Most existing solutions use additional equipment or multimodal techniques (e.g., CT, MRI, etc.). However, these methods pose new challenges such as increased cost and image alignment between different modalities. Herein, we present a novel compact optical tomography system that enables surface extraction using a single programmable scanning module and pinhole modeling. Experiments on phantom and mice show that the system is capable of achieving high-fidelity surface extraction with a minimal error of less than 0.1 mm, which in turn improves the accuracy of 3D fluorescence reconstruction","PeriodicalId":164997,"journal":{"name":"Conference on Biomedical Photonics and Cross-Fusion","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121492010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Study on the influencing factors of structured light digital holography","authors":"Longyun Zhu, Y. Xu, Jialing Huang, Yining Wang, Zhicong Li, Ya-Wei Wang","doi":"10.1117/12.2691140","DOIUrl":"https://doi.org/10.1117/12.2691140","url":null,"abstract":"Digital holographic microscopy (DHM) is an efficient optical measurement and imaging technology with the advantages of non- invasion, non-damage, high sensitivity and high resolution. However, its resolution is still limited by the diffraction limit of the system. The structured light illumination microscopy (SIM) is a good super resolution imaging technology, which obtains high frequency information of objects by changing the lighting mode to achieve improved imaging resolution. In order to discuss the factors that affect structured light phase imaging, we first simulated the entire imaging process of a structured light digital holography system using Matlab software, and then systematically analyzed the effects of four factors, structured light spatial frequency, loading direction, modulation level, and noise level, on the imaging situation, and reached corresponding conclusions. The above research results can provide a reference for the study of structured light phase imaging methods.","PeriodicalId":164997,"journal":{"name":"Conference on Biomedical Photonics and Cross-Fusion","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132206909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-functional accessory toolkit for Miniscope prototyping and image enhancement","authors":"Xinyi Zhu, Liang Gu, Ruiping Li, Liang Chen, Jingying Chen, Ning Zhou, Wuwei Ren","doi":"10.1117/12.2691914","DOIUrl":"https://doi.org/10.1117/12.2691914","url":null,"abstract":"During the last decade, Miniaturized microscopy, or Miniscope, has gained popularity in neuroscience, particularly for behavioral studies in awake rodents. However, image quality control and standardization remain challenging for both users and developers. To address these challenges, we present MiniMounter, a cost-effective and multi-functional accessory toolkit that includes a hardware platform with customized grippers and four-degree-of-freedom adjustment for Miniscope, as well as software for displacement control and image quality evaluation. Our toolkit enables auto-focusing and accurate measurement of spatial resolution and field of view (FOV). We have demonstrated the effectiveness of such a toolkit through comprehensive phantom and animal experiments.","PeriodicalId":164997,"journal":{"name":"Conference on Biomedical Photonics and Cross-Fusion","volume":"338 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123339202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iterative-in-iterative super-resolution biomedical imaging using one real image","authors":"Yuanzheng Ma, Xinyue Wang, Benqi Zhao, Ying Xiao, Shijie Deng, Jian Song, Xun Guan","doi":"10.1117/12.2691281","DOIUrl":"https://doi.org/10.1117/12.2691281","url":null,"abstract":"Deep learning-based super-resolution models have the potential to revolutionize biomedical imaging and diagnoses by effectively tackling various challenges associated with early detection, personalized medicine, and clinical automation. However, the requirement of an extensive collection of high-resolution images presents limitations for widespread adoption in clinical practice. In our experiment, we proposed an approach to effectively train the deep learning-based super-resolution models using only one real image by leveraging self-generated high resolution images. We employed a mixed metric of image screening to automatically select images with a distribution similar to ground truth, creating an incrementally curated training data set that encourages the model to generate improved images over time. After five training iterations, the proposed deep learning-based super-resolution model experienced a 7.5% and 5.49% improvement in structural similarity and peak-signal-to-noise ratio, respectively. Significantly, the model consistently produces visually enhanced results for training, improving its performance while preserving the characteristics of original biomedical images. 
These findings indicate a potential way to train a deep neural network in a self-revolution manner independent of real-world human data.","PeriodicalId":164997,"journal":{"name":"Conference on Biomedical Photonics and Cross-Fusion","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115530716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}