{"title":"Image Texture Based Classification Methods to Mimic Perceptual Models of Search and Localization in Medical Images.","authors":"Diego Andrade, Howard C Gifford, Mini Das","doi":"10.1117/12.3008844","DOIUrl":"10.1117/12.3008844","url":null,"abstract":"<p><p>This study explores the validity of texture-based classification in the early stages of visual search/classification. Initially, we summarize our group's prior findings regarding the prediction of signal detection difficulty based on second-order statistical image texture features in tomographic breast images. Alongside the development of visual search model observers to accurately mimic search and localization in medical images, we continue examining the efficacy of texture-based classification/segmentation methods. We consider both first and second-order features through a combination of texture maps and Gaussian mixture model (GMM). Our aim is to evaluate the advantages of integrating these methods at the early stages of the visual search process, particularly in scenarios where target morphological features may be less apparent or known, as in clinical data. 
By merging knowledge of imaging physics with texture-based GMMs, we enhance classification efficiency and refine the localization of suspected target regions.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143756351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
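The texture-map-plus-GMM pipeline this abstract describes can be illustrated with a minimal sketch (not the authors' implementation; the window size, the first-order feature choice, and the two-class setup are assumptions made for the example):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.mixture import GaussianMixture

def gmm_texture_segment(image, window=5, n_classes=2, seed=0):
    """Cluster pixels by first-order texture maps (local mean and variance)
    using a Gaussian mixture model, flagging candidate target regions."""
    local_mean = uniform_filter(image, size=window)
    local_sq = uniform_filter(image ** 2, size=window)
    local_var = np.clip(local_sq - local_mean ** 2, 0.0, None)
    feats = np.stack([local_mean.ravel(), local_var.ravel()], axis=1)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(feats)
    return gmm.predict(feats).reshape(image.shape)

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[16:48, 16:48] += rng.normal(3.0, 0.2, (32, 32))  # brighter, smoother "target" texture
labels = gmm_texture_segment(img)
```

Second-order (e.g. co-occurrence-based) texture features would simply be stacked as extra columns of `feats`.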
{"title":"Self-Supervised Super-Resolution of 2D Pre-clinical MRI Acquisitions.","authors":"Lin Guo, Samuel W Remedios, Alexandru Korotcov, Dzung L Pham","doi":"10.1117/12.3016094","DOIUrl":"10.1117/12.3016094","url":null,"abstract":"<p><p>Animal models are pivotal in disease research and the advancement of therapeutic methods. The translation of results from these models to clinical applications is enhanced by employing technologies which are consistent for both humans and animals, like Magnetic Resonance Imaging (MRI), offering the advantage of longitudinal disease evaluation without compromising animal welfare. However, current animal MRI techniques predominantly employ 2D acquisitions due to constraints related to organ size, scan duration, image quality, and hardware limitations. While 3D acquisitions are feasible, they are constrained by longer scan times and ethical considerations related to extended sedation periods. This study evaluates the efficacy of SMORE, a self-supervised deep learning super-resolution approach, to enhance the through-plane resolution of anisotropic 2D MRI scans into isotropic resolutions. SMORE accomplishes this by self-training with high-resolution in-plane data, thereby eliminating domain discrepancies between the input data and external training sets. The approach is tested on mouse MRI scans acquired across a range of through-plane resolutions. Experimental results show SMORE substantially outperforms traditional interpolation methods. 
Additionally, we find that pre-training offers a promising approach to reduce processing time without compromising performance.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11613139/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
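The self-supervised idea behind SMORE — letting the high-resolution in-plane direction supervise restoration of the low-resolution through-plane direction — can be sketched as training-pair generation (a hedged illustration; the Gaussian blur model and the `scale / 2` sigma are assumptions, and the restoration network itself is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_sr_training_pairs(volume, scale, axis=0, sigma=None):
    """SMORE-style self-supervision: blur and subsample a high-resolution
    axis to mimic the anisotropic through-plane acquisition, yielding
    (low-res, high-res) pairs from the input scan alone."""
    sigma = scale / 2.0 if sigma is None else sigma
    blurred = gaussian_filter1d(volume, sigma=sigma, axis=axis)
    slicer = tuple(slice(None, None, scale) if a == axis else slice(None)
                   for a in range(volume.ndim))
    return blurred[slicer], volume

vol = np.random.default_rng(1).normal(size=(64, 64, 64))
lr, hr = make_sr_training_pairs(vol, scale=4, axis=0)
```

A network trained to map `lr` back to `hr` can then be applied along the true through-plane axis, avoiding any external training set.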
{"title":"Unsupervised Multi-parametric MRI Registration Using Neural Optimal Transport.","authors":"Boah Kim, Tejas Sudharshan Mathai, Ronald M Summers","doi":"10.1117/12.3006289","DOIUrl":"10.1117/12.3006289","url":null,"abstract":"<p><p>Precise deformable image registration of multi-parametric MRI sequences is necessary for radiologists to identify abnormalities and diagnose diseases, such as prostate cancer and lymphoma. Despite recent advances in unsupervised learning-based registration, volumetric medical image registration that must account for a variety of data distributions remains challenging. To address the problem of multi-parametric MRI sequence data registration, we propose an unsupervised domain-transported registration method, called OTMorph, which employs neural optimal transport to learn an optimal transport plan that maps between different data distributions. We have designed a novel framework composed of a transport module and a registration module: the former transports the data distribution from the moving source domain to the fixed target domain, and the latter takes the transported data and provides the deformed moving volume aligned with the fixed volume. Through end-to-end learning, our proposed method can effectively learn deformable registration for volumes with different distributions. Experimental results with abdominal multi-parametric MRI sequence data show that our method outperforms existing learning-based methods by around 67-85% in deforming the MRI volumes. 
Our method is generic in nature and can be used to register inter-/intra-modality images by mapping the different data distributions in network training.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12927 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11450653/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
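A toy analogue of the transport module — mapping one data distribution onto another via entropic optimal transport — can be sketched with Sinkhorn iterations over intensity histograms (an illustration only; OTMorph learns a neural transport map, not a discrete coupling, and the histograms here are synthetic):

```python
import numpy as np

def sinkhorn_plan(a, b, cost, eps=0.5, n_iters=200):
    """Entropic optimal transport via Sinkhorn iterations: returns a
    coupling P >= 0 whose row sums match a and column sums match b."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

bins = np.linspace(0.0, 1.0, 32)
a = np.exp(-((bins - 0.3) ** 2) / 0.02); a /= a.sum()  # "moving" intensity histogram
b = np.exp(-((bins - 0.7) ** 2) / 0.02); b /= b.sum()  # "fixed" intensity histogram
cost = (bins[:, None] - bins[None, :]) ** 2            # squared-distance ground cost
plan = sinkhorn_plan(a, b, cost)
```

The coupling `plan` plays the role of the transport step: it says how mass in the moving distribution should be redistributed to match the fixed one before registration.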
{"title":"Registration of Longitudinal Spine CTs for Monitoring Lesion Growth.","authors":"Malika Sanhinova, Nazim Haouchine, Steve D Pieper, William M Wells, Tracy A Balboni, Alexander Spektor, Mai Anh Huynh, Jeffrey P Guenette, Bryan Czajkowski, Sarah Caplan, Patrick Doyle, Heejoo Kang, David B Hackney, Ron N Alkalay","doi":"10.1117/12.3006621","DOIUrl":"10.1117/12.3006621","url":null,"abstract":"<p><p>Accurate and reliable registration of longitudinal spine images is essential for assessment of disease progression and surgical outcome. Implementing a fully automatic and robust registration is crucial for clinical use; however, it is challenging due to substantial changes in shape and appearance caused by lesions. In this paper, we present a novel method to automatically align longitudinal spine CTs and accurately assess lesion progression. Our method follows a two-step pipeline in which vertebrae are first automatically localized and labeled and 3D surfaces are generated using a deep learning model, and then longitudinally aligned using a Gaussian mixture model surface registration. We tested our approach on 37 vertebrae from 5 patients, with baseline CTs and 3-, 6-, and 12-month follow-ups, leading to 111 registrations. 
Our experiment showed accurate registration with an average Hausdorff distance of 0.65 mm and average Dice score of 0.92.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11416858/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
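The two reported metrics — Hausdorff distance between registered surfaces and Dice overlap — can be computed as follows (a small sketch of the standard definitions, not the authors' evaluation code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two surface point clouds."""
    return max(directed_hausdorff(a_pts, b_pts)[0],
               directed_hausdorff(b_pts, a_pts)[0])

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks."""
    mask_a, mask_b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

Applied to vertebral surface points and voxel masks, these yield the 0.65 mm Hausdorff and 0.92 Dice figures quoted above.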
{"title":"Anatomic attention regions via optimal anatomy modeling and recognition for DL-based image segmentation.","authors":"Yadavendra Nln, J K Udupa, D Odhner, T Liu, Y Tong, D A Torigian","doi":"10.1117/12.3006771","DOIUrl":"10.1117/12.3006771","url":null,"abstract":"<p><p>Organ segmentation is a crucial task in various medical imaging applications. Many deep learning models have been developed for this purpose, but they are slow and require substantial computational resources. To address this problem, attention mechanisms are used to locate important objects of interest within medical images, allowing the model to segment them accurately even in the presence of noise or artifacts. By paying attention to specific anatomical regions, the model becomes better at segmentation. Medical images have unique features in the form of anatomical information, which distinguishes them from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on unique test data and analyzed metrics including the false-negative quotient and the false-positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models that are suitable for each anatomic object. 
This work opens new possibilities for advancements in medical image segmentation and analysis.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11218901/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fully automatic mpMRI analysis using deep learning predicts peritumoral glioblastoma infiltration and subsequent recurrence.","authors":"Sunwoo Kwak, Hamed Akbari, Jose A Garcia, Suyash Mohan, Christos Davatzikos","doi":"10.1117/12.3001752","DOIUrl":"10.1117/12.3001752","url":null,"abstract":"<p><p>Glioblastoma (GBM) is the most aggressive and most common adult brain tumor. The standard treatments typically include maximal surgical resection, followed by adjuvant radiotherapy and chemotherapy. However, the efficacy of these treatments is often limited, as tumors infiltrate into the surrounding brain tissue, often extending beyond the radiologically defined margins. This infiltration contributes to the high recurrence rate and poor prognosis associated with GBM patients, necessitating advanced methods for early and accurate detection of tumor infiltration. Despite the great promise traditional supervised machine learning shows in predicting tumor infiltration beyond resectable margins, these methods are heavily reliant on expert-drawn Regions of Interest (ROIs), which are used to construct multi-variate models of different Magnetic Resonance (MR) signal characteristics associated with tumor infiltration. This process is both time-consuming and resource-intensive. Addressing this limitation, our study proposes a novel integration of fully automatic methods for generating ROIs with deep learning algorithms to create predictive maps of tumor infiltration. This approach uses pre-operative multi-parametric MRI (mpMRI) scans, encompassing T1, T1Gd, T2, T2-FLAIR, and ADC sequences, to fully leverage the knowledge from previously drawn ROIs. Subsequently, a patch-based Convolutional Neural Network (CNN) model is trained on these automatically generated ROIs to predict areas of potential tumor infiltration. The performance of this model was evaluated using a leave-one-out cross-validation approach. 
Generated predictive maps were binarized for comparison against post-recurrence mpMRI scans. The model demonstrates robust predictive capability, evidenced by an average cross-validated accuracy of 0.87, specificity of 0.88, and sensitivity of 0.90. Notably, the odds ratio of 8.62 indicates that regions identified as high-risk on the predictive map were significantly more likely to exhibit tumor recurrence than low-risk regions. The proposed method demonstrates that a fully automatic mpMRI analysis using deep learning can successfully predict tumor infiltration in the peritumoral region for GBM patients while bypassing the intensive requirement for expert-drawn ROIs.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11089715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140917661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
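The reported sensitivity, specificity, and odds ratio follow from a 2x2 confusion table over binarized map regions versus recurrence labels; a minimal sketch of those standard definitions (not the authors' evaluation code):

```python
import numpy as np

def confusion_stats(pred, truth):
    """Sensitivity, specificity, and odds ratio from binarized predictive
    maps versus recurrence labels."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    odds_ratio = (tp * tn) / (fp * fn)   # assumes no zero cell
    return sensitivity, specificity, odds_ratio
```

An odds ratio of 8.62, as quoted above, means high-risk regions carried roughly 8.6 times the odds of recurrence of low-risk regions.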
{"title":"ShapeAXI: Shape Analysis Explainability and Interpretability.","authors":"Juan Carlos Prieto, Felicia Miranda, Marcela Gurgel, Luc Anchling, Nathan Hutin, Selene Barone, Najla Al Turkestani, Aron Aliaga, Marilia Yatabe, Jonas Bianchi, Lucia Cevidanes","doi":"10.1117/12.3007053","DOIUrl":"10.1117/12.3007053","url":null,"abstract":"<p><p>ShapeAXI represents a cutting-edge framework for shape analysis that leverages a multi-view approach, capturing 3D objects from diverse viewpoints and subsequently analyzing them via 2D Convolutional Neural Networks (CNNs). We implement an automatic N-fold cross-validation process and aggregate the results across all folds. This ensures insightful explainability heat-maps for each class across every shape, enhancing interpretability and contributing to a more nuanced understanding of the underlying phenomena. We demonstrate the versatility of ShapeAXI through two targeted classification experiments. The first experiment categorizes condyles into healthy and degenerative states. The second, more intricate experiment, engages with shapes extracted from CBCT scans of cleft patients, efficiently classifying them into four severity classes. This innovative application not only aligns with existing medical research but also opens new avenues for specialized cleft patient analysis, holding considerable promise for both scientific exploration and clinical practice. The rich insights derived from ShapeAXI's explainability images reinforce existing knowledge and provide a platform for fresh discovery in the fields of condyle assessment and cleft patient severity classification. As a versatile and interpretative tool, ShapeAXI sets a new benchmark in 3D object interpretation and classification, and its groundbreaking approach hopes to make significant contributions to research and practical applications across various domains. 
ShapeAXI is available in our GitHub repository https://github.com/DCBIA-OrthoLab/ShapeAXI.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12931 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11085013/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140913318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can patient-specific acquisition protocol improve performance on defect detection task in myocardial perfusion SPECT?","authors":"Nu Ri Choi, Md Ashequr Rahman, Zitong Yu, Barry A Siegel, Abhinav K Jha","doi":"10.1117/12.3006924","DOIUrl":"10.1117/12.3006924","url":null,"abstract":"<p><p>Myocardial perfusion imaging using single-photon emission computed tomography (SPECT), or myocardial perfusion SPECT (MPS) is a widely used clinical imaging modality for the diagnosis of coronary artery disease. Current clinical protocols for acquiring and reconstructing MPS images are similar for most patients. However, for patients with outlier anatomical characteristics, such as large breasts, images acquired using conventional protocols are often sub-optimal in quality, leading to degraded diagnostic accuracy. Solutions to improve image quality for these patients outside of increased dose or total acquisition time remain challenging. Thus, there is an important need for new methodologies that can help improve the quality of the acquired images for such patients, in terms of the ability to detect myocardial perfusion defects. One approach to improving this performance is adapting the image acquisition protocol specific to each patient. Studies have shown that in MPS, different projection angles usually contain varying amounts of information for the detection task. However, current clinical protocols spend the same time at each projection angle. In this work, we evaluated whether an acquisition protocol that is optimized for each patient could improve performance on the task of defect detection on reconstructed images for patients with outlier anatomical characteristics. For this study, we first designed and implemented a personalized patient-specific protocol-optimization strategy, which we term precision SPECT (PRESPECT). 
This strategy integrates the theory of ideal observers with the constraints of tomographic reconstruction to optimize the acquisition time for each projection view, such that performance on the task of detecting myocardial perfusion defects is maximized. We performed a clinically realistic simulation study on patients with outlier anatomies on the task of detecting perfusion defects on various realizations of low-dose scans by an anthropomorphic channelized Hotelling observer. Our results show that using PRESPECT led to improved performance on the defect detection task for the considered patients. These results provide evidence that personalization of MPS acquisition protocol has the potential to improve defect detection performance on reconstructed images by anthropomorphic observers for patients with outlier anatomical characteristics. Thus, our findings motivate further research to design optimal patient-specific acquisition and reconstruction protocols for MPS, as well as developing similar approaches for other medical imaging modalities.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11566828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
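The anthropomorphic channelized Hotelling observer used for the defect-detection evaluation computes a linear template from channel outputs; a minimal numerical sketch under an assumed Gaussian channel-output model (the channel count and signal values are illustrative, not from the study):

```python
import numpy as np

def cho_snr(v_present, v_absent):
    """Channelized Hotelling observer: template w = S^{-1} dv and
    detectability SNR = sqrt(dv^T S^{-1} dv), where dv is the mean
    channel-output difference and S the average channel covariance."""
    dv = v_present.mean(axis=0) - v_absent.mean(axis=0)
    S = 0.5 * (np.cov(v_present, rowvar=False) + np.cov(v_absent, rowvar=False))
    w = np.linalg.solve(S, dv)            # Hotelling template in channel space
    return float(np.sqrt(dv @ w))

rng = np.random.default_rng(0)
shift = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # defect signal in first 3 channels
v_absent = rng.normal(size=(4000, 6))               # defect-absent channel outputs
v_present = rng.normal(size=(4000, 6)) + shift      # defect-present channel outputs
snr = cho_snr(v_present, v_absent)
```

Higher SNR on reconstructions from a candidate protocol indicates better defect detectability, which is the quantity a protocol-optimization strategy like PRESPECT seeks to maximize.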
{"title":"Evaluation Kidney Layer Segmentation on Whole Slide Imaging using Convolutional Neural Networks and Transformers.","authors":"Muhao Liu, Chenyang Qi, Shunxing Bao, Quan Liu, Ruining Deng, Yu Wang, Shilin Zhao, Haichun Yang, Yuankai Huo","doi":"10.1117/12.3006865","DOIUrl":"https://doi.org/10.1117/12.3006865","url":null,"abstract":"<p><p>The segmentation of kidney layer structures, including cortex, outer stripe, inner stripe, and inner medulla within human kidney whole slide images (WSI), plays an essential role in automated image analysis in renal pathology. However, the current manual segmentation process proves labor-intensive and infeasible for handling the extensive digital pathology images encountered at a large scale. In response, the realm of digital renal pathology has seen the emergence of deep learning-based methodologies. However, very few, if any, deep learning-based approaches have been applied to kidney layer structure segmentation. Addressing this gap, this paper assesses the feasibility of applying deep learning-based approaches to kidney layer structure segmentation. This study employs representative convolutional neural network (CNN) and Transformer segmentation approaches, including Swin-Unet, Medical-Transformer, TransUNet, U-Net, PSPNet, and DeepLabv3+. We quantitatively evaluated six prevalent deep learning models on renal cortex layer segmentation using mouse kidney WSIs. The empirical results stemming from our approach exhibit compelling advancements, as evidenced by a decent Mean Intersection over Union (mIoU) index. The results demonstrate that Transformer models generally outperform CNN-based models. 
By enabling quantitative evaluation of renal cortical structures, deep learning approaches show promise for empowering medical professionals to perform more informed kidney layer segmentation.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12933 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058228/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
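The mIoU figure of merit used in this evaluation can be sketched as follows (the standard per-class definition, not the authors' code; the four-class setup mirrors the cortex/outer-stripe/inner-stripe/inner-medulla labels):

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Mean Intersection-over-Union across layer classes (e.g. cortex,
    outer stripe, inner stripe, inner medulla)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union:   # skip classes absent from both maps
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Each model's predicted label map is compared against the manual annotation this way, giving a single score per WSI that can be averaged across the test set.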
{"title":"Speech Motion Anomaly Detection via Cross-Modal Translation of 4D Motion Fields from Tagged MRI.","authors":"Xiaofeng Liu, Fangxu Xing, Jiachen Zhuo, Maureen Stone, Jerry L Prince, Georges El Fakhri, Jonghye Woo","doi":"10.1117/12.3006874","DOIUrl":"10.1117/12.3006874","url":null,"abstract":"<p><p>Understanding the relationship between tongue motion patterns during speech and their resulting speech acoustic outcomes-i.e., articulatory-acoustic relation-is of great importance in assessing speech quality and developing innovative treatment and rehabilitative strategies. This is especially important when evaluating and detecting abnormal articulatory features in patients with speech-related disorders. In this work, we aim to develop a framework for detecting speech motion anomalies in conjunction with their corresponding speech acoustics. This is achieved through the use of a deep cross-modal translator trained on data from healthy individuals only, which bridges the gap between 4D motion fields obtained from tagged MRI and 2D spectrograms derived from speech acoustic data. The trained translator is used as an anomaly detector, by measuring the spectrogram reconstruction quality on healthy individuals or patients. In particular, the cross-modal translator is likely to yield limited generalization capabilities on patient data, which includes unseen out-of-distribution patterns and demonstrates subpar performance, when compared with healthy individuals. A one-class SVM is then used to distinguish the spectrograms of healthy individuals from those of patients. To validate our framework, we collected a total of 39 paired tagged MRI and speech waveforms, consisting of data from 36 healthy individuals and 3 tongue cancer patients. We used both 3D convolutional and transformer-based deep translation models, training them on the healthy training set and then applying them to both the healthy and patient testing sets. 
Our framework demonstrates a capability to detect abnormal patient data, thereby illustrating its potential in enhancing the understanding of the articulatory-acoustic relation for both healthy individuals and patients.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142141877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
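The final anomaly-detection step — a one-class SVM trained only on healthy subjects' reconstruction quality — can be sketched as below (the feature values are synthetic stand-ins; in the paper, the features come from spectrogram reconstruction error of the cross-modal translator, and the cohort sizes of 36 and 3 are taken from the abstract):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-ins for per-subject spectrogram reconstruction-error features:
healthy = rng.normal(0.0, 1.0, (36, 4))    # translator reconstructs healthy data well
patients = rng.normal(6.0, 1.0, (3, 4))    # out-of-distribution: large reconstruction errors

clf = OneClassSVM(nu=0.1, gamma="scale").fit(healthy)  # trained on healthy subjects only
anomaly = clf.predict(patients)            # -1 flags out-of-distribution (patient) data
```

Because the translator is never trained on patient data, patients' reconstruction-error features fall outside the learned healthy region and are flagged as anomalous.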