Proceedings of SPIE--the International Society for Optical Engineering: Latest Publications

Enhancing Colorectal Cancer Tumor Bud Detection Using Deep Learning from Routine H&E-Stained Slides.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3006796
Usama Sajjad, Wei Chen, Mostafa Rezapour, Ziyu Su, Thomas Tavolara, Wendy L Frankel, Metin N Gurcan, M Khalid Khan Niazi
Abstract: Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds is time-consuming and not highly reproducible, and inter- and intra-reader disagreement on H&E evaluation can be high. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of the ability to generalize to unseen datasets. Pan-cytokeratin staining is one potential way to improve agreement, but it is not routinely used to identify tumor buds and can produce false positives. We therefore aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further enhance generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images containing 115 tumor buds per slide on average. In six-fold cross-validation, our method achieved an average precision of 0.94 and an average recall of 0.86. These results provide preliminary evidence of the feasibility of our approach for improving generalizability in tumor budding detection from H&E images while avoiding the need for non-routine immunohistochemical staining.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095418/pdf/
Citations: 0
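The precision and recall figures quoted in the abstract follow the standard detection-metric definitions. A minimal sketch with hypothetical per-fold detection counts (the abstract reports only the averaged metrics, so the counts below are invented for illustration):

```python
def precision_recall(tp, fp, fn):
    # Precision = TP / (TP + FP); recall (sensitivity) = TP / (TP + FN).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for a single cross-validation fold (not from the paper):
p, r = precision_recall(tp=94, fp=6, fn=15)
```

With these toy counts, precision is 0.94 and recall is roughly 0.86, matching the order of magnitude reported above.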
CAFES: Chest X-ray Analysis using Federated Self-supervised Learning for Pediatric COVID-19 Detection.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3008757
Abhijeet Parida, Syed Muhammad Anwar, Malhar P Patel, Mathias Blom, Tal Tiano Einat, Alex Tonetti, Yuval Baror, Ittai Dayan, Marius George Linguraru
Abstract: Chest X-rays (CXRs) play a pivotal role in cost-effective clinical assessment of various heart- and lung-related conditions. The urgency of COVID-19 diagnosis prompted their use in identifying conditions such as lung opacity, pneumonia, and acute respiratory distress syndrome in pediatric patients. We propose an AI-driven solution for binary COVID-19 versus non-COVID-19 classification of pediatric CXRs: a Federated Self-Supervised Learning (FSSL) framework that enhances Vision Transformer (ViT) performance for COVID-19 detection. The ViT's strength in vision-related binary classification tasks, combined with self-supervised pre-training on adult CXR data, forms the basis of the FSSL approach. We implement our strategy on the Rhino Health Federated Computing Platform (FCP), which ensures privacy and scalability for distributed data. The Chest X-ray Analysis using Federated SSL (CAFES) model uses the FSSL-pre-trained ViT weights and demonstrated gains in accurately detecting COVID-19 compared with a fully supervised model: an area under the precision-recall curve (AUPR) of 0.952, which is 0.231 points higher than the fully supervised model for COVID-19 diagnosis on pediatric data. Our contributions include leveraging vision transformers for effective COVID-19 diagnosis from pediatric CXRs, employing distributed federated self-supervised pre-training on adult data, and improving pediatric COVID-19 diagnosis performance. This privacy-conscious approach aligns with HIPAA guidelines, paving the way for broader medical imaging applications.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11167651/pdf/
Citations: 0
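The AUPR reported above can be approximated with the step-wise average-precision rule (precision integrated over recall increments). A minimal sketch on toy scores and labels, not the paper's data or implementation, and without tie handling:

```python
def aupr(scores, labels):
    # Step-wise average-precision approximation of the area under the
    # precision-recall curve: walk examples in descending score order and
    # accumulate precision weighted by each recall increment.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    total_pos = sum(labels)
    area, prev_recall = 0.0, 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        precision = tp / (tp + fp)
        area += (recall - prev_recall) * precision
        prev_recall = recall
    return area

# Toy example: four predictions, three of which are true positives.
score = aupr([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1])
```

A perfect ranking yields an AUPR of 1.0; interleaving a false positive lowers the area, as in the toy call above.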
How accurately can quantitative imaging methods be ranked without ground truth: An upper bound on no-gold-standard evaluation.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-03-29 DOI: 10.1117/12.3006888
Yan Liu, Abhinav K Jha
Abstract: Objective evaluation of quantitative imaging (QI) methods with patient data, while important, is typically hindered by the lack of gold standards. To address this challenge, no-gold-standard evaluation (NGSE) techniques have been proposed and have demonstrated efficacy in accurately ranking QI methods without access to gold standards. The development of NGSE methods raises an important question: how accurately can QI methods be ranked without ground truth? To answer this question, we propose a Cramér-Rao bound (CRB)-based framework that quantifies the upper bound on ranking QI methods without any ground truth. We demonstrate the framework by using it to guide a well-known NGSE technique, the regression-without-truth (RWT) technique. Our results show the utility of this framework in quantifying the performance of this NGSE technique for different numbers of patients. These results motivate the study of other applications of this upper bound.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11601990/pdf/
Citations: 0
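As background for readers unfamiliar with the Cramér-Rao bound underlying this framework, a toy textbook illustration (not the paper's derivation): for n i.i.d. samples from N(θ, σ²), the Fisher information for θ is n/σ², so no unbiased estimator of θ can have variance below σ²/n.

```python
def crb_gaussian_mean(sigma, n):
    # For n i.i.d. samples from N(theta, sigma^2), the Fisher information
    # for theta is n / sigma^2; the Cramér-Rao bound is its inverse,
    # sigma^2 / n, which shrinks as the number of samples grows.
    fisher_information = n / sigma**2
    return 1.0 / fisher_information
```

The same shrinking-with-n behavior is what makes the patient-number dependence in the abstract's results plausible.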
A hyperspectral surgical microscope with super-resolution reconstruction for intraoperative image guidance.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3008789
Ling Ma, Kelden Pruitt, Baowei Fei
Abstract: Hyperspectral imaging (HSI) is an emerging imaging modality in medical applications, especially intraoperative image guidance. A surgical microscope improves surgeons' visualization of fine details during surgery, so combining HSI with a surgical microscope can provide a powerful tool for surgical guidance. However, acquiring high-resolution hyperspectral images requires long integration times and produces large image files, which can be a burden for intraoperative applications. Super-resolution reconstruction allows acquisition of low-resolution hyperspectral images from which high-resolution HSI is generated. In this work, we developed a hyperspectral surgical microscope and employed our unsupervised super-resolution neural network, which generated high-resolution hyperspectral images with fine textures and the spectral characteristics of tissues. The proposed method can reduce acquisition time and save the storage space taken up by hyperspectral images without compromising image quality, which will facilitate the adoption of hyperspectral imaging technology in intraoperative image guidance.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093589/pdf/
Citations: 0
Automated Web-based Software for CT Quality Control Testing of Low-contrast Detectability using Model Observers.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3008777
Zhongxing Zhou, Jarod Wellinghoff, Mingdong Fan, Scott Hsieh, David Holmes, Cynthia H McCollough, Lifeng Yu
Abstract: The channelized Hotelling observer (CHO) correlates well with human observer performance in many CT detection/classification tasks but has not been widely adopted in routine CT quality control and performance evaluation, mainly because an easily available, efficient, and validated software tool has been lacking. We developed a highly automated solution: CT image quality evaluation and Protocol Optimization (CTPro), a web-based software platform that includes the CHO and traditional image quality assessment tools such as the modulation transfer function and noise power spectrum. This tool gives both the research and clinical communities easy access to the CHO and enables efficient, accurate image quality evaluation without the need to install additional software. We demonstrated its application by comparing low-contrast detectability on a clinical photon-counting-detector (PCD) CT and a traditional energy-integrating-detector (EID) CT: UHR-T3D had 6.2% higher d' than EID-CT with IR (p = 0.047) and 4.1% lower d' without IR (p = 0.122).

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008424/pdf/
Citations: 0
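The detectability index d' compared above is, for a Hotelling-type observer, defined by d'² = Δμᵀ Σ⁻¹ Δμ, where Δμ is the mean signal difference in the channel outputs and Σ their covariance. A minimal sketch on toy channel data (not the CTPro implementation, and with the covariance inverse supplied directly to stay dependency-free):

```python
def dprime_hotelling(delta_mu, cov_inv):
    # d'^2 = delta_mu^T * cov_inv * delta_mu, the observer SNR for a known
    # signal in Gaussian noise; cov_inv is the pre-inverted channel covariance.
    n = len(delta_mu)
    d_squared = sum(delta_mu[i] * cov_inv[i][j] * delta_mu[j]
                    for i in range(n) for j in range(n))
    return d_squared ** 0.5

# Toy 2-channel example with identity (pre-inverted) covariance:
d = dprime_hotelling([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]])
```

With identity covariance, d' reduces to the Euclidean norm of the mean difference, which makes the toy call easy to check by hand.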
Tractography with T1-weighted MRI and associated anatomical constraints on clinical quality diffusion MRI.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3006286
Tian Yu, Yunhe Li, Michael E Kim, Chenyu Gao, Qi Yang, Leon Y Cai, Susane M Resnick, Lori L Beason-Held, Daniel C Moyer, Kurt G Schilling, Bennett A Landman
Abstract: Diffusion MRI (dMRI) streamline tractography, the gold standard for in vivo estimation of white matter (WM) pathways in the brain, has long been considered a product of WM microstructure. However, recent advances in tractography have demonstrated that convolutional recurrent neural networks (CoRNN) trained with a teacher-student framework can learn to propagate streamlines directly from T1-weighted images and anatomical context. Training this network has previously relied on high-resolution dMRI. In this paper, we generalize the training mechanism to traditional clinical-resolution data, which allows generalizability across sensitive and susceptible study populations. We train CoRNN on a small subset of the Baltimore Longitudinal Study of Aging (BLSA), which better resembles clinical scans. We define a metric, termed the epsilon-ball seeding method, to compare T1 tractography and traditional diffusion tractography at the streamline level. We show that under this metric, T1 tractography generated by CoRNN reproduces diffusion tractography with approximately three millimeters of error.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364406/pdf/
Citations: 0
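One generic way to quantify streamline-level agreement (a sketch only; the paper's epsilon-ball seeding metric is defined differently and is not reproduced here) is the mean closest-point distance between two streamlines:

```python
import math

def mean_closest_distance(a, b):
    # Mean, over the points of streamline a, of the distance to the nearest
    # point of streamline b. Asymmetric; symmetrize by averaging both
    # directions if needed. Points are (x, y, z) tuples in millimeters.
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

# Two parallel toy streamlines 1 mm apart:
s1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
s2 = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)]
err = mean_closest_distance(s1, s2)
```

For the two parallel toy streamlines, every closest-point distance is 1 mm, so the mean error is 1 mm.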
Fourier Diffusion for Sparse CT Reconstruction.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-01 DOI: 10.1117/12.3008622
Anqi Liu, Grace J Gang, J Webster Stayman
Abstract: Sparse CT reconstruction continues to be an area of interest for a number of novel imaging systems. Many approaches have been tried, including model-based methods, compressed sensing, and, most recently, deep-learning-based processing. Diffusion models in particular have become extremely popular due to their ability to encode rich information about images and to allow posterior sampling that generates many possible outputs. One drawback of diffusion models is that their recurrent structure tends to be computationally expensive. In this work, we apply a new Fourier diffusion approach that permits processing with far fewer time steps than the standard scalar diffusion model. We present an extension of the Fourier diffusion technique and evaluate it on a simulated breast cone-beam CT system with a sparse-view acquisition.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378968/pdf/
Citations: 0
Evaluation of data uncertainty for deep-learning-based CT noise reduction using ensemble patient data and a virtual imaging trial framework.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-01 DOI: 10.1117/12.3008581
Zhongxing Zhou, Scott S Hsieh, Hao Gong, Cynthia H McCollough, Lifeng Yu
Abstract: Deep-learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate assessment of their data-uncertainty properties is essential for understanding the stability of DLIR in response to noise. In this work, we evaluate the data uncertainty of a DLIR method using real patient data and a virtual imaging trial framework, and compare it with filtered back-projection (FBP) and iterative reconstruction (IR). The ensemble of noise realizations was generated with a realistic projection-domain noise-insertion technique, and the impact of varying dose levels and denoising strengths was investigated for a ResNet-based deep convolutional neural network (DCNN) model trained on patient images. On the uncertainty maps, the DCNN shows more detailed structures than IR, although its bias map has less structural dependency, implying that the DCNN is more sensitive to small changes in the input. Both visual examples and histogram analysis demonstrated that hotspots of uncertainty in the DCNN output may be associated with a higher chance of distortion from the truth than IR, but may also correspond to better detection performance for some small structures.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008675/pdf/
Citations: 0
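The uncertainty maps described above can be sketched as a pixel-wise standard deviation across an ensemble of reconstructions of the same object under independent noise realizations. A minimal illustration on flattened toy images, not the study's pipeline:

```python
import statistics

def uncertainty_map(realizations):
    # realizations: list of reconstructions, each a flat list of pixel values.
    # Returns the population standard deviation of each pixel across the
    # ensemble, i.e. a per-pixel data-uncertainty estimate.
    n_pix = len(realizations[0])
    return [statistics.pstdev([r[i] for r in realizations])
            for i in range(n_pix)]

# Three 2-pixel "reconstructions": pixel 0 varies with noise, pixel 1 is stable.
u = uncertainty_map([[0.0, 10.0], [2.0, 10.0], [4.0, 10.0]])
```

Pixels whose value changes across realizations get a nonzero standard deviation; stable pixels map to zero, which is how hotspots of uncertainty appear in such maps.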
Predicting human effort needed to correct auto-segmentations.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3006471
Da He, Jayaram K Udupa, Yubing Tong, Drew A Torigian
Abstract: Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice Coefficient (DC) focuses on the truly segmented areas, while the Hausdorff Distance (HD) measures only the maximal distance between the auto-segmentation boundary and the ground-truth boundary. Boundary-length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish truly predicted boundary pixels from wrong ones. It is uncertain whether these metrics can reliably indicate the required manual correction effort in segmentation research. Therefore, in this paper, the potential of the above four metrics, as well as a novel metric called the Mendability Index (MI), to predict human correction effort is studied with linear and support vector regression models. 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground-truth segmentations, are used to train and test the prediction models. Five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with varying prediction errors for different objects. An improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11218903/pdf/
Citations: 0
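The DC and HD metrics discussed in the abstract can be sketched in a few lines on toy voxel and boundary-point sets (standard definitions, not the paper's implementation; the Mendability Index is paper-specific and not reproduced here):

```python
import math

def dice(a, b):
    # Dice Coefficient between two segmentations given as sets of voxel
    # indices: twice the overlap divided by the total size of both masks.
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a_pts, b_pts):
    # Symmetric Hausdorff Distance between two boundary point sets: the
    # largest distance from any point of one set to the nearest point of
    # the other, taken in both directions.
    h_ab = max(min(math.dist(p, q) for q in b_pts) for p in a_pts)
    h_ba = max(min(math.dist(p, q) for q in a_pts) for p in b_pts)
    return max(h_ab, h_ba)
```

Note the contrast the abstract draws: Dice rewards overlapping area, while Hausdorff is driven entirely by the single worst boundary point, so the two can disagree about which auto-segmentation needs more correction.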
Nucleus subtype classification using inter-modality learning.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3006237
Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman
Abstract: Understanding the way cells communicate, co-locate, and interrelate is essential to understanding human physiology. Hematoxylin and eosin (H&E) staining is ubiquitously available for both clinical studies and research. The Colon Nucleus Identification and Classification (CoNIC) Challenge has recently innovated on robust artificial intelligence labeling of six cell types on H&E stains of the colon. However, this is a very small fraction of the potential cell classification types; in particular, the CoNIC Challenge cannot classify epithelial subtypes (progenitor, endocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts, stromal). In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E. We leveraged multiplexed immunofluorescence (MxIF) histology imaging to identify 14 subclasses of cell types, performed style transfer to synthesize virtual H&E from MxIF, and transferred the higher-density labels from MxIF to these virtual H&E images. We then evaluated the efficacy of learning in this approach, identifying helper T and progenitor nuclei on virtual H&E with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.10 (prevalence 0.07 ± 0.02), respectively. This approach represents a promising step towards automating annotation in digital pathology.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392413/pdf/
Citations: 0