2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI): Latest Publications

MSRT: Multi-Scale Spatial Regularization Transformer For Multi-Label Classification in Calcaneus Radiograph
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761435
Yuxuan Mu, He Zhao, Jia Guo, Huiqi Li
Abstract: Calcaneus fracture is one of the most common fractures and affects daily quality of life. However, calcaneus fracture subtype classification is a challenging task due to its multi-label nature and the limited annotated data. In this paper, an augmentation strategy called GridDropIn&Out (GDIO) is proposed to increase the uncertainty of the rough input mask and enlarge the dataset. A spatial regularization transformer (SRT) is designed to capture the labels' spatial information, while a multi-scale attention SRT (MSRT) is built to synthesize spatial features from different levels. Our final proposal achieves an mAP of 87.54% in classifying six calcaneus fracture types.
Citations: 0
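The abstract names the GridDropIn&Out (GDIO) augmentation without spelling out its formulation. Below is a minimal Python sketch of a grid-style drop-in/drop-out on the rough input mask, written under the assumption that GDIO behaves like a GridMask-type augmentation; the grid size, probabilities, and function name are hypothetical, not taken from the paper.

```python
# Minimal sketch of a grid-style drop-in/drop-out augmentation on a rough input mask.
# Assumption: cells of a regular grid are either cleared ("drop out") or filled ("drop in").
import numpy as np

def grid_drop_in_out(image, mask, grid=4, p_drop=0.5, rng=None):
    """Randomly clear or fill grid cells of a rough binary mask; the image is passed through."""
    rng = rng or np.random.default_rng()
    h, w = mask.shape
    gh, gw = h // grid, w // grid
    out = mask.copy()
    for i in range(grid):
        for j in range(grid):
            r = rng.random()
            ys, xs = slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw)
            if r < p_drop / 2:
                out[ys, xs] = 0          # drop out: remove mask content in this cell
            elif r < p_drop:
                out[ys, xs] = 1          # drop in: mark the whole cell as foreground
    return image, out

# Usage: augment a radiograph and its rough lesion mask before training.
img = np.zeros((256, 256), dtype=np.float32)
rough_mask = (np.random.rand(256, 256) > 0.7).astype(np.float32)
_, aug_mask = grid_drop_in_out(img, rough_mask)
```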
Diabetic Retinopathy Diagnostic CAD System Using 3D-OCT Higher Order Spatial Appearance Model
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761508
M. Elsharkawy, A. Sharafeldeen, A. Soliman, F. Khalifa, M. Ghazal, Eman M. El-Daydamony, A. Atwan, H. Sandhu, A. El-Baz
Abstract: Diagnosing Diabetic Retinopathy (DR) at an early stage is of extreme importance so that the retina can be preserved and the risk of substantial retinal damage or loss of vision is reduced. A new Computer-Aided Diagnosis (CAD) method based on Optical Coherence Tomography (OCT) scans of the retina is presented here for the detection of DR at an early stage. Using an adaptive appearance-based approach that incorporates prior shape information, the system segments the retinal layers from the 3D-OCT scans. From the layers segmented from the B-scan volume of the OCT, novel texture features are extracted for DR diagnosis. In particular, a 2nd-order reflectivity value is calculated for each individual layer using a 2D Markov-Gibbs Random Field (2D-MGRF) model. Then, Cumulative Distribution Function (CDF) descriptors are used to represent the extracted image-derived features using the CDF's percentiles. A feed-forward neural network performs layer-by-layer classification of the 3D volume using Gibbs energy features extracted from each individual layer. In the final stage, all twelve layers are fused into a global subject diagnosis by majority voting. We evaluated the 3D-OCT system on 180 subjects using a combination of different k-fold cross-validation settings. The system achieved accuracies of 89.4%, 91.5%, and 95.7% under 4-, 5-, and 10-fold cross-validation, respectively. In addition, our system's ability to detect DR early has been further validated by comparisons with state-of-the-art deep learning networks.
Citations: 1
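Two steps of this pipeline are concrete enough to illustrate: summarizing per-layer features with CDF percentiles and fusing the twelve layer-wise decisions by majority voting. The sketch below assumes a simple percentile grid and binary per-layer labels; it is not the paper's exact feature definition.

```python
# Minimal sketch: CDF-percentile descriptors of per-layer features and majority voting
# across the 12 layer-wise classifiers. Percentile grid and placeholder data are assumptions.
import numpy as np

def cdf_percentile_descriptor(values, percentiles=range(10, 100, 10)):
    """Summarize a layer's feature values (e.g., Gibbs energies) by CDF percentiles."""
    return np.percentile(np.asarray(values, dtype=float), list(percentiles))

def majority_vote(layer_predictions):
    """Fuse the per-layer diagnoses (0 = normal, 1 = DR) into a subject-level decision."""
    votes = np.asarray(layer_predictions)
    return int(votes.sum() > len(votes) / 2)

# Usage: one descriptor per retinal layer, one vote per layer classifier.
layer_energy = np.random.rand(5000)             # stand-in for 2D-MGRF Gibbs energies of one layer
feat = cdf_percentile_descriptor(layer_energy)  # 9-dimensional descriptor
subject_label = majority_vote([1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0])
```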
Decentralized Spatially Constrained Source-Based Morphometry
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761419
D. K. Saha, Rogers F. Silva, Bradley T. Baker, V. Calhoun
Abstract: There is growing interest in extracting multivariate patterns (covarying networks) from structural magnetic resonance imaging (sMRI) data to analyze brain morphometry. Constrained source-based morphometry (constrained SBM) is a hybrid approach that provides a fully automated strategy for extracting subject-specific parameters characterizing gray matter networks. In constrained SBM, constrained independent component analysis (ICA) is used to compute maximally independent sources, and statistical analysis is used to identify sources significantly associated with variables of interest. However, constrained SBM is built on the assumption that the data are locally accessible. As such, it cannot take advantage of decentralized (i.e., federated) data. While open data repositories have grown in recent years, various reasons (e.g., privacy concerns for rare disease data, institutional or IRB policies) restrict a large amount of existing data to local access only. To overcome this limitation, we introduce a novel approach: decentralized constrained source-based morphometry (dcSBM). In our approach, data samples are located at different sites and each site runs the constrained ICA in a distributed manner. Finally, a master node simply aggregates the result estimates from each local site and runs the statistical analysis centrally. We apply our method to UK Biobank sMRI data and validate our results by comparing them to centralized constrained SBM results.
Citations: 2
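A minimal sketch of the decentralized pattern described above: each site computes subject-specific estimates locally, and only those estimates travel to a master node, which runs the statistics centrally. The local projection and the correlation test are simplified stand-ins for the actual constrained ICA and statistical analysis.

```python
# Minimal sketch of the dcSBM communication pattern; local_constrained_ica and the
# correlation test are illustrative placeholders, not the paper's algorithm.
import numpy as np

def local_constrained_ica(site_data, reference_maps):
    """Placeholder for constrained ICA at one site: project data onto reference networks."""
    # site_data: (subjects, voxels); reference_maps: (components, voxels)
    pinv = np.linalg.pinv(reference_maps)          # (voxels, components)
    return site_data @ pinv                        # subject-specific loadings

def master_aggregate_and_test(per_site_loadings, per_site_covariates):
    """Master node: pool loadings and covariates, then run a simple association test."""
    loadings = np.vstack(per_site_loadings)        # (all subjects, components)
    covariate = np.concatenate(per_site_covariates)
    # correlation of each component's loading with the covariate of interest
    return [np.corrcoef(loadings[:, c], covariate)[0, 1] for c in range(loadings.shape[1])]

# Usage with two simulated sites and shared reference maps.
refs = np.random.randn(5, 1000)
sites = [np.random.randn(30, 1000), np.random.randn(40, 1000)]
ages = [np.random.uniform(50, 80, 30), np.random.uniform(50, 80, 40)]
stats = master_aggregate_and_test([local_constrained_ica(s, refs) for s in sites], ages)
```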
Classification of Microscopic Images of Unstained Skin Samples Using Deep Learning Approach
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761484
KV Rajitha, Sowmya Bhat, PY Prakash, R. Rao, K. Prasad
Abstract: The emergence of dermatophytosis poses alarming concerns due to its recurrence and difficulty of management. Diagnostic laboratories are beset with the burgeoning challenge of screening multiple specimens by direct microscopy. This has necessitated automation of microscopic image analysis of clinical specimens to improve efficiency and ease of laboratory workflow. Such approaches may also serve as a point-of-care facility in dermatology outpatient departments. We identified a robust deep transfer learning model by comparing four popular pre-trained CNN architectures, namely EfficientNetB0, VGG16, ResNet50 and MobileNet. Less than 33% of the CNN layers were frozen and the remaining layers were allowed to learn new features from dermatophyte datasets of clinical origin. EfficientNetB0 outperformed all other models with an accuracy of 98.52%, an AUC of 0.99 and an F1 score of 0.98, with 97.6% sensitivity and 99.4% specificity. These results on unstained samples are comparable to, and even better than, those from previously reported fluorescent-stained studies.
Citations: 2
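The training recipe that is explicit in the abstract, a pretrained EfficientNetB0 with fewer than 33% of its layers frozen and the rest fine-tuned, can be sketched as follows. The two-class head, the learning rate, and the use of torchvision's pretrained weights (torchvision >= 0.13 assumed) are illustrative choices, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch: freeze roughly the first third of EfficientNetB0 and
# fine-tune the remainder on a binary (fungal element vs. negative) task. Assumed setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # hypothetical 2-class head

params = list(model.parameters())
freeze_until = len(params) // 3                  # freeze < 33% of the parameter groups
for p in params[:freeze_until]:
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
```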
Artifact Identification in Digital Histopathology Images Using Few-Shot Learning
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761648
Nazim N Shaikh, Kamil Wasag, Yao Nie
Abstract: The advent of deep learning methods has led to breakthroughs in many digital histopathology image analysis tasks. However, automatic analysis is often impacted by the presence of various artifacts introduced during different tissue and slide processing stages. It is therefore desirable to have a generic artifact identification algorithm that automatically excludes artifact regions from downstream analysis. In this paper, considering the wide diversity of artifacts present in histopathology images and the difficulty of obtaining a large amount of training data, we frame artifact identification as a tile-based image classification problem and explore the feasibility of a few-shot learning technique, specifically the prototypical network, for this task. We demonstrate that a prototypical network can effectively identify image tiles containing various artifacts using a very small set of training images. The trained model also generalizes well to unseen artifacts. We validate the approach by applying it to both immunohistochemistry and H&E-stained tissue images, showing that it is a more favorable approach than standard transfer learning for this application.
Citations: 4
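A minimal sketch of the prototypical-network step used for tile classification: class prototypes are the mean support embeddings, and query tiles take the label of the nearest prototype. The tiny linear encoder and the 3-way episode are placeholders for the paper's actual backbone and artifact classes.

```python
# Prototypical-network classification step for tile-based artifact identification (sketch).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # stand-in encoder

def prototypes(support_x, support_y, n_classes):
    z = embed(support_x)                                   # (n_support, d)
    return torch.stack([z[support_y == c].mean(0) for c in range(n_classes)])

def classify(query_x, protos):
    zq = embed(query_x)                                    # (n_query, d)
    dists = torch.cdist(zq, protos)                        # Euclidean distance to each prototype
    return dists.argmin(dim=1)                             # nearest-prototype label

# Usage: a 3-way episode (e.g., fold, blur, clean) with a few support tiles per class.
sx, sy = torch.randn(9, 3, 64, 64), torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])
qx = torch.randn(5, 3, 64, 64)
pred = classify(qx, prototypes(sx, sy, n_classes=3))
```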
Lesion Detectability and Contrast Enhancement with Beam Multiply and Sum Beamforming for Non-Steered Plane Wave Ultrasound Imaging
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761662
A. N. Madhavanunni, Mahesh Raveendranatha Panicker
Abstract: Diagnostic ultrasound imaging systems typically employ traditional delay and sum (DAS) beamforming for image reconstruction because of its reduced complexity. However, the limited contrast and resolution of DAS-beamformed images make it difficult to detect small lesions with a single non-steered plane wave insonification. To address these limitations, this paper demonstrates a novel beamforming technique, named beam multiply and sum (BMAS), for enhancing contrast and lesion detectability with a non-steered plane wave insonification. The intensity linearity of BMAS is evaluated in silico, and the contrast is evaluated in vitro and in vivo. Compared to DAS, BMAS images of the in-vitro datasets showed improvements of 33%, 16.78% and 6.3% in lateral resolution, contrast ratio and contrast-to-noise ratio, respectively.
Citations: 1
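The abstract contrasts BMAS with conventional delay-and-sum (DAS) but does not give the BMAS equations. The sketch below shows DAS for a single pixel and a delay-multiply-and-sum (DMAS)-style pairwise product combination as a stand-in for the "multiply and sum" idea; the actual beam construction in BMAS may differ.

```python
# DAS baseline and a DMAS-style multiply-and-sum combination for one reconstruction point.
# The multiply-and-sum form here is the standard DMAS formulation, used as an illustrative
# stand-in; it is not claimed to be the paper's BMAS algorithm.
import numpy as np

def das_pixel(delayed_rf):
    """Delay-and-sum: coherent sum of per-channel samples already delayed to the pixel."""
    return delayed_rf.sum()

def multiply_and_sum_pixel(delayed_rf):
    """DMAS-style combination: signed square-root products of channel pairs, then summed."""
    s = np.sign(delayed_rf) * np.sqrt(np.abs(delayed_rf))
    total = s.sum()
    return 0.5 * (total ** 2 - (s ** 2).sum())   # equals the sum over all i < j of s_i * s_j

# Usage on synthetic per-channel samples for a single reconstruction point.
rf = np.random.randn(64)                         # 64-element aperture, delayed samples
print(das_pixel(rf), multiply_and_sum_pixel(rf))
```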
FPL-UDA: Filtered Pseudo Label-Based Unsupervised Cross-Modality Adaptation for Vestibular Schwannoma Segmentation
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761706
Jianghao Wu, Ran Gu, Guiming Dong, Guotai Wang, Shaoting Zhang
Abstract: Automatic segmentation of Vestibular Schwannoma (VS) from Magnetic Resonance Imaging (MRI) will help patient management and improve clinical workflow. This paper aims to adapt a model trained with annotated ceT1 images to segment VS from hrT2 images, without annotations of the latter. The proposed method, named Filtered Pseudo Label-based Unsupervised Domain Adaptation (FPL-UDA), consists of three components: 1) an image translator converting hrT2 images to pseudo ceT1 images, where a two-stage translation strategy is proposed to handle VS of various sizes; 2) a pseudo label generator trained with ceT1 images to provide pseudo labels for the pseudo ceT1 images, where a GAN-based data augmentation method is proposed to deal with the domain gap between them; and 3) a final segmentor trained with hrT2 images and the corresponding pseudo labels, where uncertainty-based filtering is used to select high-quality pseudo labels and improve the segmentor's robustness. Experimental results on a public VS dataset showed that our method achieved an average Dice of 81.52% for VS segmentation from hrT2 images, outperforming existing unsupervised cross-modality adaptation methods.
Citations: 8
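The third component, uncertainty-based filtering of pseudo labels, can be sketched as below. Using mean voxel-wise softmax entropy as the uncertainty measure and a fixed threshold are assumptions; the abstract does not specify these details.

```python
# Keep only low-uncertainty pseudo labels for training the final segmentor (sketch).
import torch

def mean_entropy(logits):
    """Average voxel-wise predictive entropy of a (C, D, H, W) logit map."""
    p = torch.softmax(logits, dim=0)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=0).mean()

def filter_pseudo_labels(cases, threshold=0.2):
    """Keep (image, pseudo_label) pairs whose predictions are confident enough."""
    kept = []
    for image, logits in cases:
        if mean_entropy(logits) < threshold:
            kept.append((image, logits.argmax(dim=0)))   # hard pseudo label
    return kept

# Usage with two synthetic hrT2 cases and 2-class logits.
cases = [(torch.randn(1, 16, 64, 64), torch.randn(2, 16, 64, 64)) for _ in range(2)]
train_set = filter_pseudo_labels(cases)
```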
Multi-Class Brain Tumor Segmentation via 3D and 2D Neural Networks
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761538
Sergey Pnev, V. Groza, B. Tuchinov, E. Amelina, Evgeny Nikolaevich Pavlovskiy, N. Tolstokulakov, M. Amelin, S. Golushko, A. Letyagin
Abstract: Brain tumor segmentation is an important and time-consuming part of the usual clinical diagnosis process. Multi-class segmentation of different tumor types is a challenging task due to differences in shape, size, location and scanner parameters. Many 2D and 3D convolutional neural network architectures have been proposed to address this problem with significant success. The 2D approach is generally faster and more popular for most such problems, while 3D models can further improve segmentation quality: accounting for context along the sagittal plane allows 3-dimensional features to be learned, at the cost of computationally expensive 3D operations that increase training time and reduce inference speed. In this paper, we compare the 2D and 3D approaches on two MRI datasets: one from the BraTS 2020 competition and a private Siberian Brain Tumor dataset. In each dataset, every scan is represented by four sequences (T1, T1C, T2 and T2-FLAIR), annotated by two certified neuro-radiologists. The datasets differ in dimension, grade set and tumor type. Numerical comparison was based on the Dice score, and we provide a case-by-case analysis of the samples that caused the most difficulties for the models. The results demonstrate that 3D methods significantly outperform 2D ones while remaining robust with respect to data source and type, bringing us a little closer to AI-assisted diagnosis.
Citations: 0
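The comparison is based on the Dice score, which can be computed per tumor class from integer label volumes as in the following sketch; the integer class-ID convention is an assumption.

```python
# Per-class Dice score between a predicted and a reference multi-class label volume.
import numpy as np

def dice_per_class(pred, gt, class_ids):
    """Return {class_id: Dice} for two integer label volumes of the same shape."""
    scores = {}
    for c in class_ids:
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

# Usage on small synthetic volumes with 3 tumor classes (plus background 0).
pred = np.random.randint(0, 4, size=(32, 64, 64))
gt = np.random.randint(0, 4, size=(32, 64, 64))
print(dice_per_class(pred, gt, class_ids=[1, 2, 3]))
```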
Two-Phase Progressive Deep Transfer Learning for Cervical Cancer Dose Map Prediction
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761628
Jie Zeng, Chongyang Cao, Xingchen Peng, Jianghong Xiao, C. Zu, Xi Wu, Jiliu Zhou, Yan Wang
Abstract: Recently, deep learning has enabled the automation of radiation therapy planning, improving its quality and efficiency. However, such progress comes at the cost of large amounts of data. For some low-incidence cancers, e.g., cervical cancer, the available data is limited, which can degrade the performance of conventional deep learning models. To alleviate this, we resort to transfer learning to accomplish dose prediction from a small amount of cervical cancer data. Since cervical cancer and rectum cancer share the same scanning area and organs at risk, we are inspired to transfer the knowledge learned from rectum cancer (source domain) to cervical cancer (target domain). Specifically, to narrow the large gap between the source domain and the target domain, we propose a two-phase transfer strategy. First, we aggregate the data distributions of the two domains by linear interpolation and train an aggregated network to perceive the target domain in advance. Second, we transfer the knowledge from the well-trained aggregated network to the target network through a newly designed Weighted Feature Transfer Module (WFTM), ensuring that the target network learns more valuable knowledge. Experimental results on 130 rectum cancer patients and 42 cervical cancer patients demonstrate the effectiveness of our method.
Citations: 2
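A minimal sketch of the first transfer phase, read literally as a convex combination of source (rectum) and target (cervical) samples; mixing paired image/dose tensors with a single coefficient `alpha` is an assumption about how the "linear interpolation of data distributions" is realized, and the paper may define it differently.

```python
# Build an "aggregated" domain by linearly interpolating source and target batches (sketch).
import torch

def interpolate_domains(source_batch, target_batch, alpha=0.5):
    """Convex combination of a source and a target batch of (image, dose) tensors."""
    src_img, src_dose = source_batch
    tgt_img, tgt_dose = target_batch
    mix_img = alpha * src_img + (1.0 - alpha) * tgt_img
    mix_dose = alpha * src_dose + (1.0 - alpha) * tgt_dose
    return mix_img, mix_dose

# Usage: pre-train the aggregated network on mixed batches before transferring to the target.
src = (torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
tgt = (torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
mixed_img, mixed_dose = interpolate_domains(src, tgt, alpha=0.7)
```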
Semi-Supervised Tumor Response Grade Classification from Histology Images of Colorectal Liver Metastases
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. Pub Date: 2022-03-28. DOI: 10.1109/ISBI52829.2022.9761550
Mohamed El Amine Elforaici, E. Montagnon, F. Azzi, D. Trudel, Bich Nguyen, S. Turcotte, A. Tang, S. Kadoury
Abstract: Colorectal liver metastases (CLM) develop in almost half of patients with colon cancer. Response to systemic chemotherapy is the main determinant of patient survival. Because assessing the treatment response of CLM to chemotherapy is essential to patient prognosis, there is a need to classify tumor response grade (TRG) on histopathology slides (HPS). However, annotating HPS for training neural networks is a time-consuming task. In this work, we present an end-to-end approach for tissue classification of CLM slides leading to TRG prediction. A weakly supervised model is first trained to perform tissue classification from sparse annotations, generating segmentation maps. Then, using features extracted from these maps, a secondary model is trained to perform the TRG classification. We demonstrate the feasibility of the proposed approach on a clinical dataset of 1450 HPS from 232 CLM patients by comparing our semi-supervised Mean Teacher approach with other supervised and semi-supervised methods. The proposed pipeline outperforms the other models, achieving a classification accuracy of 94.4%. Based on the generated classification maps, the model is able to stratify patients into two TRG classes (1-2 vs. 3-5) with an accuracy of 86.2%.
Citations: 1
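The Mean Teacher component named above follows a well-known recipe: the teacher is an exponential moving average (EMA) of the student, and unlabeled tiles contribute a consistency loss. The tiny encoder, EMA decay and noise perturbation below are illustrative assumptions, not the authors' exact setup.

```python
# Mean Teacher mechanics: EMA teacher update plus a consistency loss on unlabeled tiles (sketch).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))  # 5 tissue classes, stand-in
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad = False

def ema_update(teacher, student, decay=0.99):
    """teacher <- decay * teacher + (1 - decay) * student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def consistency_loss(unlabeled_x):
    """Encourage student and teacher to agree on (differently perturbed) unlabeled tiles."""
    noisy = unlabeled_x + 0.1 * torch.randn_like(unlabeled_x)
    return F.mse_loss(torch.softmax(student(noisy), 1), torch.softmax(teacher(unlabeled_x), 1))

# One unsupervised step: compute the consistency loss, backprop into the student, update the teacher.
x_u = torch.randn(8, 3, 64, 64)
loss = consistency_loss(x_u)
loss.backward()
ema_update(teacher, student)
```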