{"title":"Towards Measuring Domain Shift in Histopathological Stain Translation in an Unsupervised Manner","authors":"Zeeshan Nisar, Jelica Vasiljević, P. Gançarski, T. Lampert","doi":"10.1109/ISBI52829.2022.9761411","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761411","url":null,"abstract":"Domain shift in digital histopathology can occur when different stains or scanners are used, during stain translation, etc. A deep neural network trained on source data may not generalise well to data that has undergone some domain shift. An important step towards being robust to domain shift is the ability to detect and measure it. This article demonstrates that the PixelCNN and domain shift metric can be used to detect and quantify domain shift in digital histopathology, and that these measures correlate strongly with generalisation performance. These findings pave the way for a mechanism to infer the average performance of a model (trained on source data) on unseen and unlabelled target data.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"46 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86801277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-Supervised Pseudo-Healthy Image Synthesis via Confidence Augmentation","authors":"Yuanqi Du, Quan Quan, Hu Han, S. K. Zhou","doi":"10.1109/ISBI52829.2022.9761522","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761522","url":null,"abstract":"Pseudo-healthy image synthesis, which computationally synthesizes a pathology-free image from a pathological one, has proven valuable in many downstream medical image analysis tasks, from lesion detection and data augmentation to clinical surgery suggestion. Thanks to the advancement of generative adversarial networks (GANs), recent studies have made steady progress in synthesizing realistic-looking pseudo-healthy images that preserve structural identity as well as a healthy-looking appearance. Nevertheless, it is challenging to generate high-quality pseudo-healthy images in the absence of a lesion segmentation mask. In this paper, we aim to reduce the need for large amounts of labeled lesion segmentation data when synthesizing pseudo-healthy images. We propose a semi-supervised pseudo-healthy image synthesis framework which leverages unlabeled pathological image data for efficient pseudo-healthy image synthesis based on a novel confidence augmentation trick. Furthermore, we redesign the network architecture, which builds on previous studies and allows for more flexible applications. Extensive experiments have demonstrated the effectiveness of the proposed method in generating realistic-looking pseudo-healthy images and improving downstream task performances.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"12 1 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85263434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Domain Generalization in Restoration of Cataract Fundus Images Via High-Frequency Components","authors":"Haofeng Liu, Heng Li, Mingyang Ou, Yitian Zhao, H. Qi, Yan Hu, Jiang Liu","doi":"10.1109/ISBI52829.2022.9761606","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761606","url":null,"abstract":"Cataracts are the most common blinding disease, and they also impair observation of the fundus. To facilitate fundus examination of cataract patients, restoration algorithms have been proposed to address the degradation of fundus images caused by cataracts. However, it is impractical in clinics to collect paired or annotated fundus images for developing restoration models. In this paper, a restoration algorithm is designed for cataractous images without paired or annotated data. Domain generalization (DG) is applied to learn domain-invariant features (DIFs) from synthesized data, and high-frequency components (HFCs) are extracted to conduct domain alignment. In the experiments, the proposed algorithm is applied to unseen target data. Its effectiveness is demonstrated in an ablation study and in comparisons with state-of-the-art methods. The code of this paper will be released at https://github.com/HeverLaw/Restoration-of-Cataract-Images-via-Domain-Generalization.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"3 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79654327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LA-Net: Lung Adenocarcinoma Classification with Assistants from Lung Nodule Classification and Positional Information","authors":"Mancheng Meng, Mianxin Liu, Xianjie Zhang, Yuxuan Liu, Xiran Cai, Yaozong Gao, Xiaoping Zhou, D. Shen","doi":"10.1109/ISBI52829.2022.9761450","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761450","url":null,"abstract":"Lung cancer (LC) is one of the most common and fatal cancers in humans. Recent LC diagnostic studies focus on using convolutional neural networks (CNNs) and have achieved some success. However, there are at least two limitations with current studies: 1) Well-labeled lung adenocarcinoma (LA, a subtype of LC) data are rare, leading to limited samples for training CNNs; 2) Conventional CNNs discard positional information through pooling operations, whereas positional information is of great importance in the clinical diagnosis of LA. Here, we propose \"LA-Net\" to address these issues. First, we apply transfer learning from a model pre-trained on lung nodule (LN) classification, for which training data are richer, to assist LA diagnosis. In addition, self-attention mechanisms are introduced to properly extract features from the source dataset (LN) and to refine the combined features from the source and target sets for LA classification. Moreover, we augment the CNN with another self-attention mechanism over content and positional information. Our model achieves 83.82% accuracy and 90.65% area under the receiver operating characteristic curve (AUC) on the LA classification task with 725 subjects, outperforming state-of-the-art methods. Our study supports the potential future clinical application of our method to LA diagnosis, and also highlights the importance of including domain knowledge in the design of neural networks.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"12 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90950010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improved Deep Learning Framework for MR-to-CT Image Synthesis with a New Hybrid Objective Function","authors":"Sui Paul Ang, S. L. Phung, M. Field, M. Schira","doi":"10.1109/ISBI52829.2022.9761546","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761546","url":null,"abstract":"There is an emerging interest in radiotherapy treatment planning that uses only magnetic resonance (MR) imaging. Current clinical workflows rely on computed tomography (CT) images for dose calculation and patient positioning, therefore synthetic CT images need to be derived from MR images. Recent efforts for MR-to-CT image synthesis have focused on unsupervised training for ease of data preparation. However, accuracy is more important than convenience. In this paper, we propose a deep learning framework for MR-to-CT image synthesis that is trained in a supervised manner. The proposed framework utilizes a new hybrid objective function to enforce visual realism, accurate electron density information, and structural consistency between the MR and CT image domains. Our experiments show that the proposed method (MAE of 68.22, PSNR of 22.28, and FID of 0.73) outperforms the existing unsupervised and supervised techniques in both quantitative and qualitative comparisons.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"1 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91335617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Left Main Bifurcation Shape Features with an Autoencoder","authors":"Nanway Chen, R. Gharleghi, A. Sowmya, S. Beier","doi":"10.1109/ISBI52829.2022.9761591","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761591","url":null,"abstract":"Geometric characteristics of the coronary arteries have been suggested as potential markers for disease risk. However, evaluation of such characteristics relies on judgement by human experts, and is thus variable and may lack sophistication. Here we apply recent advances in 3D deep learning to automatically obtain a shape representation of the Left Main Bifurcation (LMB) of the coronary artery. We train a Variational Auto-Encoder based on the FoldingNet architecture to encode LMB shape features in a 450-dimensional feature vector. The geometric features of patient-specific LMBs can then be manipulated by modifying, combining or interpolating the feature vectors before decoding. We also show that these vectors, on average, perform better than hand-crafted features in predicting measures of adverse blood flow (oscillating shear index or ‘OSI’, relative residence time ‘RRT’ and time-averaged wall shear stress ‘TAWSS’), with an R2 goodness-of-fit value of 84.1% compared to 79.7%. These learned representations can also be used in other downstream predictive modelling tasks where an encoded version of an LMB is needed.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"62 3 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89813834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic-Aware Temporal Channel-Wise Attention for Cardiac Function Assessment","authors":"Guanqi Chen, Guanbin Li","doi":"10.1109/ISBI52829.2022.9761481","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761481","url":null,"abstract":"Cardiac function assessment aims at predicting left ventricular ejection fraction (LVEF) given an echocardiogram video, which requires models to focus on the changes in the left ventricle during the cardiac cycle. Assessing cardiac function accurately and automatically from an echocardiogram video is a valuable topic in intelligent assisted healthcare. Existing video-based methods do not pay much attention to the left ventricular region, nor to the left ventricular changes caused by motion. In this work, we propose a semi-supervised auxiliary learning paradigm with a left ventricular segmentation task, which contributes to representation learning for the left ventricular region. To better model the importance of motion information, we introduce a temporal channel-wise attention (TCA) module to excite those channels used to describe motion. Furthermore, we extend the TCA module with semantic perception by taking the segmentation map of the left ventricle as input to focus on the motion patterns of the left ventricle. Finally, to reduce the difficulty of direct LVEF regression, we utilize an anchor-based classification and regression method to predict LVEF. Our approach achieves state-of-the-art performance on the Stanford dataset, with improvements of 0.22 in MAE, 0.26 in RMSE, and 1.9% in R2.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"1 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90522228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-View Fusion Convolutional Neural Network for Automatic Landmark Location on Spinal X-Rays","authors":"Kailai Zhang, Nanfang Xu, Ji Wu","doi":"10.1109/ISBI52829.2022.9761439","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761439","url":null,"abstract":"In clinical practice, landmark location plays an important role in spine deformity assessment, which is the foundation for measurement of several spinal morphological parameters. Clinicians usually use both anterior-posterior (AP) view X-rays and lateral (LAT) view X-rays of the same patient for diagnosis. However, for automatic landmark location, the information shared between multi-view X-rays is seldom considered. To address this problem, in this paper we propose a multi-view fusion convolutional neural network for automatic landmark location on AP X-rays and LAT X-rays simultaneously. Based on an object detection framework, the two channels representing the multi-view X-rays first share network parameters in the convolutional backbone; we then design an image-level fusion module and an object-level fusion module to combine the information from both channels. Finally, we attach a landmark prediction branch to the end of each channel for landmark location. Experimental results show that our proposed method achieves more accurate vertebra detection and more precise landmark location than predicting them separately, which can provide reliable assistance for clinicians.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"1 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75182755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Band Selective Volterra Filter for Nonlinear Ultrasound Imaging","authors":"Abhishek Sahoo, E. Ebbini","doi":"10.1109/ISBI52829.2022.9761516","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761516","url":null,"abstract":"Nonlinear ultrasound imaging methods such as tissue harmonic imaging (THI) and pulse inversion (PI) have improved image quality, but have shortcomings that reduce their specificity. These limitations stem from loss of axial resolution due to bandwidth restrictions, sensitivity to motion artifacts due to multiple transmissions per image line, and loss of dynamic range due to echo subtraction. Several variants of these methods to overcome their limitations are being investigated by research groups worldwide. In this paper, we develop a band selective quadratic Volterra filter capable of extracting nonlinear echo components throughout the entire bandwidth for improved spatial and contrast resolution. The performance of the proposed method is compared with existing imaging techniques such as PI and the truncated SVD (TSVD) quadratic kernel design method. Imaging results from a quality assurance phantom without ultrasound contrast agents (UCA) as well as in vivo porcine kidney data with UCA are used to demonstrate the applicability of the proposed Volterra filter as a nonlinear echo separation model irrespective of the source of nonlinearity. The new band selective kernel design method is shown to enhance contrast and lateral resolution while preserving axial resolution, without the need for multiple transmissions per A-line.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"47 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75282455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstruction of Standard-Dose Pet From Low-Dose Pet Via Dual-Frequency Supervision and Global Aggregation Module","authors":"C. Jiang, Yongsheng Pan, Zhiming Cui, Dinggang Shen","doi":"10.1109/ISBI52829.2022.9761694","DOIUrl":"https://doi.org/10.1109/ISBI52829.2022.9761694","url":null,"abstract":"Positron emission tomography (PET) imaging is a widely used technology in clinics. To meet clinical diagnostic requirements, standard-dose radioactive tracers need to be injected into the body when acquiring PET images. To reduce the imaging radiation hazard while maintaining PET image quality, in this paper we propose a novel and effective approach to reconstruct standard-dose PET (SPET) images from low-dose PET (LPET) images. Specifically, we first design a two-branch network to preserve richer high-frequency details by reconstructing the low-frequency and high-frequency components separately. Then, we design a global aggregation module to integrate global information for achieving better quality in those difficult-to-reconstruct regions. Validation on a real clinical dataset suggests that our approach outperforms previous methods both qualitatively and quantitatively.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"48 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84954259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}