{"title":"Ideal Observer Computation by Use of Markov-Chain Monte Carlo with Generative Adversarial Networks","authors":"Weimin Zhou, Umberto Villa, M. Anastasio","doi":"10.48550/arXiv.2304.00433","DOIUrl":"https://doi.org/10.48550/arXiv.2304.00433","url":null,"abstract":"Medical imaging systems are often evaluated and optimized via objective, or task-specific, measures of image quality (IQ) that quantify the performance of an observer on a specific clinically-relevant task. The performance of the Bayesian Ideal Observer (IO) sets an upper limit among all observers, numerical or human, and has been advocated for use as a figure-of-merit (FOM) for evaluating and optimizing medical imaging systems. However, the IO test statistic corresponds to the likelihood ratio that is intractable to compute in the majority of cases. A sampling-based method that employs Markov-Chain Monte Carlo (MCMC) techniques was previously proposed to estimate the IO performance. However, current applications of MCMC methods for IO approximation have been limited to a small number of situations where the considered distribution of to-be-imaged objects can be described by a relatively simple stochastic object model (SOM). As such, there remains an important need to extend the domain of applicability of MCMC methods to address a large variety of scenarios where IO-based assessments are needed but the associated SOMs have not been available. In this study, a novel MCMC method that employs a generative adversarial network (GAN)-based SOM, referred to as MCMC-GAN, is described and evaluated. The MCMC-GAN method was quantitatively validated by use of test-cases for which reference solutions were available. The results demonstrate that the MCMC-GAN method can extend the domain of applicability of MCMC methods for conducting IO analyses of medical imaging systems.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"PP 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41342898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aligning Multi-Sequence CMR Towards Fully Automated Myocardial Pathology Segmentation","authors":"Wangbin Ding, Lei Li, Junyi Qiu, Sihan Wang, Liqin Huang, Yinyin Chen, Shan Yang, X. Zhuang","doi":"10.48550/arXiv.2302.03537","DOIUrl":"https://doi.org/10.48550/arXiv.2302.03537","url":null,"abstract":"Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS, but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to the respiratory motions in clinical practices, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, considering the spatial relationship between myocardial pathologies and myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework could achieve promising performance for fully automatic MyoPS.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45904269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AIROGS: Artificial Intelligence for RObust Glaucoma Screening Challenge","authors":"Coen de Vente, Koen A. Vermeer, Nicolas Jaccard, He Wang, Hongyi Sun, F. Khader, D. Truhn, Temirgali Aimyshev, Yerkebulan Zhanibekuly, Tien-Dung Le, A. Galdran, M. Ballester, G. Carneiro, G. DevikaR, S. HrishikeshP., Densen Puthussery, Hong Liu, Zekang Yang, Satoshi Kondo, S. Kasai, E. Wang, Ashritha Durvasula, J'onathan Heras, M. Zapata, Teresa Ara'ujo, Guilherme Aresta, Hrvoje Bogunovi'c, Mustafa Arikan, Y. Lee, Hyun Bin Cho, Y. Choi, Abdul Qayyum, Imran Razzak, B. Ginneken, H. Lemij, Clara I. S'anchez","doi":"10.48550/arXiv.2302.01738","DOIUrl":"https://doi.org/10.48550/arXiv.2302.01738","url":null,"abstract":"The early detection of glaucoma is essential in preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. This challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on-the-fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43317374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DEQ-MPI: A Deep Equilibrium Reconstruction with Learned Consistency for Magnetic Particle Imaging.","authors":"Alper Gungor, Baris Askin, D. Soydan, Can Barics Top, E. Saritas, Tolga cCukur","doi":"10.1109/TMI.2023.3300704.","DOIUrl":"https://doi.org/10.1109/TMI.2023.3300704.","url":null,"abstract":"Magnetic particle imaging (MPI) offers unparalleled contrast and resolution for tracing magnetic nanoparticles. A common imaging procedure calibrates a system matrix (SM) that is used to reconstruct data from subsequent scans. The ill-posed reconstruction problem can be solved by simultaneously enforcing data consistency based on the SM and regularizing the solution based on an image prior. Traditional hand-crafted priors cannot capture the complex attributes of MPI images, whereas recent MPI methods based on learned priors can suffer from extensive inference times or limited generalization performance. Here, we introduce a novel physics-driven method for MPI reconstruction based on a deep equilibrium model with learned data consistency (DEQ-MPI). DEQ-MPI reconstructs images by augmenting neural networks into an iterative optimization, as inspired by unrolling methods in deep learning. Yet, conventional unrolling methods are computationally restricted to few iterations resulting in non-convergent solutions, and they use hand-crafted consistency measures that can yield suboptimal capture of the data distribution. DEQ-MPI instead trains an implicit mapping to maximize the quality of a convergent solution, and it incorporates a learned consistency measure to better account for the data distribution. Demonstrations on simulated and experimental data indicate that DEQ-MPI achieves superior image quality and competitive inference time to state-of-the-art MPI reconstruction methods.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45398959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stable deep MRI reconstruction using Generative Priors","authors":"Martin Zach, F. Knoll, T. Pock","doi":"10.48550/arXiv.2210.13834","DOIUrl":"https://doi.org/10.48550/arXiv.2210.13834","url":null,"abstract":"Data-driven approaches recently achieved remarkable success in magnetic resonance imaging (MRI) reconstruction, but integration into clinical routine remains challenging due to a lack of generalizability and interpretability. In this paper, we address these challenges in a unified framework based on generative image priors. We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only. After training, the regularizer encodes higher-level domain statistics which we demonstrate by synthesizing images without data. Embedding the trained model in a classical variational approach yields high-quality reconstructions irrespective of the sub-sampling pattern. In addition, the model shows stable behavior when confronted with out-of-distribution data in the form of contrast variation. Furthermore, a probabilistic interpretation provides a distribution of reconstructions and hence allows uncertainty quantification. To reconstruct parallel MRI, we propose a fast algorithm to jointly estimate the image and the sensitivity maps. The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods, while preserving the flexibility with respect to sub-sampling patterns and allowing for uncertainty quantification.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45098550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction","authors":"L. Jin, Shi Gu, D. Wei, Kaiming Kuang, H. Pfister, Bingbing Ni, Jiancheng Yang, Ming Li","doi":"10.48550/arXiv.2210.09309","DOIUrl":"https://doi.org/10.48550/arXiv.2210.09309","url":null,"abstract":"Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to communities, or focus on rib segmentation that neglects the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on the RibSeg v2, we develop a pipeline including deep learning-based methods for rib labeling, and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46077902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bidirectional Semi-supervised Dual-branch CNN for Robust 3D Reconstruction of Stereo Endoscopic Images via Adaptive Cross and Parallel Supervisions","authors":"Hongkuan Shi, Zhiwei Wang, Ying Zhou, Dun Li, Xin Yang, Qiang Li","doi":"10.48550/arXiv.2210.08291","DOIUrl":"https://doi.org/10.48550/arXiv.2210.08291","url":null,"abstract":"Semi-supervised learning via teacher-student network can train a model effectively on a few labeled samples. It enables a student model to distill knowledge from the teacher's predictions of extra unlabeled data. However, such knowledge flow is typically unidirectional, having the accuracy vulnerable to the quality of teacher model. In this paper, we seek to robust 3D reconstruction of stereo endoscopic images by proposing a novel fashion of bidirectional learning between two learners, each of which can play both roles of teacher and student concurrently. Specifically, we introduce two self-supervisions, i.e., Adaptive Cross Supervision (ACS) and Adaptive Parallel Supervision (APS), to learn a dual-branch convolutional neural network. The two branches predict two different disparity probability distributions for the same position, and output their expectations as disparity values. The learned knowledge flows across branches along two directions: a cross direction (disparity guides distribution in ACS) and a parallel direction (disparity guides disparity in APS). Moreover, each branch also learns confidences to dynamically refine its provided supervisions. In ACS, the predicted disparity is softened into a unimodal distribution, and the lower the confidence, the smoother the distribution. In APS, the incorrect predictions are suppressed by lowering the weights of those with low confidence. With the adaptive bidirectional learning, the two branches enjoy well-tuned mutual supervisions, and eventually converge on a consistent and more accurate disparity estimation. The experimental results on four public datasets demonstrate our superior accuracy over other state-of-the-arts with a relative decrease of averaged disparity error by at least 9.76%.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45789439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Dual-Attention Learning Network with Word and Sentence Embedding for Medical Visual Question Answering","authors":"Xiaofei Huang, Hongfang Gong","doi":"10.48550/arXiv.2210.00220","DOIUrl":"https://doi.org/10.48550/arXiv.2210.00220","url":null,"abstract":"Research in medical visual question answering (MVQA) can contribute to the development of computer-aided diagnosis. MVQA is a task that aims to predict accurate and convincing answers based on given medical images and associated natural language questions. This task requires extracting medical knowledge-rich feature content and making fine-grained understandings of them. Therefore, constructing an effective feature extraction and understanding scheme are keys to modeling. Existing MVQA question extraction schemes mainly focus on word information, ignoring medical information in the text, such as medical concepts and domain-specific terms. Meanwhile, some visual and textual feature understanding schemes cannot effectively capture the correlation between regions and keywords for reasonable visual reasoning. In this study, a dual-attention learning network with word and sentence embedding (DALNet-WSE) is proposed. We design a module, transformer with sentence embedding (TSE), to extract a double embedding representation of questions containing keywords and medical information. A dual-attention learning (DAL) module consisting of self-attention and guided attention is proposed to model intensive intramodal and intermodal interactions. With multiple DAL modules (DALs), learning visual and textual co-attention can increase the granularity of understanding and improve visual reasoning. Experimental results on the ImageCLEF 2019 VQA-MED (VQA-MED 2019) and VQA-RAD datasets demonstrate that our proposed method outperforms previous state-of-the-art methods. According to the ablation studies and Grad-CAM maps, DALNet-WSE can extract rich textual information and has strong visual reasoning ability.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"PP 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42795071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Medical Image Translation with Adversarial Diffusion Models","authors":"Muzaffer Ozbey, S. Dar, H. A. Bedel, Onat Dalmaz, cSaban Ozturk, Alper Gungor, Tolga cCukur","doi":"10.48550/arXiv.2207.08208","DOIUrl":"https://doi.org/10.48550/arXiv.2207.08208","url":null,"abstract":"Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45407622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LViT: Language meets Vision Transformer in Medical Image Segmentation","authors":"Zihan Li, Yunxiang Li, Qingde Li, You Zhang, Puyang Wang, Dazhou Guo, Le Lu, D. Jin, Qingqi Hong","doi":"10.48550/arXiv.2206.14718","DOIUrl":"https://doi.org/10.48550/arXiv.2206.14718","url":null,"abstract":"Deep learning has been widely used in medical image segmentation and other aspects. However, the performance of existing medical image segmentation models has been limited by the challenge of obtaining sufficient high-quality labeled data due to the prohibitive data annotation cost. To alleviate this limitation, we propose a new text-augmented medical image segmentation model LViT (Language meets Vision Transformer). In our LViT model, medical text annotation is incorporated to compensate for the quality deficiency in image data. In addition, the text information can guide to generate pseudo labels of improved quality in the semi-supervised learning. We also propose an Exponential Pseudo label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in semi-supervised LViT setting. In our model, LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-rays and CT images. Experimental results show that our proposed LViT has superior segmentation performance in both fully-supervised and semi-supervised setting. The code and datasets are available at https://github.com/HUANGLIZI/LViT.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":" ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2022-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44585888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}