Jiajun Li;Wenchao Du;Huanhuan Cui;Hu Chen;Yi Zhang;Hongyu Yang
{"title":"Progressively Prompt-Guided Models for Sparse-View CT Reconstruction","authors":"Jiajun Li;Wenchao Du;Huanhuan Cui;Hu Chen;Yi Zhang;Hongyu Yang","doi":"10.1109/TRPMS.2024.3512172","DOIUrl":null,"url":null,"abstract":"While sparse-view computed tomography (CT) remarkably reduces the ionizing radiation dose, the reconstructed images have been compromised by streak-like artifacts, affecting clinical diagnostics. The deep unrolled methods have achieved promising results by integrating powerful regularization terms with deep learning technologies into iterative reconstruction algorithms. However, leading works focus on designing powerful regularization term to capture image and noise priors, which always requires carefully designed blocks, and leads to heavy computational burden while bringing over-smoothness into results. In this article, we integrate the idea of prompt learning into the general regularization terms, and propose a progressively prompt-guided model (shorted by PPM) to alleviate above problems. More specifically, we inject a prompting module into each unrolled block to perceive more native priors in a self-adaptive manner, which would capture more effective image and noise priors to guide high-quality CT reconstruction. Furthermore, we propose a progressively guiding strategy to facilitate high-quality prompt generation while speeding model convergence. Extensive experiments on multiple sparse-view CT reconstruction benchmarks demonstrate that our PPM achieves state-of-the-art performance in terms of artifact reduction and structure preservation while with fewer parameters and higher-inference efficiency.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 4","pages":"447-459"},"PeriodicalIF":4.6000,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10778259","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10778259/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
While sparse-view computed tomography (CT) remarkably reduces the ionizing radiation dose, the reconstructed images are compromised by streak-like artifacts that affect clinical diagnosis. Deep unrolled methods have achieved promising results by integrating powerful regularization terms, realized with deep learning technologies, into iterative reconstruction algorithms. However, leading works focus on designing powerful regularization terms to capture image and noise priors, which usually requires carefully designed blocks, incurs a heavy computational burden, and tends to over-smooth the results. In this article, we integrate the idea of prompt learning into the general regularization terms and propose a progressively prompt-guided model (PPM for short) to alleviate the above problems. More specifically, we inject a prompting module into each unrolled block to perceive more native priors in a self-adaptive manner, capturing more effective image and noise priors to guide high-quality CT reconstruction. Furthermore, we propose a progressive guiding strategy that facilitates high-quality prompt generation while speeding up model convergence. Extensive experiments on multiple sparse-view CT reconstruction benchmarks demonstrate that our PPM achieves state-of-the-art performance in artifact reduction and structure preservation with fewer parameters and higher inference efficiency.
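To make the unrolling idea concrete, below is a minimal PyTorch sketch of one prompt-guided unrolled block: a gradient step on the data-fidelity term followed by a learned proximal step that is modulated by a prompt module. Everything here is a hypothetical illustration, not the paper's PPM: the class names (PromptModule, UnrolledBlock), the weighted-sum prompt design, and the placeholder forward operator A / adjoint At are all assumptions, since the abstract does not specify the architecture.

```python
# Hypothetical sketch of a prompt-guided unrolled reconstruction block.
# NOT the paper's PPM; an assumption-based illustration of the general idea.
import torch
import torch.nn as nn


class PromptModule(nn.Module):
    """Builds a prompt from the current estimate's statistics and injects it
    into the feature map (a self-adaptive prior, in the abstract's terms)."""

    def __init__(self, channels: int, prompt_dim: int = 8):
        super().__init__()
        # Learnable prompt components, combined per-input below.
        self.prompts = nn.Parameter(torch.randn(prompt_dim, channels, 1, 1))
        self.weighting = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # global statistics
            nn.Conv2d(channels, prompt_dim, 1),      # component weights
            nn.Softmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weighting(x)                        # (B, P, 1, 1)
        # Input-adaptive mixture of prompt components -> (B, C, 1, 1).
        prompt = torch.einsum("bpij,pcij->bcij", w, self.prompts)
        return x + prompt                            # inject as a broadcast bias


class UnrolledBlock(nn.Module):
    """One unrolled iteration: data-fidelity gradient step on ||Ax - y||^2,
    then a prompt-conditioned learned regularization (proximal) step."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learned step size
        self.lift = nn.Conv2d(1, channels, 3, padding=1)
        self.prompting = PromptModule(channels)
        self.prox = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        # Gradient step on the data term (A: forward projector, At: adjoint).
        x = x - self.step * At(A(x) - y)
        # Learned proximal step, guided by the prompt module.
        feat = self.prompting(self.lift(x))
        return x + self.prox(feat)                   # residual refinement


if __name__ == "__main__":
    # Toy check with identity operators standing in for the Radon transform
    # and its adjoint (purely for shape-testing this sketch).
    block = UnrolledBlock()
    x = torch.randn(2, 1, 64, 64)
    y = torch.randn(2, 1, 64, 64)
    out = block(x, y, A=lambda v: v, At=lambda v: v)
    print(out.shape)  # torch.Size([2, 1, 64, 64])
```

A full model would stack several such blocks and replace the placeholder operators with a differentiable projector/back-projector pair. How the abstract's progressive guiding strategy makes prompts evolve across blocks is not specified, so this sketch conditions each block independently.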