Flex-DD: Deep denoising model with flexible priors for sparse-view CT reconstruction

Yunyi Li, Huijuan Wu, Zhengdan Li, Weihao Dai, Chen Ye

Measurement, Volume 256, Article 118121. DOI: 10.1016/j.measurement.2025.118121. Published 2025-06-07. Available at: https://www.sciencedirect.com/science/article/pii/S0263224125014800

Sparse-view Computed Tomography (SVCT) can effectively reduce radiation risk and improve scan-imaging speed. However, severe streak artifacts degrade the reconstruction results. Traditional iterative reconstruction methods rely on appropriate prior knowledge to achieve satisfactory results, while supervised deep learning techniques require large-scale paired training data, which is difficult to obtain in practical CT applications. In this paper, we propose Flex-DD, a novel deep denoising model for SVCT reconstruction that jointly exploits a deep prior and flexible hand-crafted priors. Specifically, we develop an ADMM algorithm for the optimization of Flex-DD. Moreover, we introduce a novel mechanism for flexibly incorporating the Flex-DD model into the SVCT reconstruction task via the HQS optimization framework, which significantly improves reconstruction performance with good convergence. Extensive experiments on both simulated ellipse images and human CT images demonstrate that our proposed method achieves promising results in both quantitative and visual evaluations compared to popular state-of-the-art methods.
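The abstract does not detail how the denoising model is plugged into the reconstruction, but the general pattern it names — incorporating a denoising prior into an inverse problem via Half-Quadratic Splitting (HQS) — can be sketched generically. The following is a minimal illustration of plug-and-play HQS for min_x ||Ax - y||² + prior(x), not the authors' actual Flex-DD algorithm: the function name, the dense-matrix stand-in for the CT projection operator, and the closed-form least-squares x-step are all assumptions for illustration.

```python
import numpy as np


def hqs_reconstruct(A, y, denoise, n_iters=20, mu=1.0):
    """Generic plug-and-play HQS sketch (illustrative, not Flex-DD itself).

    Solves min_x ||Ax - y||^2 + prior(x) by alternating:
      x-step: argmin_x ||Ax - y||^2 + mu * ||x - z||^2   (data fidelity)
      z-step: z = denoise(x)                             (prior / denoiser)

    A       : (m, n) measurement matrix (stand-in for the projection operator)
    y       : (m,) sparse-view measurements
    denoise : callable z -> denoised z; the plug-in prior, which could be a
              deep denoiser or a hand-crafted one
    """
    n = A.shape[1]
    z = np.zeros(n)
    AtA = A.T @ A
    Aty = A.T @ y
    # Normal-equations matrix of the quadratic x-subproblem; factor once.
    H = AtA + mu * np.eye(n)
    x = np.zeros(n)
    for _ in range(n_iters):
        # x-step: closed-form solution of the penalized least-squares problem.
        x = np.linalg.solve(H, Aty + mu * z)
        # z-step: the prior acts purely through a denoising operation.
        z = denoise(x)
    return x
```

With an identity denoiser the iteration reduces to a damped solve of the normal equations, so on a well-conditioned system it converges to the least-squares solution; a real deep or hand-crafted denoiser would instead pull each iterate toward the prior's image manifold.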
Journal introduction:
Contributions are invited on novel achievements in all fields of measurement and instrumentation science and technology. Authors are encouraged to submit novel material, whose ultimate goal is an advancement in the state of the art of: measurement and metrology fundamentals, sensors, measurement instruments, measurement and estimation techniques, measurement data processing and fusion algorithms, evaluation procedures and methodologies for plants and industrial processes, performance analysis of systems, processes and algorithms, mathematical models for measurement-oriented purposes, distributed measurement systems in a connected world.