Shu Li;Yi Liu;Rongbiao Yan;Haowen Zhang;Shubin Wang;Ting Ding;Zhiguo Gui
DOI: 10.1109/TCI.2024.3408091
Journal: IEEE Transactions on Computational Imaging, vol. 10, pp. 899-914
Published: 2024-03-31 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10543110/
DD-DCSR: Image Denoising for Low-Dose CT via Dual-Dictionary Deep Convolutional Sparse Representation
Most existing low-dose computed tomography (LDCT) denoising algorithms based on convolutional neural networks lack interpretability because they have no explicit mathematical basis. Moreover, sparse representation over a single dictionary cannot fully restore the texture details of an image during denoising. To address these problems, we propose a Dual-Dictionary Convolutional Sparse Representation (DD-CSR) model and construct a Dual-Dictionary Deep Convolutional Sparse Representation network (DD-DCSR) by unfolding the model iteratively, so that each module of the network corresponds one-to-one to a step of the model. In the proposed DD-CSR, high-frequency information is first extracted by Local Total Variation (LTV); two different learnable convolutional dictionaries are then used to sparsely represent the LDCT image and its high-frequency map. To improve the robustness of the model, an adaptive coefficient is introduced into the convolutional dictionary of the LDCT image, which allows the image to be represented by fewer dictionary atoms and reduces the number of model parameters. Because the sparsity of the convolutional feature maps is closely related to the noise level, learnable weight coefficients are introduced into the penalty terms that process the LDCT high-frequency maps. Experimental results show that the interpretable DD-DCSR network restores image texture details well while removing noise and artifacts.
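The network described above is obtained by unfolding an iterative sparse-coding model into layers. As a rough illustration only (not the paper's dual-dictionary method), the sketch below runs plain single-dictionary convolutional sparse coding with ISTA, the classic iteration that deep unfolding networks typically turn into learnable layers. All function names, filters, and parameters here are hypothetical.

```python
import numpy as np

def conv2_same(x, k):
    """'Same'-size 2-D convolution with zero padding (odd-sized kernel assumed)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    kf = k[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros_like(x)
    for i in range(kh):
        for j in range(kw):
            out += kf[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def soft_threshold(x, tau):
    """Proximal operator of the l1 penalty (induces sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def csc_ista(y, filters, lam=0.05, step=0.01, n_iter=30):
    """Convolutional sparse coding of image y over one dictionary via ISTA.

    Minimises 0.5 * ||sum_k d_k * z_k - y||^2 + lam * sum_k ||z_k||_1,
    returning one sparse feature map z_k per filter d_k.
    """
    maps = [np.zeros_like(y) for _ in filters]
    for _ in range(n_iter):
        # current reconstruction and residual
        recon = sum(conv2_same(z, d) for z, d in zip(maps, filters))
        resid = recon - y
        for k, d in enumerate(filters):
            # gradient step: adjoint of convolution = convolution with flipped filter
            grad = conv2_same(resid, d[::-1, ::-1])
            maps[k] = soft_threshold(maps[k] - step * grad, step * lam)
    return maps
```

In a deep unfolding network such as DD-DCSR, each ISTA iteration becomes one network module, with the filters and threshold levels learned per layer rather than fixed; the paper additionally codes the LTV high-frequency map with a second dictionary and makes the penalty weights learnable.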
Journal introduction:
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.