{"title":"基于改进的边缘感知生成对抗网络的跨模态 PET 合成方法","authors":"Liting Lei, Rui Zhang, Haifei Zhang, Xiujing Li, Yuchao Zou, Saad Aldosary, Azza S. Hassanein","doi":"10.1166/jno.2023.3502","DOIUrl":null,"url":null,"abstract":"Current cross-modal synthesis techniques for medical imaging have limits in their ability to accurately capture the structural information of human tissue, leading to problems such edge information loss and poor signal-to-noise ratio in the generated images. In order to synthesize PET pictures from Magnetic Resonance (MR) images, a novel approach for cross-modal synthesis of medical images is thus suggested. The foundation of this approach is an enhanced Edge-aware Generative Adversarial Network (Ea-GAN), which integrates an edge detector into the GAN framework to better capture local texture and edge information in the pictures. The Convolutional Block Attention Module (CBAM) is added in the generator portion of the GAN to prioritize important characteristics in the pictures. In order to improve the Ea-GAN discriminator, its receptive field is shrunk to concentrate more on the tiny features of brain tissue in the pictures, boosting the generator’s performance. The edge loss between actual PET pictures and synthetic PET images is also included into the algorithm’s loss function, further enhancing the generator’s performance. The suggested PET image synthesis algorithm, which is based on the enhanced Ea-GAN, outperforms different current approaches in terms of both quantitative and qualitative assessments, according to experimental findings. The architecture of the brain tissue are effectively preserved in the synthetic PET pictures, which also aesthetically nearly resemble genuine images.","PeriodicalId":16446,"journal":{"name":"Journal of Nanoelectronics and Optoelectronics","volume":"16 1","pages":""},"PeriodicalIF":0.6000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Modal PET Synthesis Method Based on Improved Edge-Aware Generative Adversarial Network\",\"authors\":\"Liting Lei, Rui Zhang, Haifei Zhang, Xiujing Li, Yuchao Zou, Saad Aldosary, Azza S. Hassanein\",\"doi\":\"10.1166/jno.2023.3502\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Current cross-modal synthesis techniques for medical imaging have limits in their ability to accurately capture the structural information of human tissue, leading to problems such edge information loss and poor signal-to-noise ratio in the generated images. In order to synthesize PET pictures from Magnetic Resonance (MR) images, a novel approach for cross-modal synthesis of medical images is thus suggested. The foundation of this approach is an enhanced Edge-aware Generative Adversarial Network (Ea-GAN), which integrates an edge detector into the GAN framework to better capture local texture and edge information in the pictures. The Convolutional Block Attention Module (CBAM) is added in the generator portion of the GAN to prioritize important characteristics in the pictures. In order to improve the Ea-GAN discriminator, its receptive field is shrunk to concentrate more on the tiny features of brain tissue in the pictures, boosting the generator’s performance. The edge loss between actual PET pictures and synthetic PET images is also included into the algorithm’s loss function, further enhancing the generator’s performance. 
The suggested PET image synthesis algorithm, which is based on the enhanced Ea-GAN, outperforms different current approaches in terms of both quantitative and qualitative assessments, according to experimental findings. The architecture of the brain tissue are effectively preserved in the synthetic PET pictures, which also aesthetically nearly resemble genuine images.\",\"PeriodicalId\":16446,\"journal\":{\"name\":\"Journal of Nanoelectronics and Optoelectronics\",\"volume\":\"16 1\",\"pages\":\"\"},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Nanoelectronics and Optoelectronics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1166/jno.2023.3502\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Nanoelectronics and Optoelectronics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1166/jno.2023.3502","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0
摘要
目前用于医学成像的跨模态合成技术在准确捕捉人体组织结构信息方面存在局限性,导致生成的图像存在边缘信息丢失和信噪比差等问题。为了从磁共振(MR)图像合成 PET 图像,我们提出了一种新的医学图像跨模态合成方法。这种方法的基础是增强型边缘感知生成对抗网络(Ea-GAN),它将边缘检测器集成到 GAN 框架中,以更好地捕捉图片中的局部纹理和边缘信息。卷积块注意模块(CBAM)被添加到 GAN 的生成器部分,以优先处理图片中的重要特征。为了改进 Ea-GAN 识别器,缩小了它的感受野,使其更集中于图片中脑组织的微小特征,从而提高了生成器的性能。算法的损失函数还包括实际 PET 图像与合成 PET 图像之间的边缘损失,进一步提高了生成器的性能。实验结果表明,基于增强型 Ea-GAN 的 PET 图像合成算法在定量和定性评估方面均优于现有的各种方法。合成的 PET 图像有效地保留了脑组织的结构,在美学上也几乎与真实图像相似。
Cross-Modal PET Synthesis Method Based on Improved Edge-Aware Generative Adversarial Network
Current cross-modal synthesis techniques for medical imaging are limited in their ability to accurately capture the structural information of human tissue, leading to problems such as edge information loss and poor signal-to-noise ratio in the generated images. To synthesize PET images from magnetic resonance (MR) images, a novel approach for cross-modal synthesis of medical images is therefore proposed. The foundation of this approach is an improved Edge-aware Generative Adversarial Network (Ea-GAN), which integrates an edge detector into the GAN framework to better capture local texture and edge information in the images. A Convolutional Block Attention Module (CBAM) is added to the generator of the GAN to emphasize important features in the images. To improve the Ea-GAN discriminator, its receptive field is reduced so that it concentrates more on fine details of brain tissue, which in turn boosts the generator’s performance. An edge loss between real PET images and synthetic PET images is also included in the algorithm’s loss function, further enhancing the generator’s performance. Experimental results show that the proposed PET image synthesis algorithm, based on the improved Ea-GAN, outperforms several existing approaches in both quantitative and qualitative evaluations. The structure of the brain tissue is effectively preserved in the synthetic PET images, which also closely resemble real images visually.
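
The abstract names two concrete architectural ingredients, a CBAM block in the generator and an edge term in the loss, but gives no implementation details. The following is therefore a minimal sketch of a CBAM unit (channel attention followed by spatial attention) as it might be dropped into a generator's feature pipeline, assuming a PyTorch implementation; the reduction ratio of 16 and the 7x7 spatial kernel are the defaults from the original CBAM paper, not values confirmed by this article.

# Minimal CBAM sketch (assumed PyTorch implementation; defaults from the
# original CBAM paper, not parameters reported in this article).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze spatial dimensions with average and max pooling,
        # pass both through a shared MLP, then gate the channels.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool along the channel axis and learn a 2-D attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_att(self.channel_att(x))


# Example: refine a 64-channel feature map inside a generator block.
feats = torch.randn(1, 64, 128, 128)
refined = CBAM(64)(feats)
print(refined.shape)  # torch.Size([1, 64, 128, 128])

In a U-Net- or pix2pix-style generator, a block like this would typically be applied to intermediate feature maps after a convolutional stage, so the attention weights reweight channels and spatial locations before the next stage.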
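
The edge-aware part of the objective can be illustrated in the same spirit. The sketch below computes Sobel edge maps for real and synthetic PET slices and penalizes their L1 distance; the actual detector, distance, and weighting used in the paper's improved Ea-GAN are not specified in the abstract, so sobel_edges, edge_loss, and lambda_edge are illustrative names and values, not the authors' exact formulation.

# Illustrative Sobel-based edge loss between real and synthetic PET slices
# (assumed PyTorch; single-channel 2-D inputs of shape (N, 1, H, W)).
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel edge-magnitude map for a (N, 1, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel kernel for the vertical gradient
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def edge_loss(fake_pet: torch.Tensor, real_pet: torch.Tensor) -> torch.Tensor:
    """L1 distance between the edge maps of synthetic and real PET images."""
    return F.l1_loss(sobel_edges(fake_pet), sobel_edges(real_pet))


# Hypothetical use inside the generator objective: the edge term would be
# added to the usual adversarial and pixel-wise reconstruction losses with a
# weight lambda_edge (a tunable hyperparameter, not a value from the paper).
fake_pet = torch.rand(2, 1, 128, 128)   # generator output (placeholder)
real_pet = torch.rand(2, 1, 128, 128)   # ground-truth PET (placeholder)
lambda_edge = 10.0
g_loss_extra = lambda_edge * edge_loss(fake_pet, real_pet)
print(float(g_loss_extra))

The discriminator change mentioned in the abstract, shrinking its receptive field, would in practice correspond to a PatchGAN-style critic with fewer downsampling layers, so each output score judges a smaller patch and local brain-tissue detail weighs more heavily; that modification is independent of the edge term sketched here.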