Feiyang Jia, Zhineng Chen, Ziying Song, Lin Liu, Caiyan Jia
{"title":"CWT-Net:使用基于小波的跨尺度变换器实现组织病理学图像的超级分辨率","authors":"Feiyang Jia, Zhineng Chen, Ziying Song, Lin Liu, Caiyan Jia","doi":"arxiv-2409.07092","DOIUrl":null,"url":null,"abstract":"Super-resolution (SR) aims to enhance the quality of low-resolution images\nand has been widely applied in medical imaging. We found that the design\nprinciples of most existing methods are influenced by SR tasks based on\nreal-world images and do not take into account the significance of the\nmulti-level structure in pathological images, even if they can achieve\nrespectable objective metric evaluations. In this work, we delve into two\nsuper-resolution working paradigms and propose a novel network called CWT-Net,\nwhich leverages cross-scale image wavelet transform and Transformer\narchitecture. Our network consists of two branches: one dedicated to learning\nsuper-resolution and the other to high-frequency wavelet features. To generate\nhigh-resolution histopathology images, the Transformer module shares and fuses\nfeatures from both branches at various stages. Notably, we have designed a\nspecialized wavelet reconstruction module to effectively enhance the wavelet\ndomain features and enable the network to operate in different modes, allowing\nfor the introduction of additional relevant information from cross-scale\nimages. Our experimental results demonstrate that our model significantly\noutperforms state-of-the-art methods in both performance and visualization\nevaluations and can substantially boost the accuracy of image diagnostic\nnetworks.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CWT-Net: Super-resolution of Histopathology Images Using a Cross-scale Wavelet-based Transformer\",\"authors\":\"Feiyang Jia, Zhineng Chen, Ziying Song, Lin Liu, Caiyan Jia\",\"doi\":\"arxiv-2409.07092\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Super-resolution (SR) aims to enhance the quality of low-resolution images\\nand has been widely applied in medical imaging. We found that the design\\nprinciples of most existing methods are influenced by SR tasks based on\\nreal-world images and do not take into account the significance of the\\nmulti-level structure in pathological images, even if they can achieve\\nrespectable objective metric evaluations. In this work, we delve into two\\nsuper-resolution working paradigms and propose a novel network called CWT-Net,\\nwhich leverages cross-scale image wavelet transform and Transformer\\narchitecture. Our network consists of two branches: one dedicated to learning\\nsuper-resolution and the other to high-frequency wavelet features. To generate\\nhigh-resolution histopathology images, the Transformer module shares and fuses\\nfeatures from both branches at various stages. Notably, we have designed a\\nspecialized wavelet reconstruction module to effectively enhance the wavelet\\ndomain features and enable the network to operate in different modes, allowing\\nfor the introduction of additional relevant information from cross-scale\\nimages. 
Our experimental results demonstrate that our model significantly\\noutperforms state-of-the-art methods in both performance and visualization\\nevaluations and can substantially boost the accuracy of image diagnostic\\nnetworks.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07092\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CWT-Net: Super-resolution of Histopathology Images Using a Cross-scale Wavelet-based Transformer
Super-resolution (SR) aims to enhance the quality of low-resolution images and has been widely applied in medical imaging. We found that the design principles of most existing methods are influenced by SR tasks based on real-world images and do not take into account the significance of the multi-level structure of pathological images, even though they can achieve respectable scores on objective metrics. In this work, we delve into two super-resolution working paradigms and propose a novel network called CWT-Net, which leverages a cross-scale image wavelet transform and a Transformer architecture. Our network consists of two branches: one dedicated to learning super-resolution and the other to learning high-frequency wavelet features. To generate high-resolution histopathology images, the Transformer module shares and fuses features from both branches at various stages. Notably, we have designed a specialized wavelet reconstruction module to effectively enhance the wavelet-domain features and enable the network to operate in different modes, allowing the introduction of additional relevant information from cross-scale images. Our experimental results demonstrate that our model significantly outperforms state-of-the-art methods in both performance and visualization evaluations and can substantially boost the accuracy of image diagnostic networks.
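The abstract describes the architecture only at a high level. As a rough illustration of the two-branch design it outlines (a super-resolution branch plus a high-frequency wavelet branch whose features a Transformer fuses), the following PyTorch sketch pairs a single-level Haar DWT with a standard Transformer encoder layer. Everything here, including the names TwoBranchWaveletSR and haar_dwt, the channel sizes, the token-concatenation fusion, and the pixel-shuffle upsampler, is an assumption made for illustration; it is not the authors' CWT-Net, which additionally uses cross-scale inputs and a specialized wavelet reconstruction module not modeled below.

```python
# Minimal, hypothetical sketch of the two-branch idea described in the abstract:
# an SR branch over the low-resolution input, a branch over high-frequency
# wavelet subbands, and a Transformer layer that fuses the two before upsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x: torch.Tensor):
    """Single-level 2D Haar DWT (H and W must be even).

    Returns the low-frequency subband LL and the three high-frequency
    subbands LH/HL/HH stacked along the channel dimension.
    """
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, torch.cat([lh, hl, hh], dim=1)


class TwoBranchWaveletSR(nn.Module):
    """Hypothetical two-branch SR model with Transformer-based feature fusion."""

    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.sr_head = nn.Conv2d(3, channels, 3, padding=1)    # SR branch
        self.wave_head = nn.Conv2d(9, channels, 3, padding=1)  # wavelet branch (3 subbands x RGB)
        self.fuse = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.tail = nn.Sequential(                             # pixel-shuffle upsampler
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        sr_feat = self.sr_head(lr)                             # (B, C, H, W)
        _, high = haar_dwt(lr)                                 # (B, 9, H/2, W/2)
        wave_feat = self.wave_head(high)
        wave_feat = F.interpolate(wave_feat, size=sr_feat.shape[-2:], mode="nearest")
        b, c, h, w = sr_feat.shape
        # Flatten both feature maps to token sequences, fuse them jointly,
        # and keep only the SR-branch tokens after fusion.
        tokens = torch.cat(
            [sr_feat.flatten(2).transpose(1, 2),
             wave_feat.flatten(2).transpose(1, 2)], dim=1)     # (B, 2*H*W, C)
        fused = self.fuse(tokens)[:, : h * w]
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.tail(fused)


# Usage: upscale a 32x32 RGB patch by 2x -> output shape (1, 3, 64, 64).
model = TwoBranchWaveletSR(channels=64, scale=2)
out = model(torch.randn(1, 3, 32, 32))
```

In this sketch the wavelet branch operates only on the LH/HL/HH subbands, mirroring the abstract's emphasis on high-frequency wavelet features; the cross-scale aspect, introducing information from images at other magnifications, is deliberately omitted.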