TPWGAN: Wavelet-aware text prior guided super-resolution for scene text images
Shengkai Liu, Jun Miao, Yuanhua Qiao, Hainan Wang
Image and Vision Computing, Volume 162, Article 105707. Published 2025-08-28. DOI: 10.1016/j.imavis.2025.105707
Citations: 0
Abstract
Scene text image super-resolution (STISR) is crucial for improving the readability and recognition accuracy of low-resolution text images. Many previous methods have incorporated text prior information, such as character sequences or recognition features, into super-resolution frameworks. However, existing methods struggle to recover fine-grained text structures, often introducing artifacts or blurry edges due to insufficient high-frequency (HF) modeling and suboptimal use of text priors. Although some recent approaches incorporate wavelet-domain losses into the generator, they typically retain RGB-domain losses during adversarial training, limiting their ability to distinguish authentic text details from artifacts. To address this, we propose TPWGAN, a GAN-based STISR framework that introduces wavelet-domain losses in both the generator and discriminator. The generator is trained with fidelity losses on the HF wavelet subbands to enhance sensitivity to stroke-level variations, while the discriminator processes HF wavelet subbands fused with binary text region masks via a spatial attention mechanism, enabling semantically guided frequency-aware discrimination. Experiments on the TextZoom dataset and several real-world benchmarks show that TPWGAN achieves consistent improvements in visual quality and text recognition, particularly for challenging text instances with distortions or low resolution.
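To make the two wavelet-domain components in the abstract concrete, below is a minimal sketch (not the authors' released code) in PyTorch. It implements a single-level Haar DWT with fixed strided convolutions, an L1 fidelity loss on the three high-frequency subbands (LH, HL, HH) of the kind the generator is described as using, and an illustrative spatial-attention fusion of HF subbands with a binary text-region mask of the kind the discriminator is described as consuming. All function names, the choice of Haar wavelets, L1 as the fidelity criterion, and the additive sigmoid gate are assumptions for illustration only.

```python
# Sketch of wavelet-domain components described in the abstract, assuming
# PyTorch. Haar filters, L1 fidelity, and the gating form are illustrative
# choices, not the paper's confirmed formulation.
import torch
import torch.nn.functional as F

def haar_dwt(x: torch.Tensor):
    """Single-level Haar DWT. x: (B, C, H, W) with even H and W.
    Returns (LL, LH, HL, HH), each of shape (B, C, H/2, W/2)."""
    b, c, h, w = x.shape
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x)  # (4, 1, 2, 2)
    k = k.repeat(c, 1, 1, 1)                              # (4C, 1, 2, 2)
    y = F.conv2d(x, k, stride=2, groups=c)                # (B, 4C, H/2, W/2)
    y = y.view(b, c, 4, h // 2, w // 2)
    return y[:, :, 0], y[:, :, 1], y[:, :, 2], y[:, :, 3]

def hf_wavelet_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """Fidelity loss restricted to the high-frequency subbands, so the
    generator is penalized specifically on stroke-level detail."""
    _, sr_lh, sr_hl, sr_hh = haar_dwt(sr)
    _, hr_lh, hr_hl, hr_hh = haar_dwt(hr)
    return (F.l1_loss(sr_lh, hr_lh)
            + F.l1_loss(sr_hl, hr_hl)
            + F.l1_loss(sr_hh, hr_hh))

def fuse_hf_with_mask(hf: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Illustrative spatial-attention fusion for the discriminator input:
    a binary text mask (B, 1, H/2, W/2) biases a sigmoid gate over HF
    energy, so text regions dominate frequency-aware discrimination."""
    energy = hf.abs().mean(dim=1, keepdim=True)  # (B, 1, H/2, W/2)
    gate = torch.sigmoid(energy + mask)
    return hf * gate
```

In a training loop, `hf_wavelet_loss(sr, hr)` would be added to the generator objective alongside the adversarial term, while the discriminator would receive `fuse_hf_with_mask(torch.cat([lh, hl, hh], dim=1), mask)` rather than RGB pixels; how the actual model weights and combines these terms is not specified in the abstract.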
Journal Overview
The primary aim of Image and Vision Computing is to provide an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding in the discipline by encouraging quantitative comparison and performance evaluation of the proposed methodology. Coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.