{"title":"CCDPlus: Towards Accurate Character to Character Distillation for Text Recognition","authors":"Tongkun Guan;Wei Shen;Xiaokang Yang","doi":"10.1109/TPAMI.2025.3533737","DOIUrl":null,"url":null,"abstract":"Existing scene text recognition methods leverage large-scale labeled synthetic data (LSD) to reduce reliance on labor-intensive annotation tasks and improve recognition capability in real-world scenarios. However, the emergence of a synth-to-real domain gap still limits their efficiency and robustness. Consequently, harvesting the meaningful intrinsic qualities of unlabeled real data (URD) is of great importance, given the prevalence of text-laden images. Toward the target, recent efforts have focused on pre-training on URD through sequence-to-sequence self-supervised learning, followed by fine-tuning on LSD via supervised learning. Nevertheless, they encounter three important issues: coarse representation learning units, inflexible data augmentation, and an emerging real-to-synth domain drift. To overcome these challenges, we propose CCDPlus, an accurate character-to-character distillation method for scene text recognition with a joint supervised and self-supervised learning framework. Specifically, tailored for text images, CCDPlus delineates the fine-grained character structures on URD as representation units by transferring knowledge learned from LSD online. Without requiring extra bounding box or pixel-level annotations, this process allows CCDPlus to enable character-to-character distillation flexibly with versatile data augmentation, which effectively extracts general real-world character-level feature representations. Meanwhile, the unified framework combines self-supervised learning on URD with supervised learning on LSD, effectively solving the domain inconsistency and enhancing the recognition performance. Extensive experiments demonstrate that CCDPlus outperforms previous state-of-the-art (SOTA) supervised, semi-supervised, and self-supervised methods by an average of 1.8%, 0.6%, and 1.1% on standard datasets, respectively. Additionally, it achieves a 6.1% improvement on the more challenging Union14M-L dataset.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 5","pages":"3546-3562"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10887029/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Existing scene text recognition methods leverage large-scale labeled synthetic data (LSD) to reduce reliance on labor-intensive annotation and to improve recognition in real-world scenarios. However, a synth-to-real domain gap still limits their efficiency and robustness. Consequently, harvesting the intrinsic qualities of unlabeled real data (URD) is of great importance, given the prevalence of text-laden images. Toward this goal, recent efforts pre-train on URD through sequence-to-sequence self-supervised learning and then fine-tune on LSD via supervised learning. Nevertheless, these methods face three important issues: coarse representation-learning units, inflexible data augmentation, and an emerging real-to-synth domain drift. To overcome these challenges, we propose CCDPlus, an accurate character-to-character distillation method for scene text recognition built on a joint supervised and self-supervised learning framework. Tailored for text images, CCDPlus delineates fine-grained character structures on URD as representation units by transferring knowledge learned from LSD online. Without requiring extra bounding-box or pixel-level annotations, this allows CCDPlus to perform character-to-character distillation flexibly under versatile data augmentation, effectively extracting general real-world character-level feature representations. Meanwhile, the unified framework combines self-supervised learning on URD with supervised learning on LSD, resolving the domain inconsistency and enhancing recognition performance. Extensive experiments demonstrate that CCDPlus outperforms previous state-of-the-art (SOTA) supervised, semi-supervised, and self-supervised methods by an average of 1.8%, 0.6%, and 1.1% on standard benchmarks, respectively, and achieves a 6.1% improvement on the more challenging Union14M-L dataset.
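To make the joint objective described above concrete, the following is a minimal, hypothetical PyTorch sketch of such a combined loss: a supervised recognition loss on LSD plus a character-to-character distillation loss between two augmented views of URD. The backbone (TinyEncoder), the character-pooling helper (char_pool), the mask tensors, and all shapes are illustrative assumptions, not the paper's implementation; in CCDPlus the character regions are delineated online by transferring knowledge from LSD, whereas here random masks keep the sketch self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy stand-in for the recognition backbone (illustrative only)."""
    def __init__(self, dim=128, num_classes=37):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feat = self.conv(x)                            # (B, C, H, W) feature map
        logits = self.head(feat.mean(dim=(2, 3)))      # pooled toy logits
        return feat, logits

def char_pool(feat, masks):
    """Average features inside each soft character region.
    feat: (B, C, H, W); masks: (B, K, H, W) -> (B, K, C)."""
    m = masks.flatten(2)                               # (B, K, H*W)
    f = feat.flatten(2).transpose(1, 2)                # (B, H*W, C)
    return (m @ f) / m.sum(-1, keepdim=True).clamp_min(1e-6)

def joint_step(student, teacher, lsd_imgs, lsd_labels,
               urd_view1, urd_view2, char_masks, alpha=1.0):
    # Supervised branch on labeled synthetic data (LSD).
    _, logits = student(lsd_imgs)
    sup_loss = F.cross_entropy(logits, lsd_labels)

    # Self-supervised branch on unlabeled real data (URD): pool
    # per-character embeddings from two views and align them.
    with torch.no_grad():                              # teacher updated by EMA elsewhere
        t_feat, _ = teacher(urd_view1)
    s_feat, _ = student(urd_view2)
    t_char = F.normalize(char_pool(t_feat, char_masks), dim=-1)
    s_char = F.normalize(char_pool(s_feat, char_masks), dim=-1)
    distill_loss = (1 - (t_char * s_char).sum(-1)).mean()

    return sup_loss + alpha * distill_loss

# Toy usage with random tensors standing in for real batches.
student, teacher = TinyEncoder(), TinyEncoder()
teacher.load_state_dict(student.state_dict())
lsd_imgs = torch.randn(2, 3, 32, 128)
lsd_labels = torch.randint(0, 37, (2,))
v1, v2 = torch.randn(2, 3, 32, 128), torch.randn(2, 3, 32, 128)
masks = torch.rand(2, 5, 16, 64)   # 5 pseudo-character regions at feature resolution
joint_step(student, teacher, lsd_imgs, lsd_labels, v1, v2, masks).backward()

Note that in the actual method the two URD views would need geometry-consistent character masks so the pooled embeddings correspond to the same characters across augmentations; the single shared mask tensor above is the simplest stand-in for that constraint.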