CCDPlus: Towards Accurate Character to Character Distillation for Text Recognition

Tongkun Guan, Wei Shen, Xiaokang Yang
{"title":"CCDPlus: Towards Accurate Character to Character Distillation for Text Recognition","authors":"Tongkun Guan;Wei Shen;Xiaokang Yang","doi":"10.1109/TPAMI.2025.3533737","DOIUrl":null,"url":null,"abstract":"Existing scene text recognition methods leverage large-scale labeled synthetic data (LSD) to reduce reliance on labor-intensive annotation tasks and improve recognition capability in real-world scenarios. However, the emergence of a synth-to-real domain gap still limits their efficiency and robustness. Consequently, harvesting the meaningful intrinsic qualities of unlabeled real data (URD) is of great importance, given the prevalence of text-laden images. Toward the target, recent efforts have focused on pre-training on URD through sequence-to-sequence self-supervised learning, followed by fine-tuning on LSD via supervised learning. Nevertheless, they encounter three important issues: coarse representation learning units, inflexible data augmentation, and an emerging real-to-synth domain drift. To overcome these challenges, we propose CCDPlus, an accurate character-to-character distillation method for scene text recognition with a joint supervised and self-supervised learning framework. Specifically, tailored for text images, CCDPlus delineates the fine-grained character structures on URD as representation units by transferring knowledge learned from LSD online. Without requiring extra bounding box or pixel-level annotations, this process allows CCDPlus to enable character-to-character distillation flexibly with versatile data augmentation, which effectively extracts general real-world character-level feature representations. Meanwhile, the unified framework combines self-supervised learning on URD with supervised learning on LSD, effectively solving the domain inconsistency and enhancing the recognition performance. Extensive experiments demonstrate that CCDPlus outperforms previous state-of-the-art (SOTA) supervised, semi-supervised, and self-supervised methods by an average of 1.8%, 0.6%, and 1.1% on standard datasets, respectively. Additionally, it achieves a 6.1% improvement on the more challenging Union14M-L dataset.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 5","pages":"3546-3562"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10887029/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Existing scene text recognition methods leverage large-scale labeled synthetic data (LSD) to reduce reliance on labor-intensive annotation and to improve recognition in real-world scenarios. However, the resulting synth-to-real domain gap still limits their efficiency and robustness. Consequently, harvesting the meaningful intrinsic qualities of unlabeled real data (URD) is of great importance, given the prevalence of text-laden images. Toward this goal, recent efforts have focused on pre-training on URD through sequence-to-sequence self-supervised learning, followed by fine-tuning on LSD via supervised learning. Nevertheless, these approaches face three important issues: coarse representation-learning units, inflexible data augmentation, and an emerging real-to-synth domain drift. To overcome these challenges, we propose CCDPlus, an accurate character-to-character distillation method for scene text recognition built on a joint supervised and self-supervised learning framework. Specifically, tailored for text images, CCDPlus delineates fine-grained character structures on URD as representation units by transferring knowledge learned from LSD online. Without requiring extra bounding-box or pixel-level annotations, this design allows CCDPlus to perform character-to-character distillation flexibly under versatile data augmentation, effectively extracting general real-world character-level feature representations. Meanwhile, the unified framework combines self-supervised learning on URD with supervised learning on LSD, effectively resolving the domain inconsistency and enhancing recognition performance. Extensive experiments demonstrate that CCDPlus outperforms previous state-of-the-art (SOTA) supervised, semi-supervised, and self-supervised methods by an average of 1.8%, 0.6%, and 1.1% on standard datasets, respectively. Additionally, it achieves a 6.1% improvement on the more challenging Union14M-L dataset.
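The abstract describes the training setup only at a high level. As a concrete illustration, the sketch below shows one plausible reading of the joint objective: a supervised cross-entropy loss on a labeled synthetic (LSD) batch combined with a character-level self-distillation loss on two augmented views of an unlabeled real (URD) batch, where a frozen teacher provides soft per-character targets. Everything here (`TinyRecognizer`, the frozen teacher copy, the KL term and its weight) is a hypothetical stand-in, not the authors' implementation; in particular, CCDPlus delineates character structures spatially without box or pixel annotations, which this per-position simplification does not capture.

```python
# Minimal sketch (assumptions, not the authors' code) of a joint
# supervised + self-supervised objective in the spirit of the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRecognizer(nn.Module):
    """Stand-in recognizer: maps an image to per-position character logits."""
    def __init__(self, num_classes=37, feat_dim=64, max_len=25):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, max_len)),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)              # (B, C, 1, T)
        f = f.squeeze(2).transpose(1, 2)  # (B, T, C)
        return self.head(f)               # (B, T, num_classes)

student = TinyRecognizer()
teacher = TinyRecognizer()                # in practice, an EMA copy of the student
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

# One hypothetical joint step: a labeled synthetic batch plus two
# augmented views of the same unlabeled real batch (random stand-ins here).
syn_imgs = torch.randn(4, 3, 32, 128)
syn_labels = torch.randint(0, 37, (4, 25))
urd_view_a = torch.randn(4, 3, 32, 128)
urd_view_b = torch.randn(4, 3, 32, 128)

# Supervised cross-entropy on LSD.
sup_logits = student(syn_imgs)
loss_sup = F.cross_entropy(sup_logits.flatten(0, 1), syn_labels.flatten())

# Character-level distillation on URD: the student matches the teacher's
# per-character distributions across the two views.
with torch.no_grad():
    t_prob = F.softmax(teacher(urd_view_a), dim=-1)
s_logp = F.log_softmax(student(urd_view_b), dim=-1)
loss_ssl = F.kl_div(s_logp, t_prob, reduction="batchmean")

loss = loss_sup + 1.0 * loss_ssl          # loss weighting is a guess
opt.zero_grad()
loss.backward()
opt.step()
```

Computing both losses in the same step, rather than pre-training on URD and then fine-tuning on LSD, is what the abstract's "unified framework" suggests as the remedy for real-to-synth domain drift: the model never leaves the real-data distribution during supervised training.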