Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition

Shiming Ge, Kangkai Zhang, Haolin Liu, Yingying Hua, Shengwei Zhao, Xin Jin, Hao Wen

arXiv - CS - Multimedia, published 2024-09-09
DOI: https://doi.org/arxiv-2409.05384
Citations: 0
Abstract
Despite the great success of recent deep models on many image recognition
tasks, directly applying them to low-resolution images may suffer from low
accuracy due to the loss of informative details during resolution degradation.
However, these images are still recognizable to subjects who are familiar with
the corresponding high-resolution ones. Inspired by this, we propose a
teacher-student learning approach that facilitates low-resolution image
recognition via hybrid order relational knowledge distillation. The approach
comprises three streams: the teacher stream is pretrained to recognize
high-resolution images with high accuracy, the student stream is trained to
identify low-resolution images by mimicking the teacher's behaviors, and an
extra assistant stream is introduced as a bridge to help transfer knowledge
from the teacher to the student. To extract sufficient knowledge and reduce the
loss in accuracy, the student is supervised with multiple losses that preserve
similarities in relational structures of various orders. In this way, the
capability of recovering missing details of familiar low-resolution images is
effectively enhanced, leading to better knowledge transfer. Extensive
experiments on metric learning, low-resolution image classification, and
low-resolution face recognition tasks demonstrate the effectiveness of our
approach while using reduced models.
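The multi-order relational supervision described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and loss forms below are assumptions. A first-order loss matches student features to teacher features point-wise, while a second-order loss matches the pairwise cosine-similarity structure within a batch, so relations between samples are preserved even when individual features differ:

```python
import numpy as np

def first_order_loss(t_feat, s_feat):
    # First-order (point-wise) mimicry: student embeddings approximate
    # teacher embeddings sample by sample (mean squared error).
    return float(np.mean((t_feat - s_feat) ** 2))

def second_order_loss(t_feat, s_feat):
    # Second-order (pairwise) relation: the batch-internal cosine-similarity
    # matrix of the student should match that of the teacher.
    def sim(x):
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
        return x @ x.T
    return float(np.mean((sim(t_feat) - sim(s_feat)) ** 2))

# Toy batch: 4 samples with 8-dimensional embeddings.
rng = np.random.default_rng(0)
t = rng.normal(size=(4, 8))            # teacher features (high-resolution input)
s = t + 0.1 * rng.normal(size=(4, 8))  # student features (low-resolution input)

# Hybrid supervision: sum losses of different orders.
total = first_order_loss(t, s) + second_order_loss(t, s)
```

Higher-order terms (e.g. triplet-wise angle relations) would extend the same pattern; the point of combining orders is that each loss constrains a different level of the embedding-space geometry.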