{"title":"Online Knowledge Distillation for Efficient Action Recognition","authors":"Jiazheng Wang, Cunling Bian, Xian Zhou, Fan Lyu, Zhibin Niu, Wei Feng","doi":"10.1109/CCAI55564.2022.9807753","DOIUrl":null,"url":null,"abstract":"Existing skeleton-based action recognition methods require heavy computational resources for accurate predictions. One promising technique to obtain an accurate yet lightweight action recognition network is knowledge distillation (KD), which distills the knowledge from a powerful teacher model to a less-parameterized student model. However, existing distillation works in action recognition require a pre-trained teacher network and a two-stage learning procedure. In this work, we propose a novel Online Knowledge Distillation framework by distilling Action Recognition structure knowledge in a one-stage manner to improve the distillation efficiency, termed OKDAR. Specifically, OKDAR learns a single multi-branch network and acquires the predictions from each one, which is then assembled by a feature mix model as the implicit teacher network to teach each student in reverse. The effectiveness of our approach is demonstrated by extensive experiments on two common benchmarks, i.e., NTU-RGB+D 60 and NTU-RGB+D 120.","PeriodicalId":340195,"journal":{"name":"2022 IEEE 2nd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 2nd International Conference on Computer Communication and Artificial Intelligence (CCAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCAI55564.2022.9807753","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Existing skeleton-based action recognition methods require heavy computational resources to make accurate predictions. One promising technique for obtaining an accurate yet lightweight action recognition network is knowledge distillation (KD), which transfers knowledge from a powerful teacher model to a student model with fewer parameters. However, existing distillation approaches for action recognition require a pre-trained teacher network and a two-stage learning procedure. In this work, we propose a novel Online Knowledge Distillation framework for Action Recognition, termed OKDAR, which distills structural knowledge in a single stage to improve distillation efficiency. Specifically, OKDAR trains a single multi-branch network and collects the predictions from each branch; a feature mix model assembles these predictions into an implicit teacher network, which in turn teaches each student branch. The effectiveness of our approach is demonstrated by extensive experiments on two common benchmarks, i.e., NTU-RGB+D 60 and NTU-RGB+D 120.
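To make the one-stage scheme concrete, the sketch below shows a generic online-distillation training step with a multi-branch student whose outputs are mixed into an implicit teacher that teaches each branch back. This is a minimal illustrative sketch in a PyTorch-style setup, not the paper's implementation: the shared encoder, the learnable mixing weights standing in for the feature mix model, and the loss weights and temperature are all assumptions.

```python
# Minimal sketch of one-stage online distillation: a multi-branch student,
# an assembled "implicit teacher", and a distillation loss from the teacher
# back to every branch. Architectures and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchStudent(nn.Module):
    def __init__(self, in_dim=75, feat_dim=256, num_classes=60, num_branches=3):
        super().__init__()
        # Shared lightweight encoder over (flattened) skeleton input.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Independent branches, each with its own classifier head.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, num_classes))
            for _ in range(num_branches)
        )
        # Learnable mixing weights assembling branch outputs into the implicit
        # teacher (a stand-in for the paper's feature mix model).
        self.mix = nn.Parameter(torch.ones(num_branches))

    def forward(self, x):
        h = self.encoder(x)
        logits = [branch(h) for branch in self.branches]       # per-branch predictions
        weights = torch.softmax(self.mix, dim=0)                # normalized mix weights
        teacher = sum(w * z for w, z in zip(weights, logits))   # implicit teacher logits
        return logits, teacher


def online_kd_loss(logits, teacher, labels, T=4.0, alpha=0.5):
    """Cross-entropy on every branch plus KL distillation from the assembled
    teacher back to each branch; the teacher is detached so gradients flow
    only through the student branches."""
    ce = sum(F.cross_entropy(z, labels) for z in logits)
    soft_teacher = F.softmax(teacher.detach() / T, dim=1)
    kd = sum(F.kl_div(F.log_softmax(z / T, dim=1), soft_teacher,
                      reduction="batchmean") * T * T for z in logits)
    return ce + alpha * kd


# Toy usage: one optimization step on random data (batch of 8 samples).
model = MultiBranchStudent()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 75), torch.randint(0, 60, (8,))
branch_logits, teacher_logits = model(x)
loss = online_kd_loss(branch_logits, teacher_logits, y)
loss.backward()
opt.step()
```

Because the teacher is assembled on the fly from the branches themselves, no pre-trained teacher or separate second training stage is needed, which is the efficiency argument the abstract makes for the one-stage design.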