Adaptive Ensemble Learning With Category-Aware Attention and Local Contrastive Loss
Authors: Hongrui Guo; Tianqi Sun; Hongzhi Liu; Zhonghai Wu
DOI: 10.1109/TCSVT.2024.3479313
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 2, pp. 1224-1236 (JCR Q1, Engineering, Electrical & Electronic)
Publication date: 2024-10-15
URL: https://ieeexplore.ieee.org/document/10717437/
Citations: 0
Abstract
Machine learning techniques can help solve many difficult real-world problems, and a well-constructed ensemble of multiple learners can improve predictive performance. Each base learner typically has different predictive ability on different instances or in different regions of the instance space. However, existing ensemble methods often assume that base learners have the same predictive ability on all instances, ignoring the specificity of individual instances and categories. To address this issue, we propose an adaptive ensemble learning framework with category-aware attention and a local contrastive loss, which adaptively adjusts the ensemble weight of each base classifier according to the characteristics of each instance. Specifically, we design a category-aware attention mechanism to learn the predictive ability of each classifier on each category. Furthermore, we design a local contrastive loss that captures local similarities between instances and further enhances the model's ability to discern fine-grained patterns in the data. Extensive experiments on 20 public datasets demonstrate the effectiveness of the proposed model.
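The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the `ability` matrix stands in for learned category-aware attention parameters (here random placeholders), the attention rule (weighting each classifier by its expected ability on the categories an instance appears to belong to) is one plausible reading of "category-aware attention", and the local contrastive loss is sketched as a standard supervised contrastive loss over an embedding batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def category_aware_weights(probs, ability):
    """Instance-wise ensemble weights.

    probs:   (M, N, C) class probabilities from M base classifiers
             for N instances over C categories.
    ability: (M, C) score of classifier m on category c -- a stand-in
             for the learned category-aware attention parameters.
    Each instance attends to classifiers in proportion to their
    ability on the categories that instance seems to belong to.
    """
    # Expected per-instance ability: weight each classifier's category
    # abilities by its own predicted class distribution -> (M, N).
    scores = np.einsum('mnc,mc->mn', probs, ability)
    return softmax(scores, axis=0)  # normalize over the M classifiers

def adaptive_ensemble(probs, ability):
    """Combine base predictions with per-instance attention weights."""
    w = category_aware_weights(probs, ability)   # (M, N)
    return np.einsum('mn,mnc->nc', w, probs)     # (N, C)

def local_contrastive_loss(emb, labels, tau=0.5):
    """Supervised contrastive loss over a batch: pull same-label
    instances together, push different-label instances apart."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / tau
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    return -(logp[pos]).mean()

# Toy run: M=3 classifiers, N=5 instances, C=4 categories.
M, N, C = 3, 5, 4
probs = softmax(rng.normal(size=(M, N, C)))
ability = rng.normal(size=(M, C))
ens = adaptive_ensemble(probs, ability)          # (5, 4), rows sum to 1
emb = rng.normal(size=(N, 8))
labels = rng.integers(0, C, size=N)              # N > C guarantees a positive pair
loss = local_contrastive_loss(emb, labels)       # finite, non-negative
```

In a trainable version, `ability` (and the embedding network producing `emb`) would be optimized jointly against the classification and contrastive objectives; here they are fixed random values purely to show the shapes and the weighting mechanics.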
About the journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.