Discriminative Additive Scale Loss for Deep Imbalanced Classification and Embedding

Zhao Zhang, Weiming Jiang, Yang Wang, Qiaolin Ye, Mingbo Zhao, Mingliang Xu, Meng Wang
2021 IEEE International Conference on Data Mining (ICDM), December 2021
DOI: 10.1109/ICDM51629.2021.00105
Citations: 2

Abstract

Real-world data in emerging applications may exhibit highly skewed, class-imbalanced distributions; however, how to handle this kind of problem appropriately with deep learning still needs further investigation. In this paper, we propose a novel cross-entropy-based loss function, referred to as Additive Scale Loss (ASL), for deep representation learning and imbalanced classification. To address the class-imbalance problem, ASL increases the loss incurred by misclassified samples, which prevents the accumulated loss of the large number of easily classified samples in an imbalanced dataset from dominating the loss of the misclassified samples. Moreover, in real-world applications, one data source may serve multiple scenarios, such as classification and embedding learning; training two separate models to handle these tasks is costly, especially in deep learning. To tackle this issue, we present a discriminative inter-class separation term, integrate it into ASL, and propose a discriminative ASL (D-ASL), which not only improves classification performance but also yields discriminative representations. The discriminative inter-class separation term is general and can easily be integrated into other loss functions, such as cross-entropy (CE) and focal loss (FL), as a byproduct. Finally, we propose a new deep convolutional neural network equipped with D-ASL and a fully connected (FC) layer, which classifies imbalanced image data and obtains discriminative representations at the same time. Extensive experimental results verify the superior performance of our method.
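The exact formulation of ASL is given in the full paper, not in this abstract. As a purely hypothetical sketch of the stated idea — a cross-entropy-based loss with an additive term that enlarges the loss of misclassified samples so that the many easy samples cannot dominate — a numpy illustration might look as follows (the function name, the `scale` parameter, and the specific additive term are assumptions for illustration, not the authors' definition):

```python
import numpy as np

def additive_scale_ce(probs, labels, scale=1.0):
    """Hypothetical sketch of an additive-scale cross-entropy loss.

    probs:  (N, C) softmax probabilities
    labels: (N,)   integer ground-truth class labels
    scale:  weight of the additive penalty on misclassified samples
    """
    n = probs.shape[0]
    p_true = probs[np.arange(n), labels]           # probability of the true class
    ce = -np.log(p_true)                           # standard cross-entropy
    misclassified = probs.argmax(axis=1) != labels
    # Additive term: increase the loss of misclassified samples so the
    # accumulated loss of easy samples cannot dominate the total loss.
    return ce + scale * misclassified * (1.0 - p_true)

probs = np.array([[0.9, 0.1],    # correctly classified as class 0
                  [0.3, 0.7]])   # misclassified (true class is 0)
labels = np.array([0, 0])
loss = additive_scale_ce(probs, labels)
```

Under this sketch, the correctly classified sample keeps its plain cross-entropy loss, while the misclassified one receives an extra additive penalty on top of it.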