Improvement to Naive Loss Functions with Outlier Identifier
Qizhe Gao, Yifei Jiang, Shengyuan Wang
2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), published 2020-12-11
DOI: 10.1109/TOCS50858.2020.9339696
Citations: 1
Abstract
We present a new loss function, Loss with Outlier Identifier (LOI), a technique that produces a more robust calculation of prediction loss in machine learning while keeping additional training time and procedures to a minimum. LOI is designed to combine the advantages of several well-known algorithms while compensating for their disadvantages through interdisciplinary techniques. We show that by adding two free parameters that require no extra training, LOI is guaranteed to be continuous and differentiable at all points, and can therefore be minimized with the standard gradient descent algorithm. This function can be used to provide a more reliable loss for model training and thus produce a better model overall.
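The abstract does not give LOI's exact functional form, so the following is only an illustrative sketch of the general idea it describes: a loss that is smooth and differentiable everywhere (so plain gradient descent applies) yet down-weights large residuals from outliers, controlled by two free parameters that need no training. The sketch uses a well-known general robust loss with shape parameter `alpha` and scale parameter `c`; the names and the specific formula are assumptions, not the paper's method.

```python
import numpy as np

def smooth_robust_loss(residual, alpha=1.0, c=1.0):
    """Illustrative two-parameter smooth robust loss (NOT the paper's LOI).

    alpha: shape parameter controlling how strongly outliers are
           down-weighted (valid here for alpha != 0 and alpha != 2).
    c:     scale parameter setting the residual size at which the
           loss transitions from quadratic-like to flatter growth.
    The function is continuous and differentiable in the residual,
    so it can be minimized with ordinary gradient descent.
    """
    r2 = (np.asarray(residual) / c) ** 2
    b = abs(alpha - 2.0)
    return (b / alpha) * ((r2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Near zero the loss behaves like squared error; for large residuals
# with alpha=1 it grows only linearly, limiting outlier influence.
small = smooth_robust_loss(0.1, alpha=1.0, c=1.0)
large = smooth_robust_loss(10.0, alpha=1.0, c=1.0)
```

With `alpha=1` this reduces to a pseudo-Huber-style loss, `sqrt(1 + (r/c)^2) - 1`, which matches squared error for small residuals but only grows linearly for large ones, which is the qualitative behavior an outlier-aware loss needs.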