Improvement to Naive Loss Functions with Outlier Identifier

Qizhe Gao, Yifei Jiang, Shengyuan Wang
{"title":"带离群值标识的朴素损失函数的改进","authors":"Qizhe Gao, Yifei Jiang, Shengyuan Wang","doi":"10.1109/TOCS50858.2020.9339696","DOIUrl":null,"url":null,"abstract":"We present a new Loss function, Loss with Outlier Identifier (LOI), a technique that produces a more robust calculation of prediction loss in Machine Learning fields and limits training time and procedures to minimum extend. LOI is designed based on the advantages of several well-known algorithm while compensating their disadvantages through interdisciplinary techniques. We show that by add two free parameters that do not require extra training, LOI is ensured to be continuous and derivable at all points and thus can be minimized through normal Gradient Descent algorithm. This function can be used to provide a more reliable loss for model training and thus produce a better model overall.","PeriodicalId":373862,"journal":{"name":"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Improvement to Naive Loss Functions with Outlier Identifier\",\"authors\":\"Qizhe Gao, Yifei Jiang, Shengyuan Wang\",\"doi\":\"10.1109/TOCS50858.2020.9339696\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a new Loss function, Loss with Outlier Identifier (LOI), a technique that produces a more robust calculation of prediction loss in Machine Learning fields and limits training time and procedures to minimum extend. LOI is designed based on the advantages of several well-known algorithm while compensating their disadvantages through interdisciplinary techniques. We show that by add two free parameters that do not require extra training, LOI is ensured to be continuous and derivable at all points and thus can be minimized through normal Gradient Descent algorithm. This function can be used to provide a more reliable loss for model training and thus produce a better model overall.\",\"PeriodicalId\":373862,\"journal\":{\"name\":\"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TOCS50858.2020.9339696\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TOCS50858.2020.9339696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

We present a new loss function, Loss with Outlier Identifier (LOI), a technique that produces a more robust estimate of prediction loss in machine learning and keeps the additional training time and procedures to a minimum. LOI is designed around the strengths of several well-known loss functions while compensating for their weaknesses through interdisciplinary techniques. We show that by adding two free parameters that require no extra training, LOI is guaranteed to be continuous and differentiable at all points, and can therefore be minimized with standard gradient descent. The function provides a more reliable loss signal for model training and thus yields a better model overall.
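The abstract states the key properties claimed for LOI (two fixed free parameters, continuity and differentiability everywhere, compatibility with plain gradient descent) but does not reproduce the formula itself. The following is only a minimal sketch of a loss with those same properties, using a generalized Charbonnier form as a stand-in; `loi_like_loss`, `c`, and `a` are illustrative names of ours, not the paper's notation.

```python
import numpy as np

def loi_like_loss(r, c=1.0, a=1.0):
    """Smooth robust loss (generalized Charbonnier), a stand-in for LOI.

    Behaves like squared error (r**2 / 2) for |r| << c, but the tails
    grow only like |r|**a, so for a < 2 outliers are down-weighted.
    c and a are fixed hyperparameters: no extra training is required.
    """
    return (c ** 2 / a) * (((r / c) ** 2 + 1) ** (a / 2) - 1)

def loi_like_grad(r, c=1.0, a=1.0):
    """Analytic derivative. Since (r/c)**2 + 1 > 0 for all r, this is
    defined and continuous at every point, as the abstract requires."""
    return r * ((r / c) ** 2 + 1) ** (a / 2 - 1)

# Toy check: estimate a location parameter from data containing
# gross outliers, using nothing but plain gradient descent.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(5.0, 0.5, 95),   # inliers near 5
                       np.full(5, 50.0)])          # gross outliers

theta = 0.0
for _ in range(500):
    grad = np.mean(loi_like_grad(theta - data, c=1.0, a=1.0))
    theta -= 0.5 * grad                            # fixed step size

print(f"robust estimate: {theta:.2f}")       # stays near 5 (the inliers)
print(f"plain mean:      {data.mean():.2f}")  # dragged toward the outliers
```

Because such a surrogate is smooth everywhere, the descent loop needs no special casing at the crossover between the quadratic and robust regimes, which is the practical benefit the abstract attributes to LOI's two added parameters.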