Adaboost

Jan Žižka, F. Dařena, Arnošt Svoboda
DOI: 10.1201/9780429469275-9
Journal: Text Mining with Machine Learning
Published: 2019-10-31 (Journal Article)
Citations: 29

Abstract

Let’s now look at the AdaBoost setup in more detail.

• Loss ℓ = exp (exponential loss). It is also common to use the logistic loss ln(1 + exp(·)), but for simplicity we’ll use the standard choice.

• Examples ((x_i, y_i))_{i=1}^n with x_i ∈ X and y_i ∈ {−1, +1}. The main thing to note is that X is just some opaque set: we are not assuming any vector space structure, and cannot form inner products ⟨w, x⟩.

• Elementary hypotheses H = (h_j)_{j=1}^m, where h_j : X → [−1, +1] for each j. Rather than interacting with examples in X directly, boosting algorithms embed them in a vector space via these functions H. For example, a vector v ∈ R^m is now interpreted as a linear combination of elements of H, and predictions on a new example x ∈ X are computed as x ↦ ∑_{j=1}^m v_j h_j(x).
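The setup above can be sketched in code. The following is a minimal illustrative implementation, not the chapter's own: the examples here are strings (to emphasize that X is an opaque set with no vector structure), the keyword-stump hypotheses and toy data are invented for illustration, and the reweighting step uses the exponential loss as described.

```python
import math

def adaboost(examples, hypotheses, rounds=10):
    """examples: list of (x, y) with y in {-1, +1};
    hypotheses: list of functions h: X -> [-1, +1]."""
    n = len(examples)
    weights = [1.0 / n] * n  # distribution over examples
    ensemble = []            # chosen (alpha_t, h_t) pairs

    for _ in range(rounds):
        # pick the hypothesis with the smallest weighted error
        def weighted_error(h):
            return sum(w for w, (x, y) in zip(weights, examples)
                       if h(x) * y <= 0)
        h = min(hypotheses, key=weighted_error)
        eps = weighted_error(h)
        if eps >= 0.5:       # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - eps) / max(eps, 1e-12))
        ensemble.append((alpha, h))
        # exponential-loss reweighting, then renormalize
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, (x, y) in zip(weights, examples)]
        z = sum(weights)
        weights = [w / z for w in weights]

    # prediction is x |-> sum_t alpha_t h_t(x), thresholded at 0
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict

# toy usage: the learner sees only hypothesis outputs, never a feature vector
data = [("spam offer", 1), ("cheap spam", 1),
        ("hello friend", -1), ("meeting notes", -1)]
stumps = [lambda x, w=w: 1 if w in x else -1
          for w in ["spam", "cheap", "hello"]]
f = adaboost(data, stumps)
```

Note how the opaque set X never needs arithmetic: the only access to an example x is through the values h_j(x), which is exactly the embedding the abstract describes.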