Insights and characterization of l1-norm based sparsity learning of a lexicographically encoded capacity vector for the Choquet integral

Titilope A. Adeyeba, Derek T. Anderson, T. Havens
DOI: 10.1109/FUZZ-IEEE.2015.7337819
Published in: 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
Publication date: 2015-11-30
Citations: 5

Abstract

The aim of this paper is the simultaneous minimization of model error and model complexity for the Choquet integral. The Choquet integral is a generator function, that is, a parametric function that yields a wealth of aggregation operators based on the specifics of the underlying fuzzy measure (a.k.a. a normalized, monotone capacity). It is often the case that we desire to learn an aggregation operator from data, and the goal is to achieve the smallest possible sum of squared error (SSE) between the trained model and a set of labels or function values. However, we also desire to learn the “simplest” solution possible, viz., the model with the fewest inputs. Previous works focused on the use of l1-norm regularization of a lexicographically encoded capacity vector relative to the Choquet integral, describing how to carry out the procedure and demonstrating encouraging results. However, no characterization of, or insight into, the capacity and integral was provided. Herein, we investigate the impact of l1-norm regularization of a lexicographically encoded capacity vector in terms of what capacities and aggregation operators it strives to induce in different scenarios. Ultimately, this provides insight into what the regularization is really doing and when to apply such a method. Synthetic experiments are performed to illustrate the remarks, propositions, and concepts put forth.
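The objects the abstract refers to can be sketched in a few lines of Python: the Choquet integral of an input vector with respect to a capacity, the flattening of that capacity into a lexicographically encoded vector, and the l1-regularized SSE training criterion. This is a minimal illustrative sketch, not the authors' implementation; in particular, the exact subset ordering of the encoded vector (singletons first, then pairs, and so on, with g(N) = 1 fixed and omitted) is an assumption.

```python
import itertools
import numpy as np

def choquet_integral(h, g):
    """Choquet integral of input vector h w.r.t. capacity g.

    h : 1-D array of n inputs.
    g : dict mapping frozenset of indices -> capacity value, with
        g(empty set) = 0, g(full set) = 1, and monotone under inclusion.
    """
    order = np.argsort(h)[::-1]        # indices of h in descending order
    total, prev, A = 0.0, 0.0, set()
    for i in order:
        A.add(int(i))                  # grow the chain of "top-i" subsets
        gA = g[frozenset(A)]
        total += h[i] * (gA - prev)    # weight each input by a capacity difference
        prev = gA
    return total

def lex_capacity_vector(g, n):
    """Flatten g into a lexicographically encoded vector: all singletons,
    then all pairs, ..., up to (n-1)-element subsets (assumed ordering)."""
    u = []
    for k in range(1, n):
        for S in itertools.combinations(range(n), k):
            u.append(g[frozenset(S)])
    return np.array(u)

def sse_l1_objective(g, X, y, lam, n):
    """SSE between Choquet outputs and labels, plus an l1 penalty on the
    encoded capacity vector -- the trade-off the paper studies."""
    preds = np.array([choquet_integral(x, g) for x in X])
    u = lex_capacity_vector(g, n)
    return np.sum((preds - y) ** 2) + lam * np.sum(np.abs(u))
```

Note that extreme capacities recover familiar operators: g = 0 on every proper subset yields the minimum, g = 1 on every nonempty subset yields the maximum, and an additive g yields a weighted mean, which is what makes the Choquet integral a generator of aggregation operators.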