Effective interpretable learning for large-scale categorical data
Yishuo Zhang, Nayyar Zaidi, Jiahui Zhou, Tao Wang, Gang Li
Data Mining and Knowledge Discovery. DOI: 10.1007/s10618-024-01030-1. Published: 2024-05-27.
Abstract
Large-scale categorical datasets are ubiquitous in machine learning, and the success of most deployed machine learning models relies on how effectively the features are engineered. For large-scale datasets, parametric methods are generally used, among which three strategies for feature engineering are common. The first strategy focuses on managing the breadth (or width) of a network, e.g., generalized linear models (a.k.a. wide learning). The second strategy focuses on the depth of a network, e.g., Artificial Neural Networks or ANN (a.k.a. deep learning). The third strategy relies on factorizing the interaction terms, e.g., Factorization Machines (a.k.a. factorized learning). Each of these strategies brings its own advantages and disadvantages. Recently, it has been shown that for categorical data, a combination of these strategies leads to excellent results; for example, WD-Learning and xdeepFM achieve state-of-the-art performance. Following this trend, in this work we propose another learning framework, WBDF-Learning, based on the combination of wide, deep, and factorized learning with a newly introduced component named the Broad Interaction Network (BIN). BIN takes the form of a Bayesian network classifier whose structure is learned a priori and whose parameters are learned by optimizing a joint objective function together with the wide, deep, and factorized parts. We denote the learning of BIN parameters as broad learning. Additionally, the parameters of BIN are constrained to be actual probabilities, which makes it highly interpretable. Furthermore, one can sample or generate data from BIN, which can facilitate learning and provides a framework for knowledge-guided machine learning. We demonstrate that our proposed framework possesses the resilience to maintain excellent classification performance when confronted with biased datasets. We evaluate the efficacy of our framework in terms of classification performance on various benchmark large-scale categorical datasets and compare it against state-of-the-art methods. It is shown that the WBDF framework (a) exhibits superior performance on classification tasks, (b) offers outstanding interpretability, and (c) demonstrates exceptional resilience and effectiveness in scenarios involving skewed distributions.
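The abstract does not include the authors' implementation. As an illustration of the kind of combination WBDF-Learning builds on, the following is a minimal PyTorch sketch in which wide, deep, and factorized parts share one set of feature embeddings and their class logits are summed; the BIN term is only indicated in a comment, since its exact parameterization is not given here. All names, sizes, and the shared-embedding choice are assumptions for illustration, not details from the paper.

import torch
import torch.nn as nn

class WideDeepFM(nn.Module):
    def __init__(self, n_features, n_fields, n_classes, embed_dim=8, hidden_dim=64):
        super().__init__()
        # Wide part: one weight vector per categorical feature value (a generalized linear model).
        self.wide = nn.Embedding(n_features, n_classes)
        # Shared embeddings feeding the deep and factorized parts.
        self.embed = nn.Embedding(n_features, embed_dim)
        # Deep part: a small MLP over the concatenated field embeddings.
        self.deep = nn.Sequential(
            nn.Linear(n_fields * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )
        # Factorized part: map the FM second-order interaction vector to class logits.
        self.fm_out = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        # x: (batch, n_fields) integer index of the active category in each field,
        # assumed here to be globally unique across fields.
        emb = self.embed(x)                                 # (batch, n_fields, embed_dim)
        wide_logits = self.wide(x).sum(dim=1)               # (batch, n_classes)
        deep_logits = self.deep(emb.flatten(start_dim=1))   # (batch, n_classes)
        # Classic FM identity: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2) captures pairwise interactions.
        fm_vec = 0.5 * (emb.sum(dim=1).pow(2) - emb.pow(2).sum(dim=1))
        fm_logits = self.fm_out(fm_vec)
        # WBDF would add a fourth term here: per-class log-probabilities produced by the
        # Broad Interaction Network (a Bayesian network classifier), trained jointly.
        return wide_logits + deep_logits + fm_logits

model = WideDeepFM(n_features=100, n_fields=10, n_classes=2)
x = torch.randint(0, 100, (4, 10))   # a toy batch of categorical indices
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))

In the framework described above, the added BIN term is the interpretable piece: its parameters are constrained to be actual probabilities and are learned under the same joint objective as the other three parts.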
About the journal:
Advances in data gathering, storage, and distribution have created a need for computational tools and techniques to aid in data analysis. Data Mining and Knowledge Discovery in Databases (KDD) is a rapidly growing area of research and application that builds on techniques and theories from many fields, including statistics, databases, pattern recognition and learning, data visualization, uncertainty modelling, data warehousing and OLAP, optimization, and high performance computing.