A New Feature Hashing Approach Based on Term Weight for Dimensional Reduction

Abubakar Ado, N. Samsudin, M. M. Deris
{"title":"A New Feature Hashing Approach Based on Term Weight for Dimensional Reduction","authors":"Abubakar Ado, N. Samsudin, M. M. Deris","doi":"10.1109/ICOTEN52080.2021.9493447","DOIUrl":null,"url":null,"abstract":"Machine learning models usually face a problem when encountered with large scale text dataset. Such kind of dataset produces sparse features of a high-dimensional, which makes it complex or infeasible to process by the learning models. Feature hashing is a dimensional reduction technique commonly used in the pre-processing phase to overcome the aforementioned problem. However, models performance are negatively affected due to the inherited so-called collisions that occur during the hashing process. In this study, we proposed a new Feature hashing approach that hashes similar features to the same bin based on their weight known as \"weight term\" while minimizing certain collisions. The approach effectively reduces the collisions between dissimilar features, thus improving model performance. The experiment results conducted on binary and multi-class classification datasets with a very high number of sparse features show that the proposed approach achieved competitive performance compared with the conventional FH.","PeriodicalId":308802,"journal":{"name":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOTEN52080.2021.9493447","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Machine learning models usually face a problem when dealing with large-scale text datasets. Such datasets produce high-dimensional sparse features, which makes them complex or infeasible for learning models to process. Feature hashing is a dimensionality reduction technique commonly used in the pre-processing phase to overcome this problem. However, model performance is negatively affected by the inherent collisions that occur during the hashing process. In this study, we propose a new feature hashing approach that hashes similar features to the same bin based on their weight, known as the "weight term", while minimizing certain collisions. The approach effectively reduces collisions between dissimilar features, thus improving model performance. Experimental results on binary and multi-class classification datasets with a very high number of sparse features show that the proposed approach achieves competitive performance compared with conventional feature hashing (FH).
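For context, the sketch below illustrates the conventional feature hashing (hashing trick) baseline that the abstract refers to: every token is mapped by a hash function to one of a fixed number of bins, so an unbounded vocabulary is reduced to a fixed-length vector, at the cost of occasional collisions between dissimilar features. The bin count, tokenizer, and use of Python's hashlib are illustrative assumptions; the paper's weight-based ("weight term") bin assignment is not reproduced here because its details are not given in the abstract.

```python
# A minimal sketch of conventional feature hashing (the "hashing trick").
# Assumptions: 1024 bins, whitespace tokenization, hashlib.md5 as the hash.
# This is NOT the authors' proposed weight-based method, only the baseline.
import hashlib

def hash_features(tokens, num_bins=1024):
    """Map a list of tokens to a fixed-length vector via hashing.

    Each token is hashed to one of `num_bins` buckets; a second hash bit
    decides the sign, which keeps collisions unbiased in expectation.
    """
    vec = [0.0] * num_bins
    for tok in tokens:
        digest = hashlib.md5(tok.encode("utf-8")).hexdigest()
        idx = int(digest, 16) % num_bins                        # bucket index
        sign = 1.0 if int(digest[-1], 16) % 2 == 0 else -1.0    # signed trick
        vec[idx] += sign   # dissimilar tokens landing on the same idx collide
    return vec

# Example: two documents reduced from an open vocabulary to 1024 dimensions.
doc1 = "feature hashing reduces high dimensional sparse text features".split()
doc2 = "term weight guides which features share a bin".split()
x1, x2 = hash_features(doc1), hash_features(doc2)
print(len(x1), len(x2))  # 1024 1024
```

The collisions the abstract targets arise at the `vec[idx] += sign` step, where unrelated tokens that hash to the same bucket overwrite each other's signal; the proposed approach instead steers features toward bins according to their term weight so that, when collisions do happen, they occur between similar features.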