A large-scale filter method for feature selection based on Spark

Reine Marie Ndéla Marone, Fodé Camara, S. Ndiaye
{"title":"一种基于spark的大规模特征选择滤波方法","authors":"Reine Marie Ndéla Marone, Fodé Camara, S. Ndiaye","doi":"10.1109/ISCMI.2017.8279590","DOIUrl":null,"url":null,"abstract":"Recently, enormous volumes of data are generated in information systems. That's why data mining area is facing new challenges of transforming this “big data” into useful knowledge. In fact, “big data” relies low density of information (low data quality) and data redundancy, which negatively affect the data mining process. Therefore, when the number of variables describing the data is high, features selection methods are crucial for selecting relevant data. Features selection is the process of identifying the most relevant variables and removing those are redundant and irrelevant. In this paper, we propose a parallel, scalable feature selection algorithm based on mRMR (Max-Relevance and Min-Redundancy) in Spark, an in-memory parallel computing framework specialized in computation for large distributed datasets. Our experiments using real-world data of high dimensionality demonstrated that our proposition scale well and efficiently with large datasets.","PeriodicalId":119111,"journal":{"name":"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A large-scale filter method for feature selection based on spark\",\"authors\":\"Reine Marie Ndéla Marone, Fodé Camara, S. Ndiaye\",\"doi\":\"10.1109/ISCMI.2017.8279590\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, enormous volumes of data are generated in information systems. That's why data mining area is facing new challenges of transforming this “big data” into useful knowledge. In fact, “big data” relies low density of information (low data quality) and data redundancy, which negatively affect the data mining process. Therefore, when the number of variables describing the data is high, features selection methods are crucial for selecting relevant data. Features selection is the process of identifying the most relevant variables and removing those are redundant and irrelevant. In this paper, we propose a parallel, scalable feature selection algorithm based on mRMR (Max-Relevance and Min-Redundancy) in Spark, an in-memory parallel computing framework specialized in computation for large distributed datasets. 
Our experiments using real-world data of high dimensionality demonstrated that our proposition scale well and efficiently with large datasets.\",\"PeriodicalId\":119111,\"journal\":{\"name\":\"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCMI.2017.8279590\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI.2017.8279590","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Recently, enormous volumes of data have been generated in information systems, so the data mining field faces the new challenge of transforming this "big data" into useful knowledge. In fact, "big data" suffers from low information density (low data quality) and data redundancy, both of which negatively affect the data mining process. Therefore, when the number of variables describing the data is high, feature selection methods are crucial for selecting relevant data. Feature selection is the process of identifying the most relevant variables and removing those that are redundant or irrelevant. In this paper, we propose a parallel, scalable feature selection algorithm based on mRMR (Max-Relevance and Min-Redundancy) in Spark, an in-memory parallel computing framework specialized in computation over large distributed datasets. Our experiments on real-world, high-dimensional data demonstrate that our proposal scales well and efficiently to large datasets.
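
The abstract does not detail the implementation, but the mRMR criterion it names is straightforward to parallelize: each feature's relevance to the class, and its redundancy against already-selected features, are independent mutual-information computations. The minimal PySpark sketch below illustrates that idea under stated assumptions (features are already discretized and the data set is small enough to broadcast); names such as mutual_info and select_mrmr are hypothetical and are not taken from the paper.

    # Illustrative mRMR-style filter parallelized with PySpark (not the paper's code).
    # Assumes discrete feature values and a data set small enough to broadcast.
    import math
    from collections import Counter
    from pyspark.sql import SparkSession

    def mutual_info(pairs):
        # Mutual information I(X; Y) from a list of (x, y) observations.
        n = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum(
            (c / n) * math.log((c * n) / (px[x] * py[y]))
            for (x, y), c in joint.items()
        )

    def select_mrmr(rows, n_features, k, sc):
        # rows: list of (label, feature_vector) with discrete feature values.
        # Greedily picks k features maximizing relevance minus mean redundancy.
        data = sc.broadcast(rows)

        def relevance(j):
            return j, mutual_info([(r[1][j], r[0]) for r in data.value])

        # Relevance of each feature to the class label, one Spark task per feature.
        rel = dict(sc.parallelize(range(n_features)).map(relevance).collect())

        selected = [max(rel, key=rel.get)]
        while len(selected) < k:
            chosen = sc.broadcast(list(selected))

            def score(j):
                # Mean redundancy of candidate j against already-selected features.
                red = sum(
                    mutual_info([(r[1][j], r[1][s]) for r in data.value])
                    for s in chosen.value
                ) / len(chosen.value)
                return j, rel[j] - red  # the mRMR criterion

            candidates = [j for j in range(n_features) if j not in selected]
            best_j, _ = max(sc.parallelize(candidates).map(score).collect(),
                            key=lambda t: t[1])
            selected.append(best_j)
        return selected

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("mrmr-sketch").getOrCreate()
        # Tiny synthetic example: 4 discrete features, binary label.
        rows = [(y, [y, 1 - y, i % 2, 0])
                for i, y in enumerate([0, 1, 0, 1, 1, 0, 1, 0])]
        print(select_mrmr(rows, n_features=4, k=2, sc=spark.sparkContext))
        spark.stop()

Each call to parallelize turns the per-feature mutual-information computations into independent Spark tasks; a full-scale version would instead distribute the contingency counting over the rows of the data set rather than broadcasting it to every executor.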