Info_PCA: A Hybrid Technique to Improve Accuracy by Dimensionality Reduction

Surabhi Lingwal
{"title":"Info_PCA: A Hybrid Technique to Improve Accuracy by Dimensionality Reduction","authors":"Surabhi Lingwal","doi":"10.17762/ITII.V9I2.370","DOIUrl":null,"url":null,"abstract":"Principal Component Analysis and Shannon Entropy are some of the most widely used methods for feature extraction and selection. PCA reduces the data to a new subspace with low dimensions by calculating the eigenvectors from eigenvalues out of a covariance matrix and thereby reduces the features to a smaller number capturing the significant information. Shannon entropy is based on probability distribution to calculate the significant information content. Information gain shows the importance of a given attribute in the set of feature vectors. The paper has introduced a hybrid technique Info_PCA which captures the properties of Information gain and PCA that overall reduces the dimensionality and thereby increases the accuracy of the machine learning technique. It also demonstrates the individual implementation of Information gain for feature selection and PCA for dimensionality reduction on two different datasets collected from the UCI machine learning repository. One of the major aims is to determine the important attributes in a given set of training feature vectors to differentiate the classes. The paper has shown a comparative analysis on the classification accuracy obtained by the application of Information Gain, PCA and Info_PCA applied individually on the two different datasets for feature extraction followed by ANN classifier where the results of hybrid technique Info_PCA achieves maximum accuracy and minimum loss in comparison to other feature extraction techniques.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Technology in Industry","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.17762/ITII.V9I2.370","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Principal Component Analysis (PCA) and Shannon entropy are among the most widely used methods for feature extraction and selection. PCA projects the data onto a new low-dimensional subspace spanned by the eigenvectors of the covariance matrix with the largest eigenvalues, reducing the feature set to a smaller number of components that capture the most significant information. Shannon entropy uses the probability distribution of attribute values to quantify information content, and information gain measures the importance of a given attribute within the set of feature vectors. This paper introduces a hybrid technique, Info_PCA, which combines the properties of information gain and PCA to reduce dimensionality and thereby increase the accuracy of the downstream machine learning model. It also demonstrates the individual application of information gain for feature selection and of PCA for dimensionality reduction on two datasets collected from the UCI Machine Learning Repository. A major aim is to determine which attributes in a given set of training feature vectors are most important for differentiating the classes. The paper presents a comparative analysis of the classification accuracy obtained when information gain, PCA, and Info_PCA are each applied to the two datasets for feature extraction, followed by an ANN classifier; the hybrid Info_PCA technique achieves the highest accuracy and the lowest loss compared with the other feature extraction techniques.
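The abstract describes a pipeline in which information gain ranks attributes, PCA reduces the dimensionality of the retained attributes, and an ANN performs the final classification. The sketch below shows one way such a hybrid could be wired together with scikit-learn on a stand-in UCI-style dataset; the ordering of steps, the number of selected attributes, the number of principal components, and the network size are illustrative assumptions, not the configuration published in the paper.

```python
# Illustrative sketch only: the exact Info_PCA pipeline is not reproduced here,
# so the step ordering and all parameter values below are assumptions.
from sklearn.datasets import load_breast_cancer            # stand-in UCI-style dataset
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 1. Rank attributes by information gain (estimated via mutual information) and keep the top k.
# 2. Project the retained attributes onto their leading principal components.
# 3. Feed the reduced representation to an ANN classifier.
info_pca_ann = Pipeline([
    ("scale", StandardScaler()),
    ("info_gain", SelectKBest(mutual_info_classif, k=15)),   # k chosen for illustration
    ("pca", PCA(n_components=5)),                             # component count assumed
    ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])

info_pca_ann.fit(X_train, y_train)
print("test accuracy:", info_pca_ann.score(X_test, y_test))
```

Selecting attributes by information gain before applying PCA keeps the subsequent eigen-decomposition restricted to attributes that are already individually informative about the class label, which is the general idea the abstract attributes to the hybrid approach.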