{"title":"High dimensional datasets using hadoop mahout machine learning algorithms","authors":"A. Srinivasulu, C. SubbaRao, K. Y. Jeevan","doi":"10.1109/ICCCT2.2014.7066727","DOIUrl":null,"url":null,"abstract":"Summary only from given. High dimensional data concerns large-volume, complex, growing data sets with multiple, and autonomous sources. As the Data increasing very drastically day-to-day, it is a major issue to manage and organize the data very efficiently. This emerged the necessity of machine learning techniques. With the Fast development of Networking, data storage and the data collection capacity, Machine learning cluster algorithms are now rapidly expanding in all science and engineering domains such as Pattern recognition, data mining, bioinformatics, and recommendation systems. So as to support the scalable machine learning framework with MapReduce and Hadoop support, we are using Apache Mahout to manage the High Voluminous data. Various Cluster problems such as Cluster Tendency, Partitioning, Cluster Validity, and Cluster Performance can be easily overcome by Mahout clustering algorithms. Mahout manages data in four steps i.e., fetching data, text mining, clustering, classification and collaborative filtering. In the proposed approach, various datatypes such as Numeric, Characters and Image datasets are classified in the several categories i.e., Collaborative Filtering, Clustering, Classification or Frequent Item set Mining. Some of the Pre-clustering techniques are also implemented such as EDBE, ECCE, and Extended Co-VAT. 
A non-Hadoop Clusternamed Taste recommendation Frame work is also implemented.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"15 1","pages":"1-1"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCT2.2014.7066727","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7
Abstract
Summary form only given. High-dimensional data concerns large-volume, complex, growing data sets drawn from multiple, autonomous sources. As data grows drastically from day to day, managing and organizing it efficiently becomes a major challenge, and this has made machine learning techniques a necessity. With the rapid development of networking, data storage, and data-collection capacity, machine learning clustering algorithms are now expanding into science and engineering domains such as pattern recognition, data mining, bioinformatics, and recommender systems. To support a scalable machine learning framework on top of MapReduce and Hadoop, we use Apache Mahout to manage high-volume data. Clustering problems such as cluster tendency, partitioning, cluster validity, and cluster performance can be addressed by Mahout's clustering algorithms. Mahout processes data in several stages: fetching data, text mining, clustering, classification, and collaborative filtering. In the proposed approach, various data types such as numeric, character, and image datasets are classified into several categories: collaborative filtering, clustering, classification, or frequent itemset mining. Several pre-clustering techniques are also implemented, such as EDBE, ECCE, and Extended co-VAT. A non-Hadoop cluster named the Taste recommendation framework is also implemented.
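To make the clustering stage of the pipeline concrete, the following is a minimal sketch of k-means clustering, the kind of algorithm Mahout provides. Note the assumptions: Mahout itself runs such algorithms as Java MapReduce jobs on Hadoop, whereas this is a plain, single-machine Python illustration, and the 2-D points and initial centroids are made up for the example.

```python
# Illustrative sketch only: Apache Mahout runs clustering as Java MapReduce
# jobs on Hadoop; this pure-Python k-means on 2-D points just demonstrates
# the clustering step described in the abstract. Data values are invented.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid
        # (squared Euclidean distance).
        clusters = [[] for _ in centroids]
        for x, y in points:
            dists = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centroids]
            clusters[dists.index(min(dists))].append((x, y))
        # Update step: move each centroid to the mean of its cluster;
        # keep the old centroid if its cluster is empty.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two visually separated groups of points and two rough initial centroids.
points = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 7.5)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
```

In a Mahout/Hadoop deployment, the assignment step would be performed by mappers over partitions of the dataset and the centroid update by reducers, which is what makes the approach scale to high-volume data.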