Implementation of Big-Data Applications Using Map Reduce Framework

K. Sahu, K. Bhatt, Anika Saxena, Kaptan Singh
Journal: International Journal of Engineering and Computer Science
DOI: 10.18535/ijecs/v9i08.4504
Published: 2020-08-12
Citations: 0

Abstract

With the rapid development of cloud computing, it is fundamental to investigate the performance of different Hadoop MapReduce applications and to identify the performance bottlenecks in a cloud cluster that contribute to higher or lower performance. It is also important to study the underlying hardware of cloud cluster servers so that both software and hardware can be optimized to achieve the highest feasible performance. Hadoop is built on MapReduce, one of the most popular programming models for big-data analysis in a parallel computing environment. In this paper, we present a detailed performance analysis, characterization, and evaluation of the Hadoop MapReduce WordCount application. The main aim of this paper is to demonstrate Hadoop MapReduce programming by providing hands-on experience in developing Hadoop-based WordCount and Apriori applications: the word-count problem is solved using the Hadoop MapReduce framework, and the Apriori algorithm is applied to find frequent itemsets using the MapReduce framework.
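The WordCount flow the abstract describes (map each word to a count of 1, shuffle by key, then sum in the reducer) can be illustrated with a minimal in-process Python simulation. This is a sketch of the MapReduce pattern only, not the paper's actual Hadoop job code; the `mapper`, `shuffle`, and `reducer` names are illustrative.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    # Shuffle phase: group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(word, counts):
    # Reduce phase: sum all partial counts for the word.
    return word, sum(counts)

def word_count(lines):
    # Drive the three phases over an iterable of input lines.
    mapped = (pair for line in lines for pair in mapper(line))
    return dict(reducer(word, counts) for word, counts in shuffle(mapped))
```

In a real Hadoop job the same mapper and reducer logic would run distributed across cluster nodes, with the framework performing the shuffle.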
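The abstract also applies Apriori to frequent-itemset mining with MapReduce. The sketch below, an assumption rather than the paper's implementation, runs one candidate-generation and support-counting pass per itemset size; the counting step mirrors a map phase (each transaction emits matching candidates) followed by a reduce phase (filter by minimum support).

```python
from collections import defaultdict

def frequent_itemsets(transactions, min_support):
    """Apriori sketch: grow frequent itemsets level by level until no
    candidate reaches min_support. Hypothetical illustration only."""
    # Level 1: count individual items.
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    frequent = {s for s, c in counts.items() if c >= min_support}
    result = set(frequent)

    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # "Map" phase: each transaction emits (candidate, 1) for candidates it contains.
        counts = defaultdict(int)
        for t in transactions:
            tset = set(t)
            for c in candidates:
                if c <= tset:
                    counts[c] += 1
        # "Reduce" phase: keep only candidates meeting the support threshold.
        frequent = {c for c, n in counts.items() if n >= min_support}
        result |= frequent
        k += 1
    return result
```

Distributing this over MapReduce typically means partitioning the transactions across mappers and letting reducers aggregate per-candidate counts, one job per level.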