{"title":"Managed N-gram language model based on Hadoop framework and a Hbase tables","authors":"Tahani M. Allam, A. Sallam, H. M. Abdullkader","doi":"10.1109/INFOS.2014.7036678","DOIUrl":null,"url":null,"abstract":"N-grams are a building block in natural language processing and information retrieval. An N-gram is a contiguous sequence of tokens, such as words, in text documents. In this work, we study how N-grams can be computed efficiently using MapReduce for distributed data processing and the distributed database HBase. This technique is applied to construct the training and testing processes using the Hadoop MapReduce framework and HBase. We focus on the time cost and storage size of the model and explore different HBase table structures. By constructing and comparing different table structures while training on 100 million words for unigram, bigram, and trigram models, we find that a table based on the half n-gram structure is the more suitable choice for a distributed language model. The results of this work can be applied in cloud computing and other large-scale distributed language processing areas.","PeriodicalId":394058,"journal":{"name":"2014 9th International Conference on Informatics and Systems","volume":"110 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 9th International Conference on Informatics and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOS.2014.7036678","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
N-grams are a building block in natural language processing and information retrieval. An N-gram is a contiguous sequence of tokens, such as words, in text documents. In this work, we study how N-grams can be computed efficiently using MapReduce for distributed data processing and the distributed database HBase. This technique is applied to construct the training and testing processes using the Hadoop MapReduce framework and HBase. We focus on the time cost and storage size of the model and explore different HBase table structures. By constructing and comparing different table structures while training on 100 million words for unigram, bigram, and trigram models, we find that a table based on the half n-gram structure is the more suitable choice for a distributed language model. The results of this work can be applied in cloud computing and other large-scale distributed language processing areas.
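To make the pipeline the abstract describes concrete, here is a minimal single-process sketch of MapReduce-style n-gram counting, plus one plausible reading of the "half n-gram" table layout (row key = the first n-1 words as context, column qualifier = the last word). This is an illustrative assumption about the paper's HBase schema, not the authors' actual code; the function names and layout are hypothetical.

```python
from collections import defaultdict

def ngrams(tokens, n):
    """Yield contiguous n-grams from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def map_phase(line, n):
    """Mapper: emit (n-gram, 1) pairs for one line of text."""
    tokens = line.split()
    return [(" ".join(g), 1) for g in ngrams(tokens, n)]

def reduce_phase(pairs):
    """Reducer: sum the counts for each n-gram key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

def to_half_ngram_rows(counts):
    """Group counts into a half n-gram layout (hypothetical schema):
    one row per (n-1)-word context, one column per final word."""
    rows = defaultdict(dict)
    for key, c in counts.items():
        *context, last = key.split()
        rows[" ".join(context)][last] = c
    return dict(rows)

pairs = map_phase("the cat sat on the mat", 2)
bigram_counts = reduce_phase(pairs)
table = to_half_ngram_rows(bigram_counts)
```

Grouping by context in this way would let a single HBase row serve all continuations of one history, which is one plausible reason the half n-gram structure reduces row count and lookup cost relative to a one-row-per-n-gram table.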