{"title":"Syntax Analyzer & Selectivity Estimation Technique Applied on Wikipedia XML Data Set","authors":"M. Alrammal, G. Hains","doi":"10.1109/DeSE.2013.10","DOIUrl":null,"url":null,"abstract":"Querying large volume of XML data represents a bottleneck for several computationally intensive applications. A fast and accurate selectivity estimation mechanism is of practical importance because selectivity estimation plays a fundamental role in XML query performance. Recently proposed techniques are all based on some forms of structure synopses that could be time consuming to build and not effective for summarizing complex structure relationships. Precisely, current techniques do not handle or process efficiently the large text nodes exist in some data sets as Wikipedia. To overcome this limitation, we extend our previous work [12] that is a stream-based selectivity estimation technique to process efficiently the English data set of Wikipedia. The content of XML text nodes in Wikipedia contains a massive amount of real-life information that our techniques bring closer to practical and efficient everyday use. Extensive experiments on Wikipedia data sets (with different sizes) show that our technique achieves a remarkable accuracy and reasonable performance.","PeriodicalId":248716,"journal":{"name":"2013 Sixth International Conference on Developments in eSystems Engineering","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 Sixth International Conference on Developments in eSystems Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DeSE.2013.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Querying large volumes of XML data represents a bottleneck for several computationally intensive applications. A fast and accurate selectivity estimation mechanism is of practical importance because selectivity estimation plays a fundamental role in XML query performance. Recently proposed techniques are all based on some form of structure synopsis, which can be time-consuming to build and ineffective at summarizing complex structural relationships. In particular, current techniques do not efficiently handle the large text nodes that exist in data sets such as Wikipedia. To overcome this limitation, we extend our previous work [12], a stream-based selectivity estimation technique, to efficiently process the English Wikipedia data set. The XML text nodes of Wikipedia contain a massive amount of real-life information, which our technique brings closer to practical and efficient everyday use. Extensive experiments on Wikipedia data sets of different sizes show that our technique achieves remarkable accuracy and reasonable performance.
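To make the idea of stream-based selectivity estimation concrete, the sketch below shows a minimal, hypothetical single-pass summary built with a SAX parser: it counts root-to-node path occurrences and accumulates text lengths per path, then estimates the selectivity of a simple linear path query from those counts. This is an illustrative assumption about how such a synopsis could be gathered, not the authors' actual technique from [12]; the file name and query path are placeholders.

```python
# Minimal sketch (not the paper's implementation): a SAX-based, single-pass
# path-frequency summary of an XML document, used to estimate the selectivity
# of a simple linear path query.
import xml.sax
from collections import Counter


class PathFrequencyHandler(xml.sax.ContentHandler):
    """Streams the document once, counting occurrences of each root-to-node
    path and accumulating text length per path (to account for large text nodes)."""

    def __init__(self):
        super().__init__()
        self.stack = []                 # element names on the current path
        self.path_counts = Counter()    # path -> number of element occurrences
        self.text_lengths = Counter()   # path -> total characters of text content

    def startElement(self, name, attrs):
        self.stack.append(name)
        self.path_counts["/" + "/".join(self.stack)] += 1

    def characters(self, content):
        if self.stack:
            self.text_lengths["/" + "/".join(self.stack)] += len(content)

    def endElement(self, name):
        self.stack.pop()


def estimate_selectivity(handler, path):
    """Estimated fraction of all element occurrences matched by a linear path."""
    total = sum(handler.path_counts.values())
    return handler.path_counts[path] / total if total else 0.0


if __name__ == "__main__":
    handler = PathFrequencyHandler()
    # "enwiki-sample.xml" is a placeholder file name, not a data set from the paper.
    xml.sax.parse("enwiki-sample.xml", handler)
    print(estimate_selectivity(handler, "/mediawiki/page/revision/text"))
```

Because the summary is built in a single streaming pass, its memory footprint depends on the number of distinct paths rather than on document size, which is what makes this style of estimation attractive for large, text-heavy documents such as Wikipedia dumps.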