Efficient evaluation of preference query processes using twig caches

Wolf-Tilo Balke, SungRan Cho
DOI: 10.1109/RCIS.2009.5089300
Published in: 2009 Third International Conference on Research Challenges in Information Science
Publication date: 2009-04-22
Citations: 4

Abstract

Facing today's information flood needs efficient means for personalization. Therefore XML query processing over large volumes of data needs to make the most out of already spent processing time by caching common (sub)expressions for reuse. This is especially promising for the new paradigm of personalized preference queries. Here a sequence of possible query relaxations is a-priori determined by the users' preferences. Structural and value-based preferences thus define a query process where predicates are progressively relaxed until a suitable set of best possible results has been retrieved. To improve evaluation times for such query processes we argue that caching intermediate join results of structural preference queries is especially effective, because subsequent queries will always be subsumed by some previously cached queries to a certain extent. In this paper we propose a structural join-based caching scheme that allows preference queries to reuse the most beneficial structural join results of all previous queries. We first design a twig cache along with effective strategies for cache management. Moreover, we present a selection algorithm for join orders using cached data and the preference-induced sequence of future queries to select optimal query evaluation plans. Our benchmark experiments show that by using our twig caches preference query processing can be essentially sped up.
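To make the caching idea concrete, here is a minimal, hypothetical sketch (not the paper's actual data structures or algorithm): XML nodes carry (start, end) interval labels, a binary structural join pairs ancestor nodes with the descendant nodes they enclose, and a twig cache memoizes each join result so that later, relaxed queries in the preference-induced sequence can reuse it instead of recomputing the join. All names (`TwigCache`, `structural_join`, `contains`) are illustrative assumptions.

```python
def contains(anc, desc):
    """Interval containment: anc encloses desc in document order."""
    return anc[0] < desc[0] and desc[1] < anc[1]

class TwigCache:
    """Hypothetical cache of binary structural-join results,
    keyed by the (ancestor tag, descendant tag) pair."""

    def __init__(self):
        self._joins = {}   # (anc_tag, desc_tag) -> cached pair list
        self.hits = 0

    def structural_join(self, anc_tag, anc_nodes, desc_tag, desc_nodes):
        key = (anc_tag, desc_tag)
        if key in self._joins:
            self.hits += 1
            return self._joins[key]   # reuse an earlier query's join work
        pairs = [(a, d) for a in anc_nodes for d in desc_nodes
                 if contains(a, d)]
        self._joins[key] = pairs
        return pairs

# Two queries in a relaxation sequence share the //book//title join:
cache = TwigCache()
books  = [(1, 10), (11, 20)]
titles = [(2, 3), (12, 13)]
first  = cache.structural_join("book", books, "title", titles)
second = cache.structural_join("book", books, "title", titles)
assert cache.hits == 1 and first == second
```

The subsumption argument from the abstract maps onto this sketch directly: because each relaxation step only loosens predicates of the original twig, its decomposition into binary structural joins overlaps with the joins already evaluated for stricter queries, so the cache lookup succeeds for those shared join keys.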