The Storage System for a Multimedia Data Manager Kernel

C. R. Valêncio, F. Almeida, J. M. Machado, A. Colombini, L. A. Neves, Rogéria Cristiane Gratão de Souza
DOI: 10.1109/PDCAT.2013.41
Published in: 2013 International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), 16 December 2013

Abstract

One way to boost the performance of a Database Management System (DBMS) is to fetch data in advance of its use, a technique known as prefetching. However, depending on the resource being used (file, disk partition, memory, etc.), prefetching may be done differently or may not be necessary at all, forcing the DBMS to be aware of the underlying storage system. In this paper we propose a Storage System that frees the DBMS from this task by exposing the database through a single interface, regardless of the kind of resource that hosts it. We have implemented a file resource that recognizes and exploits sequential access patterns as they emerge over time, prefetching blocks adjacent to the requested ones. Our approach is speculative because it considers past accesses, but it also takes hints from the upper layers of the DBMS, which must specify the access context in which a read operation takes place. The informed access context is then mapped to one of the available channels in the file resource, which is equipped with a set of internal buffers, one per channel, for managing fetched and prefetched data. Prefetched data are moved into the main cache of the DBMS only if actually requested by the application, which helps to avoid cache pollution. In effect, this introduces a two-level cache hierarchy without any intervention from the DBMS kernel. We ran tests with different buffer settings and compared the results against the OBL (one-block-lookahead) policy; read times were up to two times faster in a highly concurrent environment, without sacrificing performance when the system is not under intensive workloads.
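The scheme the abstract describes can be illustrated with a minimal Python sketch. All names here (`ChannelBuffer`, `FileResource`, `prefetch_depth`) are hypothetical; this is not the authors' implementation, only an assumed reading of the mechanism: each access context maps to a channel with its own buffer, a sequential run of reads on a channel triggers speculative prefetching of adjacent blocks into that buffer, and a prefetched block is promoted to the main cache only when the application actually requests it.

```python
from collections import OrderedDict

class ChannelBuffer:
    """Per-channel buffer for fetched and prefetched blocks (hypothetical sketch)."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        self.blocks[block_id] = data
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the oldest block

    def take(self, block_id):
        return self.blocks.pop(block_id, None)

class FileResource:
    """Toy file resource: detects sequential access per channel, prefetches the
    following blocks into the channel buffer, and promotes a block to the main
    cache only on an actual request (avoiding cache pollution)."""
    def __init__(self, storage, prefetch_depth=2):
        self.storage = storage            # backing store: block_id -> data
        self.prefetch_depth = prefetch_depth
        self.buffers = {}                 # channel -> ChannelBuffer
        self.last_read = {}               # channel -> last requested block_id
        self.main_cache = {}              # the DBMS-facing cache

    def read(self, channel, block_id):
        buf = self.buffers.setdefault(channel, ChannelBuffer())
        data = buf.take(block_id)         # hit in the channel buffer?
        if data is None:
            data = self.storage[block_id] # miss: fetch from the backing store
        self.main_cache[block_id] = data  # promote only what was requested
        # Speculative part: if this access continues a sequential run on the
        # channel, prefetch the next blocks into the channel buffer only.
        if self.last_read.get(channel) == block_id - 1:
            for b in range(block_id + 1, block_id + 1 + self.prefetch_depth):
                if b in self.storage and b not in self.main_cache:
                    buf.put(b, self.storage[b])
        self.last_read[channel] = block_id
        return data
```

For example, two consecutive reads on a `"scan"` channel trigger prefetching of the next two blocks into that channel's buffer, while a read on a different channel leaves the run undisturbed; the prefetched blocks stay out of `main_cache` until requested.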