Reinforcement Learning Based Approaches to Adaptive Context Caching in Distributed Context Management Systems

IF 3.5 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
Shakthi Weerasinghe, A. Zaslavsky, S. W. Loke, A. Medvedev, A. Abken, Alireza Hassani, Guang-Li Huang
{"title":"Reinforcement Learning Based Approaches to Adaptive Context Caching in Distributed Context Management Systems","authors":"Shakthi Weerasinghe, A. Zaslavsky, S. W. Loke, A. Medvedev, A. Abken, Alireza Hassani, Guang-Li Huang","doi":"10.1145/3648571","DOIUrl":null,"url":null,"abstract":"Real-time applications increasingly rely on context information to provide relevant and dependable features. Context queries require large-scale retrieval, inferencing, aggregation, and delivery of context using only limited computing resources, especially in a distributed environment. If this is slow, inconsistent, and too expensive to access context information, the dependability and relevancy of real-time applications may fail to exist. This paper argues, transiency of context (i.e., the limited validity period), variations in the features of context query loads (e.g., the request rate, different Quality of Service (QoS), and Quality of Context (QoC) requirements), and lack of prior knowledge about context to make near real-time adaptations as fundamental challenges that need to be addressed to overcome these shortcomings. Hence, we propose a performance metric driven reinforcement learning based adaptive context caching approach aiming to maximize both cost- and performance-efficiency for middleware-based Context Management Systems (CMSs). Although context-aware caching has been thoroughly investigated in the literature, our approach is novel because existing techniques are not fully applicable to caching context due to (i) the underlying fundamental challenges and (ii) not addressing the limitations hindering dependability and consistency of context. Unlike previously tested modes of CMS operations and traditional data caching techniques, our approach can provide real-time pervasive applications with lower cost, faster, and fresher high quality context information. 
Compared to existing context-aware data caching algorithms, our technique is bespoken for caching context information, which is different from traditional data. We also show that our full-cycle context lifecycle-based approach can maximize both cost- and performance-efficiency while maintaining adequate QoC solely based on real-time performance metrics and our heuristic techniques without depending on any previous knowledge about the context, variations in query features, or quality demands, unlike any previous work. We demonstrate using a real world inspired scenario and a prototype middleware based CMS integrated with our adaptive context caching approach that we have implemented, how realtime applications that are 85% faster can be more relevant and dependable to users, while costing 60.22% less than using existing techniques to access context information. Our model is also at least twice as fast and more flexible to adapt compared to existing benchmarks even under uncertainty and lack of prior knowledge about context, transiency, and variable context query loads.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":3.5000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Internet of Things","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3648571","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Real-time applications increasingly rely on context information to provide relevant and dependable features. Context queries require large-scale retrieval, inferencing, aggregation, and delivery of context using only limited computing resources, especially in a distributed environment. If accessing context information is slow, inconsistent, or too expensive, real-time applications may lose their dependability and relevance. This paper argues that the transiency of context (i.e., its limited validity period), variations in the features of context query loads (e.g., the request rate and differing Quality of Service (QoS) and Quality of Context (QoC) requirements), and the lack of prior knowledge about context needed for near real-time adaptation are fundamental challenges that must be addressed to overcome these shortcomings. Hence, we propose a performance-metric-driven, reinforcement learning based adaptive context caching approach that aims to maximize both cost- and performance-efficiency for middleware-based Context Management Systems (CMSs). Although context-aware caching has been thoroughly investigated in the literature, our approach is novel because existing techniques are not fully applicable to caching context, owing to (i) the fundamental challenges above and (ii) their failure to address the limitations hindering the dependability and consistency of context. Unlike previously tested modes of CMS operation and traditional data caching techniques, our approach can provide real-time pervasive applications with cheaper, faster, and fresher high-quality context information. Compared to existing context-aware data caching algorithms, our technique is bespoke to caching context information, which differs from traditional data.
We also show that our full-cycle, context-lifecycle-based approach can maximize both cost- and performance-efficiency while maintaining adequate QoC, relying solely on real-time performance metrics and our heuristic techniques, without depending on any prior knowledge about the context, variations in query features, or quality demands, unlike any previous work. Using a real-world-inspired scenario and a prototype middleware-based CMS integrated with our adaptive context caching approach, we demonstrate how real-time applications that are 85% faster can be more relevant and dependable to users, while costing 60.22% less than existing techniques for accessing context information. Our model also adapts at least twice as fast and more flexibly than existing benchmarks, even under uncertainty and a lack of prior knowledge about context, transiency, and variable context query loads.
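To make the core idea of a performance-metric-driven RL caching decision concrete, the following is a minimal, hypothetical sketch (not the authors' actual algorithm): an epsilon-greedy agent that learns online, per context item, whether caching pays off, using only observed metrics (latency saved vs. caching cost) and no prior knowledge of the query load.

```python
import random


class ContextCacheAgent:
    """Toy epsilon-greedy agent deciding whether to cache a context item.

    Illustrative simplification of metric-driven RL caching: the action is
    cache (1) or not (0) per item; the reward trades latency saved against
    caching cost, and Q-values are learned online from observations only.
    """

    def __init__(self, epsilon=0.1, alpha=0.5):
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate
        self.q = {}             # item -> [Q(no-cache), Q(cache)]

    def decide(self, item):
        q = self.q.setdefault(item, [0.0, 0.0])
        if random.random() < self.epsilon:
            return random.choice([0, 1])   # explore
        return 0 if q[0] >= q[1] else 1    # exploit the better action

    def update(self, item, action, latency_saved, caching_cost):
        # Reward only accrues (and cost is only paid) if the item was cached.
        reward = latency_saved - caching_cost if action == 1 else 0.0
        q = self.q.setdefault(item, [0.0, 0.0])
        q[action] += self.alpha * (reward - q[action])
```

With a frequently requested ("hot") item whose latency savings exceed its caching cost, the agent's Q-value for caching rises above that for not caching; for a "cold" item where cost dominates, it learns to skip caching. The real approach in the paper operates over richer state (QoC, transiency, query-load features); this sketch only shows the decision-learning loop.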
Source journal CiteScore: 5.20 · Self-citation rate: 3.70%