Gremlin: scheduling interactions in vehicular computing

Kyungmin Lee, J. Flinn, Brian D. Noble
DOI: 10.1145/3132211.3134450
Published in: Proceedings of the Second ACM/IEEE Symposium on Edge Computing
Publication date: 2017-10-12
Citations: 19

Abstract

Vehicular applications must not demand too much of a driver's attention. They often run in the background and initiate interactions with the driver to deliver important information. We argue that the vehicular computing system must schedule interactions by considering their priority, the attention they will demand, and how much attention the driver currently has to spare. Based on these considerations, it should either allow a given interaction or defer it. We describe a prototype called Gremlin that leverages edge computing infrastructure to help schedule interactions initiated by vehicular applications. It continuously performs four tasks: (1) monitoring driving conditions to estimate the driver's available attention, (2) recording interactions for analysis, (3) generating a user-specific quantitative model of the attention required for each distinct interaction, and (4) scheduling new interactions based on the above data. Gremlin performs the third task on edge computing infrastructure. Offload is attractive because the analysis is too computationally demanding to run on vehicular platforms. Since recording size for each interaction can be large, it is preferable to perform the offloaded computation at the edge of the network rather than in the cloud, and thereby conserve wide-area network bandwidth. We evaluate Gremlin by comparing its decisions to those recommended by a vehicular UI expert. Gremlin's decisions agree with the expert's over 90% of the time, much more frequently than the coarse-grained scheduling policies used by current vehicle systems. Further, we find that offloading of analysis to edge platforms reduces use of wide-area networks by an average of 15MB per analyzed interaction.
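The allow-or-defer decision described in the abstract can be sketched as a simple policy: an interaction is allowed if the driver has enough spare attention to absorb its estimated demand, or if it is urgent enough to interrupt regardless; otherwise it is deferred. A minimal sketch follows — the class, function, field names, and the priority threshold are all hypothetical illustrations, not Gremlin's actual interface, and Gremlin's real attention-demand estimates come from a learned, user-specific model rather than fixed numbers.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """A pending interaction initiated by a vehicular application."""
    name: str
    priority: int            # higher = more important (hypothetical scale)
    attention_demand: float  # estimated attention units this interaction requires


def schedule(interaction: Interaction,
             available_attention: float,
             priority_threshold: int = 8) -> str:
    """Return "allow" or "defer" for a single interaction.

    Safety-critical interactions (priority at or above the threshold)
    are always allowed; otherwise the interaction is allowed only when
    its estimated attention demand fits within the driver's current
    spare attention.
    """
    if interaction.priority >= priority_threshold:
        return "allow"  # never defer urgent, safety-relevant alerts
    if interaction.attention_demand <= available_attention:
        return "allow"  # the driver can spare the attention right now
    return "defer"      # revisit once driving conditions improve
```

In the paper's design, `available_attention` would be refreshed continuously from monitored driving conditions, and `attention_demand` would come from the per-user model trained on edge infrastructure.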