Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms

Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, N. Medvidović
{"title":"移动平台上web请求预测模型的可行性评估","authors":"Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, N. Medvidović","doi":"10.1109/MobileSoft52590.2021.00008","DOIUrl":null,"url":null,"abstract":"Prefetching web pages is a well-studied solution to reduce network latency by predicting users’ future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today’s privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, \"large\" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with \"small\" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.","PeriodicalId":257528,"journal":{"name":"2021 IEEE/ACM 8th International Conference on Mobile Software Engineering and Systems (MobileSoft)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms\",\"authors\":\"Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, N. Medvidović\",\"doi\":\"10.1109/MobileSoft52590.2021.00008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Prefetching web pages is a well-studied solution to reduce network latency by predicting users’ future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today’s privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, \\\"large\\\" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with \\\"small\\\" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. 
Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.\",\"PeriodicalId\":257528,\"journal\":{\"name\":\"2021 IEEE/ACM 8th International Conference on Mobile Software Engineering and Systems (MobileSoft)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE/ACM 8th International Conference on Mobile Software Engineering and Systems (MobileSoft)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MobileSoft52590.2021.00008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACM 8th International Conference on Mobile Software Engineering and Systems (MobileSoft)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MobileSoft52590.2021.00008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Prefetching web pages is a well-studied solution to reduce network latency by predicting users’ future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today’s privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, "large" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with "small" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.
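The abstract does not name the specific prediction algorithms evaluated, so the following is only an illustrative sketch of what a "small", per-user next-request predictor trained on a short window of HTTP requests might look like: a first-order most-frequent-successor (Markov) model over URL sequences. The class name, URLs, and data are hypothetical and not taken from the paper.

```python
from collections import defaultdict, Counter


class NextRequestPredictor:
    """Minimal first-order Markov predictor over a single user's URL history.

    Illustrative only: the paper's abstract does not specify a model family;
    this sketch just shows the shape of a "small" per-user model trained on
    a brief burst of requests, whose top predictions could drive prefetching.
    """

    def __init__(self):
        # For each URL, count which URL the user requested next.
        self.transitions = defaultdict(Counter)

    def train(self, requests):
        """Train on a chronologically ordered list of request URLs."""
        for current_url, next_url in zip(requests, requests[1:]):
            self.transitions[current_url][next_url] += 1

    def predict(self, current_url, k=1):
        """Return up to k most likely next URLs as prefetching candidates."""
        ranked = self.transitions[current_url].most_common(k)
        return [url for url, _count in ranked]


# Hypothetical short burst of repetitive mobile usage.
history = [
    "https://example.com/home",
    "https://example.com/feed",
    "https://example.com/item/1",
    "https://example.com/feed",
    "https://example.com/item/2",
    "https://example.com/feed",
]
model = NextRequestPredictor()
model.train(history)
# Prints the two most frequent successors of /feed in this history.
print(model.predict("https://example.com/feed", k=2))
```

Such a model occupies only a handful of counters per user and can be rebuilt from minutes or hours of traffic, which is the kind of "small model" feasibility question the study's framework is designed to assess at scale.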