Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms
arXiv - CS - Software Engineering. Pub Date: 2020-11-10, arXiv:2011.04654
Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, Nenad Medvidovic

Prefetching web pages is a well-studied solution to reduce network latency by predicting users' future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today's privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, "large" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with "small" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.
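The abstract does not specify which prediction algorithms the study evaluates. Purely as an illustration of the idea of a "small" model trained on a brief burst of one user's requests, a minimal sketch could be a first-order successor-frequency model: for each request, predict the request that most often followed it in the training window. The class name and URLs below are hypothetical, not taken from the paper:

```python
from collections import Counter, defaultdict

class SuccessorModel:
    """Minimal first-order web-request predictor: for each request,
    record which request follows it, and predict the most frequent
    observed successor."""

    def __init__(self):
        # Maps each request to a Counter of the requests seen after it.
        self.successors = defaultdict(Counter)

    def train(self, requests):
        # Count request -> next-request transitions in the sequence.
        for current, nxt in zip(requests, requests[1:]):
            self.successors[current][nxt] += 1

    def predict(self, current):
        # Return the most frequent successor, or None if never seen.
        counts = self.successors.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# A short burst of repetitive requests from a single (hypothetical) user.
history = ["/home", "/feed", "/home", "/feed", "/profile", "/home", "/feed"]
model = SuccessorModel()
model.train(history)
print(model.predict("/home"))  # "/feed": it followed "/home" in all 3 observed cases
```

Such a model is tiny (its size grows only with the number of distinct requests in the window), which is what makes training per user on hours rather than months of data plausible; a prefetcher would simply issue the predicted request ahead of time.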

Updated: 2020-11-11