OnDisc: Online Latency-Sensitive Job Dispatching and Scheduling in Heterogeneous Edge-Clouds
IEEE/ACM Transactions on Networking (IF 3.0) Pub Date: 2019-11-28, DOI: 10.1109/tnet.2019.2953806
Zhenhua Han, Haisheng Tan, Xiang-Yang Li, Shaofeng H.-C. Jiang, Yupeng Li, Francis C. M. Lau

In edge-cloud computing, a set of servers (called edge servers) are deployed near the mobile devices to allow these devices to offload their jobs to and subsequently obtain their results from the edge servers with low latency. One fundamental problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of the job and the arrival of the computation result at the device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and at arbitrary times at the mobile devices and then offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time of all the jobs. The weight is set based on how latency-sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is $(1+\varepsilon)$-speed $O(1/\varepsilon)$-competitive for any small constant $\varepsilon > 0$. Moreover, OnDisc can be easily implemented in distributed systems. We also extend OnDisc with a fairness knob to incorporate the trade-off between the average job response time and the degree of fairness among jobs. Extensive simulations based on a real-world data trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.
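
To make the objective concrete, the following minimal Python sketch (not from the paper; the Job fields and names are illustrative) computes the total weighted response time that OnDisc aims to minimize, where a job's response time is the interval between its release at the mobile device and the arrival of the result back at the device, and its weight reflects how latency-sensitive it is.

```python
from dataclasses import dataclass

@dataclass
class Job:
    release_time: float  # when the mobile device releases the job
    finish_time: float   # when the computation result arrives back at the device
    weight: float        # larger weight = more latency-sensitive

def total_weighted_response_time(jobs):
    """Objective from the paper: sum over jobs of weight * (result arrival - release)."""
    return sum(j.weight * (j.finish_time - j.release_time) for j in jobs)

# Example: two jobs, the second more latency-sensitive (higher weight).
jobs = [
    Job(release_time=0.0, finish_time=4.0, weight=1.0),  # response time 4
    Job(release_time=1.0, finish_time=3.0, weight=2.0),  # response time 2
]
print(total_weighted_response_time(jobs))  # 1*4 + 2*2 = 8
```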

Updated: 2020-01-04