Novel fairness-aware co-scheduling for shared cache contention game on chip multiprocessors
Information Sciences (IF 8.1), Pub Date: 2020-04-02, DOI: 10.1016/j.ins.2020.03.078
Zheng Xiao, Liwen Chen, Bangyong Wang, Jiayi Du, Keqin Li

Threads running on different cores of a chip multiprocessor (CMP) can suffer performance degradation due to contention for shared resources such as the shared L2 cache. Studies have shown that thread co-scheduling can effectively reduce this contention. However, in a multi-core system with shared caches, the mutual interference between threads is hard to predict, and as the number of cores grows it becomes infeasible to enumerate all possible co-scheduling schemes. In this paper, a novel fairness-aware thread co-scheduling algorithm based on a non-cooperative game is proposed to reduce L2 cache misses, with the goal of improving overall system performance by scheduling threads fairly. The originality of this work lies in modeling thread scheduling as a non-cooperative game: the execution time of a thread depends on which threads run on the other cores of the same chip, because different thread combinations cause different levels of cache contention. Given this interdependence and competition between threads on the CMP architecture, a non-cooperative game is used to formulate the thread co-scheduling problem, where each thread is treated as a player in the game. An iterative algorithm (IA) is proposed to compute the Nash equilibrium of this game. It is theoretically proved that IA follows a potential game process and converges to a Nash equilibrium within N iterations, where N is the number of threads. The co-scheduling scheme for all threads is obtained from the Nash equilibrium computed by IA. Finally, the convergence and effectiveness of IA are verified by experiments. In addition, cache partitioning is used to further improve the performance of IA. Experimental results show that the total number of cache misses under IA is lower than under the default scheduling algorithm, and that combining IA with cache partitioning reduces cache misses even further.
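The paper itself provides no code, but the following minimal Python sketch illustrates the kind of best-response iteration the abstract describes: each thread (player) in turn moves to the chip that minimizes its own contention cost, and the loop stops when no thread wants to move, i.e., at a Nash equilibrium. The cost model, the pairwise interference matrix, and all names (`contention_cost`, `best_response_schedule`, `cores_per_chip`, etc.) are hypothetical stand-ins, not the paper's actual formulation of IA.

```python
# Hypothetical best-response iteration for a thread co-scheduling game.
# Assumptions (not from the paper): contention between two co-located
# threads is given by a symmetric interference matrix, and a thread's
# cost is the sum of its interference with threads sharing its chip.

import random

def contention_cost(thread, chip, placement, interference):
    """Cost `thread` would incur on `chip`: sum of pairwise interference with co-located threads."""
    return sum(interference[thread][other]
               for other, c in placement.items()
               if c == chip and other != thread)

def best_response_schedule(n_threads, n_chips, cores_per_chip, interference):
    # Start from an arbitrary feasible placement (round-robin over chips).
    placement = {t: t % n_chips for t in range(n_threads)}
    changed = True
    while changed:                      # terminates because the game has an exact potential
        changed = False
        for t in range(n_threads):      # each player plays a best response in turn
            best_chip = placement[t]
            best_cost = contention_cost(t, best_chip, placement, interference)
            for chip in range(n_chips):
                # Respect core capacity on the candidate chip.
                load = sum(1 for o, c in placement.items() if c == chip and o != t)
                if load >= cores_per_chip:
                    continue
                cost = contention_cost(t, chip, placement, interference)
                if cost < best_cost:
                    best_chip, best_cost = chip, cost
            if best_chip != placement[t]:
                placement[t] = best_chip
                changed = True
    return placement

if __name__ == "__main__":
    random.seed(0)
    n = 8
    # Symmetric random interference matrix as a toy stand-in for measured cache contention.
    inter = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            inter[i][j] = inter[j][i] = random.random()
    print(best_response_schedule(n, n_chips=3, cores_per_chip=4, interference=inter))
```

In this toy formulation the sum of pairwise interference over co-located threads is an exact potential: every improving move strictly decreases it, so the dynamics must reach a Nash equilibrium. The paper's IA relies on an analogous potential-game argument to bound convergence to N iterations, where N is the number of threads.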



Updated: 2020-04-02