Quit When You Can: Efficient Evaluation of Ensembles by Optimized Ordering
ACM Journal on Emerging Technologies in Computing Systems (IF 2.2). Pub Date: 2021-07-14. DOI: 10.1145/3451209
Serena Wang, Maya Gupta, Seungil You

Given a classifier ensemble and a dataset, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble is evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU usage without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. Our proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by 1.8–2.7 times even on jointly trained ensembles, which are more difficult to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also performed better in experiments than previous fixed orderings, including gradient boosted trees' ordering.
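The early-exit mechanism the abstract describes can be illustrated with a short sketch (not the authors' code). It assumes a binary ensemble whose final score is the sum of base-model outputs, a fixed evaluation order, and a hypothetical per-stage pair of exit thresholds `(low_j, high_j)` of the kind QWYC optimizes jointly with the ordering:

```python
def qwyc_predict(x, ordered_models, thresholds, decision_boundary=0.0):
    """Evaluate base models in order; quit as soon as the partial score
    clears a stage's exit thresholds.

    ordered_models: base models evaluated in the optimized order.
    thresholds: list of (low_j, high_j) early-exit thresholds per stage.
    Returns (predicted label, number of base models evaluated).
    """
    partial = 0.0
    for j, model in enumerate(ordered_models):
        partial += model(x)
        low, high = thresholds[j]
        if partial <= low:       # confidently negative: quit early
            return 0, j + 1
        if partial >= high:      # confidently positive: quit early
            return 1, j + 1
    # No early exit triggered: fall back to the full-ensemble decision.
    return int(partial >= decision_boundary), len(ordered_models)
```

For example, with three base models that each contribute +1.0 and stage thresholds `[(-2.0, 2.0), (-2.0, 2.0), (-3.0, 3.0)]`, the partial score reaches 2.0 after the second model and evaluation stops there, returning a positive label after only two of the three models. Choosing thresholds that preserve the full ensemble's decisions while maximizing such early exits is the combinatorial problem the paper's greedy algorithm approximates.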
