Does Knowledge Distillation Really Work?
arXiv - CS - Machine Learning. Pub Date: 2021-06-10, DOI: arXiv-2106.05945
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson

Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it does not typically work as it is commonly understood: there often remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to perfectly match the teacher. We identify difficulties in optimization as a key reason for why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher -- and that more closely matching the teacher paradoxically does not always lead to better student generalization.
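For readers unfamiliar with the mechanics, the sketch below shows the standard soft-label distillation objective (in the style of Hinton et al., 2015) together with a top-1 agreement metric of the kind used to measure how closely the student's predictive distribution matches the teacher's. This is a minimal illustration, not the authors' exact experimental setup; `distillation_loss`, `agreement`, `temperature`, and `alpha` are illustrative names and hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Soft-label distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, mixed with
    the usual cross-entropy on the hard labels. Hyperparameters here
    are illustrative, not taken from the paper."""
    # Softened predictive distributions; the T**2 factor keeps gradient
    # magnitudes comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    ce_term = F.cross_entropy(student_logits, targets)
    return alpha * kd_term + (1.0 - alpha) * ce_term

def agreement(student_logits, teacher_logits):
    """Top-1 agreement between student and teacher predictions: a
    fidelity measure distinct from student accuracy on true labels."""
    same = student_logits.argmax(dim=-1) == teacher_logits.argmax(dim=-1)
    return same.float().mean()

# Toy usage with random logits and labels.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels), agreement(student, teacher))
```

The point of the agreement metric is exactly the distinction the abstract draws: a student can generalize well while still disagreeing with the teacher on many inputs, and driving agreement higher does not necessarily improve the student's accuracy on true labels.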

Updated: 2021-06-11