KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization
arXiv - CS - Machine Learning, Pub Date: 2020-11-30, DOI: arXiv:2011.14691
Het Shah, Avishree Khare, Neelay Shah, Khizir Siddiqui

In recent years, the growing size of neural networks has led to a vast amount of research on compression techniques that mitigate the drawbacks of such large sizes. Most of this work falls into three broad families: Knowledge Distillation, Pruning, and Quantization. While research in this domain has been steady, adoption and commercial usage of the proposed techniques have not progressed at the same rate. We present KD-Lib, an open-source PyTorch-based library containing state-of-the-art modular implementations of algorithms from all three families, built on top of multiple abstraction layers. KD-Lib is model- and algorithm-agnostic, with extended support for hyperparameter tuning using Optuna and for logging and monitoring using Tensorboard. The library can be found at https://github.com/SforAiDl/KD_Lib.
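
As a quick illustration of the workflow, below is a minimal sketch of vanilla knowledge distillation with KD-Lib, adapted from the usage example in the project README. The toy models and random data are stand-ins for real ones, and the class and method names shown (VanillaKD, train_teacher, train_student, evaluate) reflect the README at the time of writing and may differ across library versions.

```python
# Minimal KD-Lib usage sketch (adapted from the README of
# https://github.com/SforAiDl/KD_Lib). Toy models and random data
# stand in for real ones; signatures are assumptions based on the
# README and may vary between versions.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from KD_Lib.KD import VanillaKD

# Toy data: 256 samples of 784-dim inputs, 10 classes.
X = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)
test_loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Any pair of nn.Module instances works: a large teacher, a small student.
teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))

teacher_optimizer = optim.SGD(teacher.parameters(), lr=0.01)
student_optimizer = optim.SGD(student.parameters(), lr=0.01)

# The distiller object wraps the whole teacher-to-student workflow.
distiller = VanillaKD(teacher, student, train_loader, test_loader,
                      teacher_optimizer, student_optimizer)
distiller.train_teacher(epochs=1, plot_losses=False, save_model=False)
distiller.train_student(epochs=1, plot_losses=False, save_model=False)
distiller.evaluate(teacher=False)  # report student accuracy
```

The construct-train-distill-evaluate pattern is what the abstract's "model- and algorithm-agnostic" claim refers to: the distiller class can be swapped without changing the surrounding code.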

Updated: 2020-12-01