Scene-adaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search
arXiv - CS - Information Retrieval. Pub Date: 2021-07-15, DOI: arXiv-2107.07173
Lei Chen, Fajie Yuan, Jiaxi Yang, Min Yang, Chengming Li

Sequential recommender systems (SRS) have become a research hotspot due to their power in modeling users' dynamic interests and sequential behavioral patterns. To maximize model expressiveness, a default choice is to apply a larger and deeper network architecture, which, however, often incurs high network latency when generating online recommendations. We therefore argue that compressing heavy recommendation models into middle- or lightweight neural networks is of great importance for practical production systems. To realize this goal, we propose AdaRec, a knowledge distillation (KD) framework that compresses the knowledge of a teacher model into a student model adaptively according to its recommendation scene, using differentiable neural architecture search (NAS). Specifically, we introduce a target-oriented distillation loss to guide the structure search for the student network architecture, together with a cost-sensitive loss that constrains model size, achieving a superior trade-off between recommendation effectiveness and efficiency. In addition, we leverage Earth Mover's Distance (EMD) to realize many-to-many layer mapping during knowledge distillation, which enables each intermediate student layer to learn adaptively from multiple intermediate teacher layers. Extensive experiments on real-world recommendation datasets demonstrate that our model achieves competitive or better accuracy with notable inference speedup compared with strong counterparts, while discovering diverse neural architectures for sequential recommender models under different recommendation scenes.
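To make the objective concrete, the sketch below combines the three loss terms the abstract describes: a soft-target distillation loss, a layer-mapping loss over intermediate representations, and a cost-sensitive penalty on model size. This is not code from the paper: all function names and weights (`alpha`, `lam`) are hypothetical, and the paper's exact EMD-based many-to-many mapping solves an optimal-transport problem, which is approximated here by a simple softmin weighting over pairwise layer distances.

```python
import math


def softmax(xs, T=1.0):
    """Temperature-scaled softmax over a list of scores."""
    m = max(x / T for x in xs)  # subtract max for numerical stability
    exps = [math.exp(x / T - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def kd_loss(teacher_logits, student_logits, T=2.0):
    """Soft-target distillation loss: KL(teacher || student) at temperature T,
    rescaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))


def mse(a, b):
    """Mean squared error between two equal-length hidden-state vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)


def layer_mapping_loss(teacher_states, student_states):
    """Many-to-many layer alignment: every (teacher, student) layer pair gets a
    flow weight via softmin over its distance (a crude stand-in for EMD flows)."""
    cost = [mse(t, s) for t in teacher_states for s in student_states]
    weights = softmax([-c for c in cost])  # cheaper pairs receive more flow
    return sum(w * c for w, c in zip(weights, cost))


def total_loss(teacher_logits, student_logits, teacher_states, student_states,
               param_count, budget, alpha=0.5, lam=1e-8):
    """Hypothetical combined objective: distillation + layer mapping +
    a cost-sensitive penalty that only over-budget student models pay."""
    size_penalty = lam * max(0, param_count - budget)
    return (alpha * kd_loss(teacher_logits, student_logits)
            + (1 - alpha) * layer_mapping_loss(teacher_states, student_states)
            + size_penalty)
```

In a real NAS setting the size penalty would be a differentiable function of the architecture parameters so the search can trade accuracy against latency; here it is a hinge on the raw parameter count purely for illustration.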

Updated: 2021-07-16