Word Embedding Quantization for Personalized Recommendation on Storage-Constrained Edge Devices in a Smart Store
Mobile Networks and Applications (IF 3.8). Pub Date: 2021-01-04. DOI: 10.1007/s11036-020-01710-4
Yao-Chung Fan, Si-Ying Huang, Yung-Yu Chen, Lun-Chi Chen, Fang-Yie Leu

In recent years, word embedding models have received tremendous research attention due to their capability of capturing textual semantics. This study investigates the issue of deploying word embedding models on storage-constrained edge devices for personalized item-of-interest recommendation in a smart store. The challenge lies in that existing embedding models are often too large to fit into a storage-constrained edge device. One naive idea is to keep the word embedding model in secondary storage and process recommendations from there. However, this idea suffers from the burden of additional data traffic. To this end, we propose a framework called Word Embedding Quantization (WEQ), which constructs an index over a given word embedding model and stores the index in primary storage, enabling edge devices to use the word embedding model. One challenge of using the index is that the exact user profile is no longer guaranteed. However, we find that there are opportunities for computing the correct recommendation results even when only an inexact user profile is known. In this paper, we propose a series of techniques that leverage these opportunities for computing candidates, with the goal of minimizing the access cost to secondary storage on edge devices. Experiments are performed to verify the efficiency of the proposed techniques, demonstrating the feasibility of the proposed framework.
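As a rough illustration of the general idea (not the paper's actual WEQ construction), the sketch below quantizes an embedding matrix into a small k-means codebook plus per-word codes that fit in primary memory, ranks items coarsely with the reconstructed vectors, and touches "secondary storage" only to re-rank a short candidate list with exact vectors. All function names, parameters, and the simulated storage layer are hypothetical.

import numpy as np

def build_codebook(embeddings, n_codes=128, n_iter=10, seed=0):
    """Naive k-means codebook over the embedding rows (the in-memory index)."""
    rng = np.random.default_rng(seed)
    codebook = embeddings[rng.choice(len(embeddings), n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # squared distance from every embedding to every code, without a 3-D array
        d2 = ((embeddings ** 2).sum(1, keepdims=True)
              - 2.0 * embeddings @ codebook.T
              + (codebook ** 2).sum(1))
        codes = d2.argmin(axis=1)
        # move each code to the centroid of its assigned vectors
        for c in range(n_codes):
            members = embeddings[codes == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook, codes

def recommend(query, codebook, codes, fetch_exact, top_k=5, n_candidates=50):
    """Coarse ranking with the quantized index, then exact re-ranking of a
    small candidate set fetched from secondary storage."""
    approx = codebook[codes]                 # reconstructed (inexact) vectors
    coarse_scores = approx @ query
    candidates = np.argsort(-coarse_scores)[:n_candidates]
    exact = fetch_exact(candidates)          # the only secondary-storage access
    order = np.argsort(-(exact @ query))[:top_k]
    return candidates[order]

if __name__ == "__main__":
    vocab, dim = 5000, 64
    E = np.random.default_rng(1).standard_normal((vocab, dim)).astype(np.float32)
    codebook, codes = build_codebook(E)
    # secondary storage is simulated here by on-demand row access into the full matrix
    print(recommend(E[42], codebook, codes, fetch_exact=lambda idx: E[idx]))

In this toy setup the in-memory index (128 x 64 codebook plus one code per word) is a small fraction of the full 5000 x 64 embedding matrix, while exact vectors are read only for the candidate shortlist, which mirrors the access-cost trade-off the abstract describes.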



Updated: 2021-01-04