MulTa-HDC: A Multi-Task Learning Framework For Hyperdimensional Computing
IEEE Transactions on Computers (IF 3.6), Pub Date: 2021-04-15, DOI: 10.1109/tc.2021.3073409
Cheng-Yang Chang , Yu-Chuan Chuang , En-Jui Chang , An-Yeu Andy Wu

Brain-inspired hyperdimensional computing (HDC) has shown its effectiveness in low-power, energy-efficient designs for edge computing in the Internet of Things (IoT). Because edge devices have limited resources, multi-task learning (MTL), which accommodates multiple cognitive tasks in a single model, is considered a more efficient way to deploy HDC. However, as the number of tasks increases, MTL-based HDC (MTL-HDC) suffers from a large associative memory (AM) overhead and from performance degradation, which hinders its practical realization on edge devices. This article establishes an MTL framework for HDC that achieves a flexible and efficient trade-off between memory overhead and performance degradation. For the shared-AM approach, we propose Dimension Ranking for Effective AM Sharing (DREAMS), which merges multiple AMs while preserving as much per-task information as possible. For the independent-AM approach, we propose Dimension Ranking for Independent MEmory Retrieval (DRIMER), which extracts and concatenates the informative components of each AM while mitigating interference among tasks. Combining both mechanisms, we propose a hybrid multi-task HDC framework called MulTa-HDC. To adapt an MTL-HDC system to an edge device under a given memory budget, MulTa-HDC uses three parameters to flexibly adjust the proportions of the shared AM and the independent AMs. MulTa-HDC is evaluated extensively on three common benchmarks under two standard task protocols. Simulation results on the ISOLET, UCIHAR, and MNIST datasets show that MulTa-HDC outperforms state-of-the-art compressed HD models, including SparseHD and CompHD, by up to 8.23% in classification accuracy.
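To make the setting concrete, the following is a minimal, illustrative sketch of HDC associative-memory inference and a dimension-ranking-style compression of a per-task AM. The variance-based ranking metric, the array names, and the toy dimensionality are all assumptions for illustration; they are not the paper's actual DREAMS/DRIMER algorithms, which the abstract does not specify in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000  # hypervector dimensionality (practical HDC often uses ~10,000)

# Hypothetical associative memories for two tasks: one class hypervector per row.
am_task_a = rng.standard_normal((4, D))   # task A: 4 classes
am_task_b = rng.standard_normal((6, D))   # task B: 6 classes

def classify(query, am):
    """Standard HDC inference: return the index of the class hypervector
    most similar (by cosine similarity) to the query hypervector."""
    sims = am @ query / (np.linalg.norm(am, axis=1) * np.linalg.norm(query))
    return int(np.argmax(sims))

def rank_dimensions(am, k):
    """Illustrative dimension ranking (an assumed metric, not the paper's):
    score each dimension by its variance across class hypervectors and
    keep the indices of the k most informative dimensions."""
    scores = am.var(axis=0)
    return np.argsort(scores)[-k:]

# Independent-AM style compression (DRIMER-like sketch): each task keeps
# only its own top-k dimensions, and the compressed AMs are stored side
# by side instead of two full D-dimensional AMs.
k = 250
idx_a = rank_dimensions(am_task_a, k)
idx_b = rank_dimensions(am_task_b, k)
compressed = {"A": (am_task_a[:, idx_a], idx_a),
              "B": (am_task_b[:, idx_b], idx_b)}

# Query with a noisy sample of task A, class 2, using only the kept dimensions.
query = am_task_a[2] + 0.1 * rng.standard_normal(D)
am_small, idx = compressed["A"]
print(classify(query[idx], am_small))
```

Under this sketch, each task's AM shrinks by 4x while classification still only needs a similarity search over the retained dimensions; the paper's framework additionally balances such independent AMs against a shared AM via three tuning parameters.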

Updated: 2021-04-15