A Review of Confidentiality Threats Against Embedded Neural Network Models
arXiv - CS - Artificial Intelligence Pub Date : 2021-05-04 , DOI: arxiv-2105.01401
Raphaël Joud, Pierre-Alain Moellic, Rémi Bernhard, Jean-Baptiste Rigaud

The use of Machine Learning (ML) algorithms, and Deep Neural Network (DNN) models in particular, has become a widely accepted standard in many domains, most notably IoT-based systems. DNN models achieve impressive performance in several sensitive fields such as medical diagnosis, smart transport or security threat detection, and represent a valuable piece of Intellectual Property. Over the last few years, a major trend has been the large-scale deployment of models on a wide variety of devices. However, this migration to embedded systems is slowed by the broad spectrum of attacks threatening the integrity, confidentiality and availability of embedded models. In this review, we cover the landscape of attacks targeting the confidentiality of embedded DNN models, which may have a major impact on critical IoT systems, with a particular focus on model extraction and data leakage. We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored means by which a model's confidentiality can be compromised. A model's input data, architecture or parameters can be extracted from power or electromagnetic observations, attesting to a real need for protection from a security point of view.
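To make the side-channel threat concrete, the following is a minimal, self-contained sketch of the idea behind correlation power analysis (CPA) applied to parameter extraction, one of the SCA techniques the review surveys. All specifics here are illustrative assumptions, not the paper's method: we pretend a device leaks the Hamming weight of each multiply-accumulate result (a standard power-leakage model), simulate noisy "traces" for a secret 8-bit quantized weight, and recover that weight by correlating hypothetical leakage for every candidate value against the traces.

```python
import random


def hamming_weight(v: int) -> int:
    """Number of set bits; a common model of a CMOS device's power leakage."""
    return bin(v).count("1")


def pearson(a, b) -> float:
    """Plain Pearson correlation coefficient (stdlib-only)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0


random.seed(0)
SECRET_WEIGHT = 173  # hypothetical 8-bit quantized DNN weight (attacker's target)

# Simulated side-channel acquisition: for known inputs x, the device computes
# w*x and its power consumption leaks HW(w*x) plus Gaussian measurement noise.
inputs = [random.randrange(256) for _ in range(500)]
traces = [hamming_weight(SECRET_WEIGHT * x) + random.gauss(0, 1.0) for x in inputs]

# CPA attack: for every candidate weight, predict the leakage and keep the
# candidate whose predictions correlate best with the measured traces.
recovered = max(
    range(256),
    key=lambda guess: pearson([hamming_weight(guess * x) for x in inputs], traces),
)
print(recovered)
```

With enough traces, only the correct candidate's predicted leakage tracks the measurements, so the weight is recovered without any logical access to the model, which is precisely why the review flags SCA as a confidentiality threat for embedded DNNs.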

Updated: 2021-05-05