Modeling Attack Resistant PUFs Based on Adversarial Attack Against Machine Learning
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 3.7), Pub Date: 2021-02-26, DOI: 10.1109/jetcas.2021.3062413
Sying-Jyan Wang, Yu-Sheng Chen, Katherine Shu-Min Li

The Physical Unclonable Function (PUF) has been proposed for device identification and authentication as well as cryptographic key generation. A strong PUF provides an extremely large number of device-specific challenge-response pairs (CRPs) that can be used for authentication. Unfortunately, the CRP mechanism is vulnerable to modeling attacks, which use machine learning (ML) algorithms to predict PUF responses. Many methods have been developed to strengthen strong PUFs; however, recent studies show that they remain vulnerable to refined ML algorithms backed by greater computing power. In this article, we propose to defend PUFs against modeling attacks from a different perspective. By modifying the CRP mechanism, a PUF can supply contradictory data such that an accurate prediction model of the PUF under attack cannot be built. Three different levels of threat are analyzed, and experimental results show that the proposed method provides an effective countermeasure against ML-based modeling attacks. The proposed protection mechanism is validated on FPGA, and the results show that PUF performance also improves with the help of the protection mechanism. In addition, the proposed method is compatible with hardware-strengthening schemes to provide even better protection for PUFs.
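The core idea of the abstract — a CRP interface that deliberately returns contradictory responses so that no accurate model of the PUF can be trained — can be illustrated with a toy simulation. The sketch below is not the paper's actual mechanism: it assumes the standard linear additive-delay approximation of a 32-stage arbiter PUF, a hypothetical secret-keyed response-flip rule (`poisoned_response`), and a logistic-regression attacker; the flip rate of 0.5 is an extreme value chosen purely for illustration.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
n_stages = 32

# Standard linear additive-delay approximation of an arbiter PUF:
# response = sign(w . phi(challenge)), with phi the parity feature map.
w = rng.normal(size=n_stages + 1)

def features(ch):
    # phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant bias term.
    phi = np.cumprod((1 - 2 * ch)[::-1])[::-1]
    return np.append(phi, 1.0)

def puf_response(ch):
    return int(features(ch) @ w > 0)

def poisoned_response(ch, key=b"device-secret", rate=0.5):
    # Hypothetical defense rule: flip the response for a secret-keyed
    # subset of challenges, so the CRPs seen by an eavesdropper contain
    # contradictory labels that poison any ML training set.
    h = hashlib.sha256(key + ch.tobytes()).digest()[0]
    r = puf_response(ch)
    return 1 - r if h < rate * 256 else r

def train_lr(X, y, epochs=200, step=0.1):
    # Attacker's model: plain logistic regression via batch gradient ascent.
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += step * X.T @ (y - p) / len(y)
    return beta

train_ch = rng.integers(0, 2, size=(3000, n_stages))
X = np.array([features(c) for c in train_ch])
y_clean = np.array([puf_response(c) for c in train_ch])
y_pois = np.array([poisoned_response(c) for c in train_ch])

test_ch = rng.integers(0, 2, size=(1000, n_stages))
Xte = np.array([features(c) for c in test_ch])
yte = (Xte @ w > 0).astype(int)

def attack_accuracy(beta):
    return float(np.mean((Xte @ beta > 0).astype(int) == yte))

acc_clean = attack_accuracy(train_lr(X, y_clean))
acc_pois = attack_accuracy(train_lr(X, y_pois))
print(f"attack accuracy on clean CRPs:    {acc_clean:.2f}")
print(f"attack accuracy on poisoned CRPs: {acc_pois:.2f}")
```

On clean CRPs the attacker's model predicts the true responses well; on the poisoned CRPs its accuracy collapses toward chance. The sketch deliberately elides the other half of the problem the paper addresses: a legitimate verifier must still be able to authenticate the device (e.g., by knowing which responses are flipped), which a real scheme has to support without leaking that information to the attacker.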

Updated: 2021-02-26