Approximation Attacks on Strong PUFs
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IF 2.9). Pub Date: 2020-10-01, DOI: 10.1109/tcad.2019.2962115
Junye Shi, Yang Lu, Jiliang Zhang

The physical unclonable function (PUF) is a promising lightweight hardware security primitive for resource-constrained systems. Based on process variations, it can generate a large number of challenge-response pairs (CRPs) for device authentication. However, attackers can collect CRPs to build a machine learning (ML) model of the PUF with high prediction accuracy. Recently, many ML-resistant PUF structures have been proposed: for example, the multiplexer-based PUF (MPUF) was introduced to resist ML attacks, and its two variants (rMPUF and cMPUF) were further proposed to resist reliability-based and cryptanalysis modeling attacks, respectively. In this article, we propose a general framework for ML attacks on strong PUFs and, based on this framework, present two novel modeling attacks, named logical approximation and global approximation, which use artificial neural networks (ANNs) to characterize the nonlinear structure of the MPUF, rMPUF, cMPUF, and XOR Arbiter PUF. The logical approximation method uses linear functions to approximate logical operations and builds a precise soft model from the combination of logic gates in the PUF. The global approximation method uses the sinc function, which has filtering characteristics, to fit the mapping between challenge and response. The experimental results show that the two proposed approximation attacks can successfully model the $(n, k)$-MPUF ($k = 3, 4$), the $(n, k)$-rMPUF ($k = 2, 3$), the cMPUF ($k = 4, 5$), and the $l$-XOR Arbiter PUF ($l = 3, 4, 5$), with $n = 32, 64$, achieving average accuracies of 96.85%, 95.33%, 94.52%, and 96.26%, respectively.
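To illustrate the kind of ANN-based modeling attack the abstract describes, the minimal sketch below simulates an $l$-XOR Arbiter PUF with the standard additive linear-delay model, collects CRPs, and fits a small feed-forward network on parity-transformed challenges. It is not the authors' implementation: the simulator, the network size, and the CRP counts are illustrative assumptions, and the paper's logical and global approximation methods (gate-level linear relaxations and sinc-based fitting) are not reproduced here.

```python
# Hedged sketch of an ANN modeling attack on a simulated l-XOR Arbiter PUF.
# All parameters (n = 64 stages, l = 3 chains, network size, CRP counts)
# are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def parity_features(challenges):
    """Map 0/1 challenges to the +/-1 parity (phi) vector of the standard
    linear delay model, plus a constant bias column."""
    c = 1 - 2 * challenges                              # {0,1} -> {+1,-1}
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]       # phi_i = prod_{j>=i} c_j
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

class XorArbiterPUF:
    """l independent Arbiter PUF chains whose 1-bit outputs are XORed."""
    def __init__(self, n_stages, l):
        self.weights = rng.normal(size=(l, n_stages + 1))   # delay parameters

    def eval(self, challenges):
        phi = parity_features(challenges)
        bits = (phi @ self.weights.T) > 0                # each chain's response
        return np.bitwise_xor.reduce(bits.astype(int), axis=1)

n, l = 64, 3
puf = XorArbiterPUF(n, l)

# Attacker collects CRPs: challenges are public, responses are observed.
n_train, n_test = 40_000, 10_000
C_train = rng.integers(0, 2, size=(n_train, n))
C_test  = rng.integers(0, 2, size=(n_test, n))
y_train, y_test = puf.eval(C_train), puf.eval(C_test)

# A small ANN on parity features approximates the XOR of the chains.
model = MLPClassifier(hidden_layer_sizes=(64,), activation="tanh",
                      max_iter=500, random_state=0)
model.fit(parity_features(C_train), y_train)
print("prediction accuracy on held-out CRPs:",
      model.score(parity_features(C_test), y_test))
```

For intuition on the logical approximation idea, note that the 2-to-1 multiplexer the MPUF is built from satisfies MUX(s, a, b) = (1 - s)a + sb exactly for Boolean inputs, so replacing each gate with such a linear (or otherwise differentiable) expression yields a soft model of the whole PUF that a gradient-trained ANN can fit; this is presumably the sense in which the abstract speaks of approximating logical operations with linear functions.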

Updated: 2020-10-01