PUF Interfaces and their Security

Marten van Dijk, U. Rührmair
DOI: 10.1109/ISVLSI.2014.90
Published in: 2014 IEEE Computer Society Annual Symposium on VLSI, 2014-07-09
Citations: 3

Abstract

In practice, any integrated physical unclonable function (PUF) must be accessed through a logical interface. The interface may add functionality such as access control, implement a (measurement) noise-reduction layer, and so on. In many PUF applications, the interface in fact hides the PUF itself: users only interact with the PUF's interface and cannot "see" or verify what lies behind it. This immediately raises a security problem: how does the user know that he is interacting with a properly behaving interface wrapped around a proper PUF? The question is not merely theoretical but has strong relevance for PUF application security: it has been shown recently that a badly behaving interface could, for example, log a history of PUF queries that an adversary can later read out through a trapdoor, or output "false" PUF responses that the adversary can predict or influence [RvD-IEEESP13]. This enables attacks on a considerable number of PUF protocols [RvD-IEEESP13]. Since we currently do not know how to authenticate proper interface behavior in practice, the security of many PUF applications implicitly rests on the mere assumption that an adversary cannot modify or enhance a PUF interface in a "bad" way. This is quite a strong hypothesis, and it should be stated more explicitly in the literature. In this paper we address this point explicitly, following and partly expanding earlier work [RvD-IEEESP13]. We add to the picture the need for rigorous security, characterized by a security parameter λ (an adversary has only negl(λ) probability of successfully software-cloning or modeling a PUF). First, this means that we need so-called Strong PUFs whose input/challenge space is larger than poly(λ). To obtain scalable PUF designs (whose chip area or volume does not blow up as λ increases), we need PUFs that consist of an "algebraic" composition of smaller basic building blocks/devices.
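A common example of such a composition (not specific to this paper, but standard in the Strong-PUF literature) is an XOR of several Arbiter PUFs, each modeled by the idealized linear additive-delay model. The sketch below simulates this composition; the weight vectors, stage count, and XOR width are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_arbiter_puf(n_stages, rng):
    """One basic building block: an Arbiter PUF in the standard linear
    additive-delay model. The weight vector models per-stage delay
    differences (an idealization of the physical device)."""
    return rng.normal(size=n_stages + 1)

def eval_arbiter(weights, challenge):
    """Evaluate one block via the usual parity-feature transform:
    phi[i] = prod_{j>=i} (1 - 2*c[j]), with a trailing constant 1."""
    phi = np.append(np.cumprod((1 - 2 * challenge)[::-1])[::-1], 1)
    return int(np.dot(weights, phi) > 0)

def eval_xor_puf(pufs, challenge):
    """'Algebraic' composition: XOR the response bits of k independent
    building blocks to form the overall PUF response."""
    bit = 0
    for w in pufs:
        bit ^= eval_arbiter(w, challenge)
    return bit

n, k = 64, 4                                   # 64 stages, 4-XOR (assumed)
pufs = [make_arbiter_puf(n, rng) for _ in range(k)]
c = rng.integers(0, 2, size=n)                 # a random 64-bit challenge
print(eval_xor_puf(pufs, c))                   # a single response bit
```

The challenge space here has size 2^64 and the chip area grows only linearly in n and k, illustrating why compositions scale; the security of the composition, however, rests on the modeling-hardness assumption discussed next.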
In such compositions, security relies on a less well-established computational hardness assumption: that machine learning and other modeling methods with poly(λ) runtime cannot reliably produce a software clone of the PUF. To provide rigorous security, we argue that the PUF interface needs one-way postprocessing of the PUF responses, so that security can be reduced to the infeasibility of breaking the one-way property of the postprocessing. This leads to a set of interesting problems: how do we add noise reduction to this picture, and how do we minimize or eliminate side-channel leakage of the intermediate values computed during postprocessing?
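The one-way postprocessing idea can be sketched as follows, using SHA-256 as a stand-in for a generic one-way function (the paper does not prescribe a specific function, and the byte encodings here are illustrative). Only the hashed value leaves the device, so learning raw responses from interface outputs would require inverting the hash.

```python
import hashlib

def released_response(challenge: bytes, raw_response: bytes) -> bytes:
    """Interface-side one-way postprocessing: only H(challenge || raw
    response) leaves the device. The raw PUF response never appears at
    the interface, so modeling attacks must work through the hash."""
    return hashlib.sha256(challenge + raw_response).digest()

out = released_response(b"challenge-42", b"\x01\x00\x01")
print(out.hex())
```

Note the tension this sketch exposes: a cryptographic hash amplifies every raw-response bit error into an unrelated output, so any noise reduction (e.g., error correction) must run before the hash inside the interface, and the intermediate corrected values then become exactly the side-channel targets the closing question asks about.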