{"title":"PUF Interfaces and their Security","authors":"Marten van Dijk, U. Rührmair","doi":"10.1109/ISVLSI.2014.90","DOIUrl":null,"url":null,"abstract":"In practice, any integrated physical unclonable function (PUF) must be accessed through a logical interface. The interface may add additional functionalities such as access control, implement a (measurement) noise reduction layer, etc. In many PUF applications, the interface in fact hides the PUF itself: users only interact with the PUF's interface, and cannot \"see\" or verify what is behind the interface. This immediately gives rise to a security problem: how does the user know he is interacting with a properly behaving interface wrapped around a proper PUF? This question is not merely theoretical, but has strong relevance for PUF application security: It has been shown recently that a badly behaving interface could, e.g., log a history of PUF queries which an adversary can read out using some trapdoor, or may output \"false\" PUF responses that the adversary can predict or influence RvD-IEEESP13. This allows attacks on a considerable number of PUF protocols RvD-IEEESP13. Since we currently do not know how to authenticate proper interface behavior in practice, the security of many PUF applications implicitly rests on the mere assumption that an adversary cannot modify or enhance a PUF interface in a \"bad\" way. This is quite a strong hypothesis, which should be stated more explicitly in the literature. In this paper, we explicitly address this point, following and partly expanding earlier works RvD-IEEESP13. We add to the picture the need for rigorous security which is characterized by some security parameter λ (an adversary has \"negl(λ) probability to successfully software clone/model a PUF\"). First, this means that we need so-called Strong PUFs with a larger than poly(λ) input/challenge space. In order to have scalable PUF designs (which do not blow up in chip surface or volume for increasing λ), we need PUF designs which constitute of a \"algebraic\" composition of smaller basic building blocks/devices. In such compositions the security relies on a less well-established computational hardness assumption which states that machine learning and other modeling methods with poly(λ) runtime cannot reliably produce a software clone of the PUF. To provide rigorous security we argue that the PUF interface needs a one-way postprocessing of PUF responses such that the security can be reduced to the infeasibility of breaking the one-way property of the postprocessing. This leads to a set of interesting problems: how do we add noise reduction into this picture and how do we minimize or eliminate side channel leakage of computed intermediate values in the post processing?","PeriodicalId":405755,"journal":{"name":"2014 IEEE Computer Society Annual Symposium on VLSI","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Computer Society Annual Symposium on VLSI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2014.90","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In practice, any integrated physical unclonable function (PUF) must be accessed through a logical interface. This interface may add functionality such as access control, implement a (measurement) noise-reduction layer, and so on. In many PUF applications the interface in fact hides the PUF itself: users only interact with the PUF's interface and cannot "see" or verify what lies behind it. This immediately raises a security problem: how does the user know that he is interacting with a properly behaving interface wrapped around a proper PUF? The question is not merely theoretical, but has strong relevance for PUF application security: it has been shown recently that a badly behaving interface could, for example, log a history of PUF queries which an adversary can later read out through a trapdoor, or could output "false" PUF responses that the adversary can predict or influence [RvD-IEEESP13]. This enables attacks on a considerable number of PUF protocols [RvD-IEEESP13]. Since we currently do not know how to authenticate proper interface behavior in practice, the security of many PUF applications implicitly rests on the mere assumption that an adversary cannot modify or enhance a PUF interface in a "bad" way. This is quite a strong hypothesis, and it should be stated more explicitly in the literature. In this paper, we explicitly address this point, following and partly expanding earlier work [RvD-IEEESP13]. We add to the picture the need for rigorous security, characterized by a security parameter λ: an adversary should have only negl(λ) probability of successfully software-cloning or modeling a PUF. First, this means we need so-called Strong PUFs whose input/challenge space is larger than poly(λ). To obtain scalable PUF designs (whose chip area or volume does not blow up as λ increases), we need PUF designs that consist of an "algebraic" composition of smaller basic building blocks/devices. In such compositions, security relies on a less well-established computational hardness assumption, which states that machine learning and other modeling methods with poly(λ) runtime cannot reliably produce a software clone of the PUF. To provide rigorous security, we argue that the PUF interface needs a one-way postprocessing of PUF responses, so that security can be reduced to the infeasibility of breaking the one-way property of the postprocessing. This leads to a set of interesting problems: how do we add noise reduction into this picture, and how do we minimize or eliminate side-channel leakage of computed intermediate values in the postprocessing?
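To make the scalability point concrete, the sketch below gives a toy additive-delay model in the spirit of an Arbiter PUF. This is an illustrative assumption on our part; the abstract does not commit to a particular construction. It shows how n small delay stages compose "algebraically" so that hardware grows linearly in n while the challenge space grows as 2^n; the same linear structure is also what makes such compositions a target for poly(λ)-time machine-learning modeling, which is exactly the hardness assumption at stake.

```python
import random

def make_puf(n_stages: int, seed: int = 0):
    """Fix per-stage delay differences at 'manufacture' time (simulated)."""
    rng = random.Random(seed)
    weights = [rng.gauss(0.0, 1.0) for _ in range(n_stages)]

    def respond(challenge: list[int]) -> int:
        # Standard linear additive-delay model: response = sign(sum_i w_i * phi_i),
        # with parity features phi_i = prod_{j >= i} (-1)**c_j.
        feats = []
        phi = 1
        for c in reversed(challenge):
            phi *= -1 if c else 1
            feats.append(phi)
        feats.reverse()
        delta = sum(w * f for w, f in zip(weights, feats))
        return int(delta > 0)

    return respond

# 64 physical stages already yield 2**64 challenges: exponential challenge
# space from linear hardware, as required for a Strong PUF.
puf = make_puf(64)
response_bit = puf([random.getrandbits(1) for _ in range(64)])
```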
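The proposed remedy, one-way postprocessing inside the interface, can be sketched as follows. This is a minimal illustration under stated assumptions: `puf_raw_response` and the simulated device secret are hypothetical stand-ins for the physical device, not part of the paper. The point is that the interface never releases raw PUF responses, only a one-way hash of them, so predicting interface outputs reduces to either modeling the PUF or inverting the hash.

```python
import hashlib
import hmac

DEVICE_SECRET = b"simulated-manufacturing-variation"  # hypothetical stand-in

def puf_raw_response(challenge: bytes) -> bytes:
    # Stand-in for the physical PUF: a deterministic per-device mapping.
    # A real PUF derives this from manufacturing variation, not a stored key.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def interface_response(challenge: bytes) -> bytes:
    # The interface releases only H(raw response). Security now reduces to
    # the one-way property of SHA-256 rather than solely to the ML-hardness
    # assumption on the underlying PUF.
    raw = puf_raw_response(challenge)
    return hashlib.sha256(raw).digest()

assert interface_response(b"c1") == interface_response(b"c1")  # reproducible
```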
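The closing open problem, fitting noise reduction into this picture, is where helper-data schemes would enter. The toy code-offset secure sketch below, built from a repetition code, is an assumed construction for illustration only; the paper leaves the actual design open. It shows both the mechanics and the danger: the majority-vote decoding computes intermediate values (the recovered message bits) that a side channel could leak before the one-way hash is ever applied.

```python
import hashlib
import secrets

def encode(msg_bits, n_rep):
    # Repetition code: each message bit is repeated n_rep times.
    return [b for b in msg_bits for _ in range(n_rep)]

def decode(noisy_code, n_rep):
    # Majority vote per block; this intermediate value must not leak.
    return [int(2 * sum(noisy_code[i:i + n_rep]) > n_rep)
            for i in range(0, len(noisy_code), n_rep)]

def enroll(response_bits, n_rep=5):
    # Code-offset helper data: helper = response XOR codeword(random msg).
    msg = [secrets.randbits(1) for _ in range(len(response_bits) // n_rep)]
    helper = [r ^ c for r, c in zip(response_bits, encode(msg, n_rep))]
    key = hashlib.sha256(bytes(msg)).digest()  # one-way postprocessing
    return helper, key

def reproduce(noisy_bits, helper, n_rep=5):
    # XOR out the helper, decode away the noise, then hash as in enroll().
    msg = decode([r ^ h for r, h in zip(noisy_bits, helper)], n_rep)
    return hashlib.sha256(bytes(msg)).digest()
```

With n_rep = 5 this corrects up to two bit flips per block; a real design would use a stronger code (e.g., BCH) and would additionally have to mask or eliminate the intermediate decoded values, which is precisely the side-channel question the abstract raises.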