{"title":"Deep Bribe: Predicting the Rise of Bribery in Blockchain Mining with Deep RL","authors":"R. Zur, Danielle Dori, Sharon Vardi, Ittay Eyal, Aviv Tamar","doi":"10.1109/SPW59333.2023.00008","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00008","url":null,"abstract":"Blockchain security relies on incentives to ensure participants, called miners, cooperate and behave as the protocol dictates. Such protocols have a security threshold – a miner whose relative computational power is larger than the threshold can deviate to improve her revenue. Moreover, blockchain participants can behave in a petty compliant manner: usually follow the protocol, but deviate to increase revenue when deviation cannot be distinguished externally from the prescribed behavior. The effect of petty compliant miners on the security threshold of blockchains is not well understood. Due to the complexity of the analysis, it has remained an open question since Carlsten et al. identified it in 2016. In this work, we use deep Reinforcement Learning (RL) to analyze how a rational miner performs selfish mining by deviating from the protocol to maximize revenue when petty compliant miners are present. We find that a selfish miner can exploit petty compliant miners to increase her revenue by bribing them. Our method reveals that the security threshold is lower when petty compliant miners are present. In particular, with parameters estimated from the Bitcoin blockchain, we find the threshold drops from the known value of 25% to only 21% (or 19%) when 50% (or 75%) of the other miners are petty compliant. 
Hence, our deep RL analysis puts the open question to rest; the presence of petty compliant miners exacerbates a blockchain's vulnerability to selfish mining and is a major security threat.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127810765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The ghost is the machine: Weird machines in transient execution","authors":"Ping-Lun Wang, Fraser Brown, R. Wahby","doi":"10.1109/SPW59333.2023.00029","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00029","url":null,"abstract":"Microarchitectural attacks typically exploit some form of transient execution to steal sensitive data. More recently, though, a new class of attacks has used transient execution to (covertly) compute: Wampler et al. use Spectre primitives to obfuscate control flow, and Evtyushkin et al. construct “weird” logic gates that use Intel's TSX to compute entirely using microarchitectural side effects (i.e., in a cache side channel). This paper generalizes weird gate constructions beyond TSX and shows how to build such gates using any transient execution primitive. We build logic gates using exceptions, the branch predictor, and the branch target buffer, and we design a NOT gate that appears to perform roughly one order of magnitude better than the prior state of the art. (The data in the original paper reports an XOR execution speed and XOR executions per second that do not agree with one another. Taking the execution speed at face value, our construction is two orders of magnitude faster; if we instead calculate the execution speed from their reported executions per second, our approach yields only an order of magnitude improvement.) These constructions work on AMD, Intel, and ARM machines with ≈95-99% accuracy; a million AND gate executions take from half a second (when built with TSX) to four and a half seconds (when built with the branch target buffer). 
Our results indicate that weird gates are more generally applicable than previously known and may become more widely used, e.g., for malware obfuscation.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133864266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Little Seal Bug: Optical Sound Recovery from Lightweight Reflective Objects","authors":"Ben Nassi, R. Swissa, Y. Elovici, B. Zadov","doi":"10.1109/SPW59333.2023.00032","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00032","url":null,"abstract":"In recent years, various studies have demonstrated methods to recover sound/speech with an optical sensor. Fortunately, each of these methods possesses drawbacks limiting its utility (e.g., it is limited to recovering sounds at high volumes, utilizes a sensor indicating its use, relies on objects not commonly found in offices, requires preliminary data collection, etc.). One unaddressed method of recovering speech optically is observing lightweight reflective objects (e.g., iced coffee can, smartphone stand, desk ornament) with a photodiode, an optical sensor used to convert photons to electricity. In this paper, we present the ‘little seal bug’ attack, an optical side-channel attack that exploits sound-induced fluctuations in air pressure on the surface of a shiny object to recover speech optically and passively using a photodiode. These air pressure fluctuations cause the shiny object to vibrate and reflect light modulated by the nearby sound; as a result, these objects can be used by eavesdroppers (e.g., a private investigator, a surveilling spouse) to recover the content of a victim's conversation when the victim is near such objects. We show how to determine the sensitivity specifications of the optical equipment (photodiode, ADC, etc.) needed to recover the minuscule vibrations of lightweight shiny objects caused by the surrounding sound waves. Given the optical measurements obtained from light reflected off shiny objects, we design and utilize an algorithm to isolate the speech contents from the optical measurements. In our evaluation of the ‘little seal bug’ attack, we compare its performance to that of related methods. 
We find eavesdroppers can exploit various lightweight shiny objects to optically recover the content of conversations at equal/higher quality than prior methods (fair-excellent intelligibility) while doing so from greater distances (up to 35 meters) and lower speech volumes (75 dB). We conclude that lightweight shiny objects are a potent attack vector for recovering speech optically, and can be harmful to victims being targeted for sensitive information conveyed in a spoken conversation (e.g., in cases of corporate espionage or intimate partner violence/surveillance) when seated at a desk near a lightweight reflective object.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134269689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey of Parser Differential Anti-Patterns","authors":"Sameed Ali, Sean W. Smith","doi":"10.1109/SPW59333.2023.00016","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00016","url":null,"abstract":"Parser differentials emerge when two (or more) parsers interpret the same input in different ways. Differences in parsing behavior are difficult to detect due to (1) challenges in abstracting out the parser from complex code-bases and (2) proving the equivalence of parsers. Parser differentials remain understudied as they are a novel unexpected bug resulting from the interaction of software components—sometimes even independent modules—which may individually appear bug-free. We present a survey of many known parser differentials and conduct a root-cause analysis of them. We do so with an aim to uncover insights on how we can best conceptualize the underlying causes of their emergence. In studying these differentials, we have isolated certain design anti-patterns that give rise to parser differentials in software systems. We show how these differentials do not fit nicely into the state-of-the-art model of parser differentials and thus propose improvements to it.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127897255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Your Email Address Holds the Key: Understanding the Connection Between Email and Password Security with Deep Learning","authors":"Etienne Salimbeni, Nina Mainusch, Dario Pasquini","doi":"10.1109/SPW59333.2023.00015","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00015","url":null,"abstract":"In this work, we investigate the effectiveness of deep-learning-based password guessing models for targeted attacks on human-chosen passwords. In recent years, service providers have increased the level of security of users' passwords. This is done by requiring more complex password generation patterns and by using computationally expensive hash functions. For attackers, this means a reduced number of available guessing attempts, which introduces the necessity to target their guesses by exploiting a victim's publicly available information. In this work, we introduce a context-aware password guessing model that better captures attackers' behavior. We demonstrate that knowing a victim's email address is already critical in compromising the associated password and provide an in-depth analysis of the relationship between them. We also show the potential of such models to identify clusters of users based on their password generation behaviour, which can spot fake profiles and populations more vulnerable to context-aware guesses. 
The code is publicly available at https://github.com/spring-epfl/DCM_sp.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126373397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Password Composition Policy and Password Meters of Popular Websites","authors":"Kyungchan Lim, Joshua H. Kang, Matthew Dixson, Hyungjoon Koo, Doowon Kim","doi":"10.1109/SPW59333.2023.00006","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00006","url":null,"abstract":"Password-based authentication is one of the most commonly adopted mechanisms for online security. Choosing strong passwords is crucial for protecting one's digital identities and assets, as weak passwords can be readily guessable, resulting in a compromise such as unauthorized access. To promote the use of strong passwords on the Web, the National Institute of Standards and Technology (NIST) provides website administrators with password composition policy (PCP) guidelines. We manually inspect popular websites to check if their password policies conform to NIST's PCP guidelines by generating passwords that meet each criterion and testing 100 popular websites. Our findings reveal that a considerable number of websites (on average, 53.5%) do not comply with the guidelines, which could result in password breaches.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"516 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133070502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CustomProcessingUnit: Reverse Engineering and Customization of Intel Microcode","authors":"Pietro Borrello, Catherine Easdon, Martin Schwarzl, Roland Czerny, Michael Schwarz","doi":"10.1109/SPW59333.2023.00031","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00031","url":null,"abstract":"Microcode provides an abstraction layer over the instruction set to decompose complex instructions into simpler micro-operations that can be more easily implemented in hardware. It is an essential optimization to simplify the design of x86 processors. However, introducing an additional layer of software beneath the instruction set poses security and reliability concerns. The microcode details are confidential to the manufacturers, preventing independent auditing or customization of the microcode. Moreover, microcode patches are signed and encrypted to prevent unauthorized patching and reverse engineering. However, recent research has recovered decrypted microcode and reverse-engineered read/write debug mechanisms on Intel Goldmont (Atom), making analysis and customization of microcode possible on a modern Intel microarchitecture. In this work, we present the first framework for static and dynamic analysis of Intel microcode. Building upon prior research, we reverse-engineer Goldmont microcode semantics and reconstruct the patching primitives for microcode customization. For static analysis, we implement a Ghidra processor module for decompilation and analysis of decrypted microcode. For dynamic analysis, we create a UEFI application that can trace and patch microcode to provide complete microcode control on Goldmont systems. Leveraging our framework, we reverse-engineer the confidential Intel microcode update algorithm and perform the first security analysis of its design and implementation. In three further case studies, we illustrate the potential security and performance benefits of microcode customization. 
We provide the first x86 Pointer Authentication Code (PAC) microcode implementation and its security evaluation, design and implement fast software breakpoints that are more than 1000x faster than standard breakpoints, and present constant-time microcode division.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130737765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SafeFL: MPC-friendly Framework for Private and Robust Federated Learning","authors":"Till Gehlhar, F. Marx, T. Schneider, Ajith Suresh, Tobias Wehrle, Hossein Yalame","doi":"10.1109/SPW59333.2023.00012","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00012","url":null,"abstract":"Federated learning (FL) has gained widespread popularity in a variety of industries due to its ability to locally train models on devices while preserving privacy. However, FL systems are susceptible to i) privacy inference attacks and ii) poisoning attacks, through which corrupt actors can compromise the system. Despite a significant amount of work being done to tackle these attacks individually, the combination of these two attacks has received limited attention in the research community. To address this gap, we introduce SafeFL, a secure multiparty computation (MPC)-based framework designed to assess the efficacy of FL techniques in addressing both privacy inference and poisoning attacks. The heart of the SafeFL framework is a communicator interface that enables PyTorch-based implementations to utilize the well-established MP-SPDZ framework, which implements various MPC protocols. The goal of SafeFL is to facilitate the development of more efficient FL systems that can effectively address privacy inference and poisoning attacks.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133938460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPThreats-3: Is Automatic Malware Generation a Threat?","authors":"Marcus Botacin","doi":"10.1109/SPW59333.2023.00027","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00027","url":null,"abstract":"Recent research advances introduced large textual models, of which GPT-3 is state-of-the-art. They enable many applications, such as generating text and code. Whereas the model's capabilities might be used for good, they might also cause some negative impact: The model's code generation capabilities might be used by attackers to assist in malware creation, a phenomenon that must be understood. In this work, our goal is to answer the question: Can current large textual models (represented by GPT-3) already be used by attackers to generate malware? If so: How can attackers use these models? We explore multiple coding strategies, ranging from the entire malware description to separate descriptions of malware functions that can be used as building blocks. We also test the model's ability to rewrite malware code in multiple manners. Our experiments show that GPT-3 still has trouble generating entire malware samples from complete descriptions but that it can easily construct malware via building block descriptions. 
It also still has limitations in understanding the described contexts, but once it does, it generates multiple versions with the same semantics (malware variants), whose detection rates vary significantly (from 4 to 55 VirusTotal AVs).","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132942760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blind Spots: Identifying Exploitable Program Inputs","authors":"Henrik Brodin, Marek Surovic, E. Sultanik","doi":"10.1109/SPW59333.2023.00021","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00021","url":null,"abstract":"A blind spot is any input to a program that can be arbitrarily mutated without affecting the program's output. Blind spots can be used for steganography or to embed malware payloads. If blind spots overlap file format keywords, they indicate parsing bugs that can lead to exploitable differentials. For example, one could craft a document that renders one way in one viewer and a completely different way in another viewer. They have also been used to circumvent code signing in Android binaries, to coerce certificate authorities to misbehave, and to execute HTTP request smuggling and parameter pollution attacks. This paper formalizes the operational semantics of blind spots, leading to a technique based on dynamic information flow tracking that automatically detects blind spots. An efficient implementation is introduced and evaluated against a corpus of over a thousand diverse PDFs parsed through MuPDF (https://mupdf.com/), revealing exploitable bugs in the parser. All of the blind spot classifications are confirmed to be correct and the missed detection rate is no higher than 11%. On average, at least 5% of each PDF file is completely ignored by the parser. Our results show promise that this technique is an efficient automated means to detect exploitable parser bugs, over-permissiveness and differentials. 
Nothing in the technique is tied to PDF in particular, so it can be immediately applied to other notoriously difficult-to-parse formats like ELF, X.509, and XML.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124332600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}