{"title":"POSTER: On searching information leakage of Python model execution to detect adversarial examples","authors":"Chenghua Guo, Fang Yu","doi":"10.1145/3579856.3592828","DOIUrl":"https://doi.org/10.1145/3579856.3592828","url":null,"abstract":"The predictive capabilities of machine learning models have improved significantly in recent years, leading to their widespread use in various fields. However, these models remain vulnerable to adversarial attacks, where carefully crafted inputs can mislead predictions and compromise the security of critical systems. Therefore, it is crucial to develop effective methods for detecting and preventing such attacks. Given that many neural network models are implemented using Python, this study addresses the issue of detecting adversarial examples from a new perspective by investigating information leakage in their Python model executions. To realize this objective, we propose a novel Python interpreter that utilizes Python bytecode instrumentation to profile layer-wise instruction-level program executions. We then search for information leakage on both legal and adversarial inputs, identifying their side-channel differences in call executions (i.e., call count, return values, and execution time) and synthesize the detection rule accordingly. Our approach is evaluated against TorchAttacks, AdvDoor, and RNN-Test attacks, targeting various models and applications. Our findings indicate that while there is call-return-value leakage on TorchAttacks images, there is no leakage to detect AdvDoor and RNN-Test attacks based on execution time or return values of string, integer, float, and Boolean type functions.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129567255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POSTER: ML-Compass: A Comprehensive Assessment Framework for Machine Learning Models","authors":"Zhibo Jin, Zhiyu Zhu, Hongsheng Hu, Minhui Xue, Huaming Chen","doi":"10.1145/3579856.3592823","DOIUrl":"https://doi.org/10.1145/3579856.3592823","url":null,"abstract":"Machine learning models have made significant breakthroughs across various domains. However, it is crucial to assess these models to obtain a complete understanding of their capabilities and limitations and ensure their effectiveness and reliability in solving real-world problems. In this paper, we present a framework, termed ML-Compass, that covers a broad range of machine learning abilities, including utility evaluation, neuron analysis, robustness evaluation, and interpretability examination. We use this framework to assess seven state-of-the-art classification models on four benchmark image datasets. Our results indicate that different models exhibit significant variation, even when trained on the same dataset. This highlights the importance of using the assessment framework to comprehend their behavior.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132945665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Context Switching of Masked Software Implementations","authors":"Barbara Gigerl, R. Primas, S. Mangard","doi":"10.1145/3579856.3595798","DOIUrl":"https://doi.org/10.1145/3579856.3595798","url":null,"abstract":"Cryptographic software running on embedded devices requires protection against physical side-channel attacks such as power analysis. Masking is a widely deployed countermeasure against these attacks and is directly implemented on algorithmic level. Many works study the security of masked cryptographic software on CPUs, pointing out potential problems on algorithmic/microarchitecture-level, as well as corresponding solutions, and even show masked software can be implemented efficiently and with strong (formal) security guarantees. However, these works also make the implicit assumption that software is executed directly on the CPU without any abstraction layers in-between, i.e., they focus exclusively on the bare-metal case. Many practical applications, including IoT and automotive/industrial environments, require multitasking embedded OSs on which masked software runs as one out of many concurrent tasks. For such applications, the potential impact of events like context switches on the secure execution of masked software has not been studied so far at all. In this paper, we provide the first security analysis of masked cryptographic software spanning all three layers (SW, OS, CPU). First, we apply a formal verification approach to identify leaks within the execution of masked software that are caused by the embedded OS itself, rather than on algorithmic or microarchitecture level. After showing that these leaks are primarily caused by context switching, we propose several different strategies to harden a context switching routine against such leakage, ultimately allowing masked software from previous works to remain secure when being executed on embedded OSs. Finally, we present a case study focusing on FreeRTOS, a popular embedded OS for embedded devices, running on a RISC-V core, allowing us to evaluate the practicality and ease of integration of each strategy.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":" 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132011361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cage4Deno: A Fine-Grained Sandbox for Deno Subprocesses","authors":"M. Abbadini, Dario Facchinetti, Gianluca Oldani, Matthew Rossi, S. Paraboschi","doi":"10.1145/3579856.3595799","DOIUrl":"https://doi.org/10.1145/3579856.3595799","url":null,"abstract":"Deno is a runtime for JavaScript and TypeScript that is receiving great interest by developers, and is increasingly used for the construction of back-ends of web applications. A primary goal of Deno is to provide a secure and isolated environment for the execution of JavaScript programs. It also supports the execution of subprocesses, unfortunately without providing security guarantees. In this work we propose Cage4Deno, a set of modifications to Deno enabling the creation of fine-grained sandboxes for the execution of subprocesses. The design of Cage4Deno satisfies the compatibility, transparency, flexibility, usability, security, and performance needs of a modern sandbox. The realization of these requirements partially stems from the use of Landlock and eBPF, two robust and efficient security technologies. Significant attention has been paid to the design of a flexible and compact policy model consisting of RWX permissions, which can be automatically created, and deny rules to declare exceptions. The sandbox effectiveness is demonstrated by successfully blocking a number of exploits for recent CVEs, while runtime experiments prove its efficiency. The proposal is associated with an open-source implementation.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114143646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BinWrap: Hybrid Protection against Native Node.js Add-ons","authors":"G. Christou, Grigoris Ntousakis, Eric Lahtinen, S. Ioannidis, V. Kemerlis, Nikos Vasilakis","doi":"10.1145/3579856.3590330","DOIUrl":"https://doi.org/10.1145/3579856.3590330","url":null,"abstract":"Modern applications, written in high-level programming languages, enjoy the security benefits of memory and type safety. Unfortunately, even a single memory-unsafe library can wreak havoc on the rest of an otherwise safe application, nullifying all the security guarantees offered by the high-level language and its managed runtime. We perform a study across the Node.js ecosystem to understand the use patterns of binary add-ons. Taking the identified trends into account, we propose a new hybrid permission model aimed at protecting both a binary add-on and its language-specific wrapper. The permission model is applied all around a native add-on and is enforced through a hybrid language-binary scheme that interposes on accesses to sensitive resources from all parts of the native library. We infer the add-on’s permission set automatically over both its binary and JavaScript sides, via a set of novel program analyses. Applied to a wide variety of native add-ons, we show that our framework, BinWrap, reduces access to sensitive resources, defends against real-world exploits, and imposes an overhead that ranges between 0.71%–10.4%.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124914169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Trade-off SVP-solving Strategy based on a Sharper pnj-BKZ Simulator","authors":"Lei Wang, Yuntao Wang, Baocang Wang","doi":"10.1145/3579856.3595802","DOIUrl":"https://doi.org/10.1145/3579856.3595802","url":null,"abstract":"The lattice-based cryptography is one of the most promising candidates in the era of post-quantum cryptography. It is necessary to precisely choose the practical parameters by evaluating the hardness of the underlying hard mathematical problems, such as the shortest vector problem (SVP). Currently, there are two state-of-the-art strategies for solving (approximate) SVP. One is the SVP-solving strategy proposed in G6K[5], which has the least solving time cost but high memory cost requirements; another is to execute progressive BKZ (pBKZ)[8] for pre-processing at first and call the high-dimensional SVP-oracle to find the short vector on the original lattice. Due to the strong pre-processing on the lattice basis, the memory cost of the latter strategy is usually smaller than that of the former strategy, while the time cost of pre-processing is relatively costly. In this paper, we first optimize the pnj-BKZ simulator when the jump value is quite large by giving a refined dimension for free (d4f) estimation. Then, based on our optimized pnj-BKZ simulator, we show a more accurate hardness estimation of LWE by considering technologies such as progressive BKZ pre-processing technology, jump strategy, and d4f technology. Furthermore, based on the sharper pnj-BKZ simulator, we propose an SVP-solving strategy trade-off between G6K and pBKZ, which derives less time cost than pBKZ within less memory compared with G6K. Experimental results show that when solving the TU Darmstadt SVP challenge, our algorithm can save 50%-66% of memory compared with G6K’s default SVP-solving strategy. Moreover, our algorithm speeds up the pre-processing stage by 7-30 times, saving the time cost by 4-6 times compared with the pBKZ default SVP-solving strategy. Using our proposed strategy, we solved the 170-dimensional TU Darmstadt SVP challenge and up to the 176-dimensional ideal lattice challenge.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116064644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security Properties of Virtual Remotes and SPOOKing their violations","authors":"Josh Majors, Edgardo Barsallo Yi, A. Maji, Darren Wu, S. Bagchi, Aravind Machiry","doi":"10.1145/3579856.3582834","DOIUrl":"https://doi.org/10.1145/3579856.3582834","url":null,"abstract":"As Smart TV devices become more prevalent in our lives, it becomes increasingly important to evaluate the security of these devices. In addition to a smart and connected ecosystem through apps, Smart TV devices expose a WiFi remote protocol, that provides a virtual remote capability and allows a WiFi enabled device (e.g., a Smartphone) to control the Smart TV. The WiFi remote protocol might pose certain security risks that are not present in traditional TVs. In this paper, we assess the security of WiFi remote protocols by first identifying the desired security properties so that we achieve the same level of security as in traditional TVs. Our analysis of four popular Smart TV platforms, Android TV, Amazon FireOS, Roku OS, and WebOS (for LG TVs), revealed that all these platforms violate one or more of the identified security properties. To demonstrate the impact of these flaws, we develop Spook, which uses one of the commonly violated properties of a secure WiFi remote protocol to pair an Android mobile as a software remote to an Android TV. Subsequently, we hijack the Android TV device through the device debugger, enabling complete remote control of the device. All our findings have been communicated to the corresponding vendors. Google acknowledged our findings as a security vulnerability, assigned it a CVE, and released patches to the Android TV OS to partially mitigate the attack. We argue that these patches provide a stopgap solution without ensuring that WiFi remote protocol has all the desired security properties. We design and implement a WiFi remote protocol in the Android ecosystem using ARM TrustZone. Our evaluation shows that the proposed defense satisfies all the security properties and ensures that we have the flexibility of virtual remote without compromising security.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122180257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model Stealing Attacks and Defenses: Where Are We Now?","authors":"N. Asokan","doi":"10.1145/3579856.3596441","DOIUrl":"https://doi.org/10.1145/3579856.3596441","url":null,"abstract":"The success of deep learning in many application domains has been nothing short of dramatic. This has brought the spotlight onto security and privacy concerns with machine learning (ML). One such concern is the threat of model theft. I will discuss work on exploring the threat of model theft, especially in the form of “model extraction attacks” — when a model is made available to customers via an inference interface, a malicious customer can use repeated queries to this interface and use the information gained to construct a surrogate model. I will also discuss possible countermeasures, focusing on deterrence mechanisms that allow for model ownership resolution (MOR) based on watermarking or fingerprinting. In particular, I will discuss the robustness of MOR schemes. I will touch on the issue of conflicts that arise when protection mechanisms for multiple different threats need to be applied simultaneously to a given ML model, using MOR techniques as a case study. This talk is based on work done with my students and collaborators, including Buse Atli Tekgul, Jian Liu, Mika Juuti, Rui Zhang, Samuel Marchal, and Sebastian Szyller. The work was funded in part by Intel Labs in the context of the Private AI consortium.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122745766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"#DM-Me: Susceptibility to Direct Messaging-Based Scams","authors":"R. Vardhan, Alok Chandrawal, Phakpoom Chinprutthiwong, Yangyong Zhang, G. Gu","doi":"10.1145/3579856.3582815","DOIUrl":"https://doi.org/10.1145/3579856.3582815","url":null,"abstract":"In an emerging scam on social media platforms, cyber-miscreants are luring users into sending them a direct-message (DM) and are subsequently exploiting the messaging channel. We term this attack approach as the DM-Me scam. We report on a survey of 214 MTurk participants, in which we make the first effort to systematically study the susceptibility of users in falling victim to DM-Me scams. We find that most participants chose to send a direct message to at least one scammer, and made such choices more than half the time. This susceptibility can be attributed to the misplaced trust in scammers and the lack of negative consequences foreseen by participants in messaging accounts that they do not fully trust. Interestingly, our results also suggest that women mostly from the 31-40 age-group and who predominantly use Instagram a few times a week are less susceptible than men to financial DM-Me scams as they appear to face more discomfort in initiating a conversation with unfamiliar accounts for such services. We conclude with future research directions in mitigating the risks posed by DM-Me scammers, specifically by developing reliable indicators to aid users in assessing the trustworthiness of an account.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130949614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Transformer-based Function Symbol Name Inference Model from an Assembly Language for Binary Reversing","authors":"Hyunjin Kim, Jinyeong Bak, Kyunghyun Cho, Hyungjoon Koo","doi":"10.1145/3579856.3582823","DOIUrl":"https://doi.org/10.1145/3579856.3582823","url":null,"abstract":"Reverse engineering of a stripped binary has a wide range of applications, yet it is challenging mainly due to the lack of contextually useful information within. Once debugging symbols (e.g., variable names, types, function names) are discarded, recovering such information is not technically viable with traditional approaches like static or dynamic binary analysis. We focus on a function symbol name recovery, which allows a reverse engineer to gain a quick overview of an unseen binary. The key insight is that a well-developed program labels a meaningful function name that describes its underlying semantics well. In this paper, we present AsmDepictor, the Transformer-based framework that generates a function symbol name from a set of assembly codes (i.e., machine instructions), which consists of three major components: binary code refinement, model training, and inference. To this end, we conduct systematic experiments on the effectiveness of code refinement that can enhance an overall performance. We introduce the per-layer positional embedding and Unique-softmax for AsmDepictor so that both can aid to capture a better relationship between tokens. Lastly, we devise a novel evaluation metric tailored for a short description length, the Jaccard* score. Our empirical evaluation shows that the performance of AsmDepictor by far surpasses that of the state-of-the-art models up to around 400%. The best AsmDepictor model achieves an F1 of 71.5 and Jaccard* of 75.4.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130090358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}