{"title":"Exploring Widevine for Fun and Profit","authors":"Gwendal Patat, M. Sabt, Pierre-Alain Fouque","doi":"10.48550/arXiv.2204.09298","DOIUrl":"https://doi.org/10.48550/arXiv.2204.09298","url":null,"abstract":"For years, Digital Right Management (DRM) systems have been used as the go-to solution for media content protection against piracy. With the growing consumption of content using Over-the-Top platforms, such as Netflix or Prime Video, DRMs have been deployed on numerous devices considered as potential hostile environments. In this paper, we focus on the most widespread solution, the closed-source Widevine DRM.Installed on billions of devices, Widevine relies on crypto-graphic operations to protect content. Our work presents a study of Widevine internals on Android, mapping its distinct components and bringing out its different cryptographic keys involved in content decryption. We provide a structural view of Widevine as a protocol with its complete key ladder. Based on our insights, we develop WideXtractor, a tool based on Frida to trace Widevine function calls and intercept messages for inspection. Using this tool, we analyze Netflix usage of Widevine as a proof-of-concept, and raised privacy concerns on user-tracking. In addition, we leverage our knowledge to bypass the obfuscation of Android Widevine software-only version, namely L3, and recover its Root-of-Trust.","PeriodicalId":334852,"journal":{"name":"2022 IEEE Security and Privacy Workshops (SPW)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122142339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike","authors":"Johannes Schneider, Giovanni Apruzzese","doi":"10.48550/arXiv.2203.10166","DOIUrl":"https://doi.org/10.48550/arXiv.2203.10166","url":null,"abstract":"We propose to generate adversarial samples by modifying activations of upper layers encoding semantically meaningful concepts. The original sample is shifted towards a target sample, yielding an adversarial sample, by using the modified activations to reconstruct the original sample. A human might (and possibly should) notice differences between the original and the adversarial sample. Depending on the attacker-provided constraints, an adversarial sample can exhibit subtle differences or appear like a \"forged\" sample from another class. Our approach and goal are in stark contrast to common attacks involving perturbations of single pixels that are not recognizable by humans. Our approach is relevant in, e.g., multistage processing of inputs, where both humans and machines are involved in decision-making because invisible perturbations will not fool a human. Our evaluation focuses on deep neural networks. We also show the transferability of our adversarial examples among networks.","PeriodicalId":334852,"journal":{"name":"2022 IEEE Security and Privacy Workshops (SPW)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124887703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical detection of format dialects using the weighted Dowker complex","authors":"Michael Robinson, Le Li, Cory Anderson, Steve Huntsman","doi":"10.1109/SPW54247.2022.9833862","DOIUrl":"https://doi.org/10.1109/SPW54247.2022.9833862","url":null,"abstract":"This paper provides an experimentally validated, probabilistic model of file behavior when consumed by a set of pre-existing parsers. File behavior is measured by way of a standardized set of Boolean \"messages\" produced as the files are read. By thresholding the posterior probability that a file exhibiting a particular set of messages is from a particular dialect, our model yields a practical classification algorithm for two dialects. We demonstrate that this thresholding algorithm for two dialects can be bootstrapped from a training set consisting primarily of one dialect. Both the theoretical and the empirical distributions of file behaviors for one dialect yield good classification performance, and outperform classification based on simply counting messages.Our theoretical framework relies on statistical independence of messages within each dialect. Violations of this assumption are detectable and allow a format analyst to identify \"boundaries\" between dialects. By restricting their attention to the files that lie within these boundaries, format analysts can more efficiently craft new criteria for dialect detection.","PeriodicalId":334852,"journal":{"name":"2022 IEEE Security and Privacy Workshops (SPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130230190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parameterizing Activation Functions for Adversarial Robustness","authors":"Sihui Dai, Saeed Mahloujifar, Prateek Mittal","doi":"10.1109/spw54247.2022.9833884","DOIUrl":"https://doi.org/10.1109/spw54247.2022.9833884","url":null,"abstract":"Deep neural networks are known to be vulnerable to adversarially perturbed inputs. A commonly used defense is adversarial training, whose performance is influenced by model architecture. While previous works have studied the impact of varying model width and depth on robustness, the impact of using learnable parametric activation functions (PAFs) has not been studied. We study how using learnable PAFs can improve robustness in conjunction with adversarial training. We first ask the question: Can changing activation function shape improve robustness? To address this, we choose a set of PAFs with parameters that allow us to independently control behavior on negative inputs, inputs near zero, and positive inputs. Using these PAFs, we train models using adversarial training with fixed PAF shape parameter values. We find that all regions of PAF shape influence the robustness of obtained models, however only variation in certain regions (inputs near zero, positive inputs) can improve robustness over ReLU. We then combine learnable PAFs with adversarial training and analyze robust performance. We find that choice of activation function can significantly impact the robustness of the trained model. We find that only certain PAFs, such as smooth PAFs, are able to improve robustness significantly over ReLU. Overall, our work puts into context the importance of activation functions in adversarially trained models.","PeriodicalId":334852,"journal":{"name":"2022 IEEE Security and Privacy Workshops (SPW)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132883012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomaly Detection with Neural Parsers That Never Reject","authors":"Alexander Grushin, Walt Woods","doi":"10.1109/spw54247.2022.9833865","DOIUrl":"https://doi.org/10.1109/spw54247.2022.9833865","url":null,"abstract":"Reinforcement learning has recently shown promise as a technique for training an artificial neural network to parse sentences in some unknown format, through a body of work known as RL-GRIT. A key aspect of the RL-GRIT approach is that rather than explicitly inferring a grammar that describes the format, the neural network learns to perform various parsing actions (such as merging two tokens) over a corpus of sentences, with the goal of maximizing the estimated frequency of the resulting parse structures. This can allow the learning process to more easily explore different action choices, since a given choice may change the optimality of the parse (as expressed by the total reward), but will not result in the failure to parse a sentence. However, this also presents a limitation: because the trained neural network can successfully parse any sentence, it cannot be directly used to identify sentences that deviate from the format of the training sentences, i.e., that are anomalous. In this paper, we address this limitation by presenting procedures for extracting production rules from the neural network, and for using these rules to determine whether a given sentence is nominal or anomalous. When a sentence is anomalous, an attempt is made to identify the location of the anomaly. We empirically demonstrate that our approach is capable of grammatical inference and anomaly detection for both non-regular formats and those containing regions of high randomness/entropy. While a format with high randomness typically requires large sets of production rules, we propose a two pass grammatical inference method to generate parsimonious rule sets for such formats. By further improving parser learning, and leveraging the presented rule extraction and anomaly detection algorithms, one might begin to understand common errors, either benign or malicious, in practical formats.","PeriodicalId":334852,"journal":{"name":"2022 IEEE Security and Privacy Workshops (SPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130845519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}