{"title":"Universal Threshold Calculation for Fingerprinting Decoders using Mixture Models","authors":"Marcel Schäfer, Sebastian Mair, Waldemar Berchtold, M. Steinebach","doi":"10.1145/2756601.2756611","DOIUrl":"https://doi.org/10.1145/2756601.2756611","url":null,"abstract":"Collusion attacks on watermarked media copies are commonly countered by probabilistically generated fingerprinting codes and appropriate tracing algorithms. The latter calculates accusation scores representing the suspiciousness of the fingerprints. In a 'detect many' scenario a threshold decides which scores are associated to the colluders. This work proposes a universal method to calculate thresholds for different decoders solely with knowledge of the accusation scores from the actual attack. Applying mixture models on the scores, the threshold is set up satisfying the selected error probabilities. It is independent from the fingerprint generation and can be applied at any decoder. Also no knowledge about the number of attackers or their strategy is needed.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116313888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IoT Privacy: Can We Regain Control?","authors":"Richard Chow","doi":"10.1145/2756601.2756623","DOIUrl":"https://doi.org/10.1145/2756601.2756623","url":null,"abstract":"Privacy is part of the Internet of Things (IoT) discussion because of the increased potential for sensitive data collection. In the vision for IoT, sensors penetrate ubiquitously into our physical lives and are funneled into big data systems for analysis. IoT data allows new benefits to end users - but also allows new inferences that erode privacy. The usual privacy mechanisms employed by users no longer work in the context of IoT. Users can no longer turn off a service (e.g., GPS), nor can they even turn off a device and expect to be safe from tracking. IoT means the monitoring and data collection is continuing even in the physical world. On a computer, we have at least a semblance of control and can in principle determine what applications are running and what data they are collecting. For example, on a traditional computer, we do have malware defenses - even if imperfect. Such defenses are strikingly absent for IoT, and it is unclear how traditional defenses can be applied to IoT. The issue of control is the main privacy problem in the context of IoT. Users generally don't know about all the sensors in the environment (with the potential exception of sensors in the user's own home). Present-day examples are WiFi MAC trackers and Google Glass, of course, but systems in the future will become even less discernible. In one sense, this is a security problem - detecting malicious devices or \"environmental malware.\" But it is also a privacy problem - many sensor devices in fact want to be transparent to users (for instance, by adopting a traditional notice-and-consent model), but are blocked by the lack of a natural communication channel to the user. Even assuming communication mechanisms, we have complex usability problems. For instance, we need to understand what sensors a person might be worried about and in what contexts. Audio capture at home is different from audio capture in a lecture hall. What processing is done on the sensor data may also be important. A camera capturing video for purposes of gesture recognition may be less worrisome than for purposes of facial recognition (and, of course, the user needs assurance on the proclaimed processing). Finally, given the large number of \"things\", the problem of notice fatigue must be dealt with, or notifications will become no more useful than browser security warnings. In this talk, we discuss all these problems in detail, together with potential solutions.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129369707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thumbnail-Preserving Encryption for JPEG","authors":"C. V. Wright, W. Feng, Feng Liu","doi":"10.1145/2756601.2756618","DOIUrl":"https://doi.org/10.1145/2756601.2756618","url":null,"abstract":"With more and more data being stored in the cloud, securing multimedia data is becoming increasingly important. Use of existing encryption methods with cloud services is possible, but makes many web-based applications difficult or impossible to use. In this paper, we propose a new image encryption scheme specially designed to protect JPEG images in cloud photo storage services. Our technique allows efficient reconstruction of an accurate low-resolution thumbnail from the ciphertext image, but aims to prevent the extraction of any more detailed information. This will allow efficient storage and retrieval of image data in the cloud but protect its contents from outside hackers or snooping cloud administrators. Experiments of the proposed approach using an online selfie database show that it can achieve a good balance of privacy, utility, image quality, and file size.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126111654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Imprecise Knowledge of the Selection Channel on Steganalysis","authors":"V. Sedighi, J. Fridrich","doi":"10.1145/2756601.2756621","DOIUrl":"https://doi.org/10.1145/2756601.2756621","url":null,"abstract":"It has recently been shown that steganalysis of content-adaptive steganography can be improved when the Warden incorporates in her detector the knowledge of the selection channel -- the probabilities with which the individual cover elements were modified during embedding. Such attacks implicitly assume that the Warden knows at least approximately the payload size. In this paper, we study the loss of detection accuracy when the Warden uses a selection channel that was imprecisely determined either due to lack of information or the stego changes themselves. The loss is investigated for two types of qualitatively different detectors -- binary classifiers equipped with selection-channel-aware rich models and optimal detectors derived using the theory of hypothesis testing from a cover model. Two different embedding paradigms are addressed -- steganography based on minimizing distortion and embedding that minimizes the detectability of an optimal detector within a chosen cover model. Remarkably, the experimental and theoretical evidence are qualitatively in agreement across different embedding methods, and both point out that inaccuracies in the selection channel do not have a strong effect on steganalysis detection errors. It pays off to use imprecise selection channel rather than none. Our findings validate the use of selection-channel-aware detectors in practice.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122958671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SATTVA: SpArsiTy inspired classificaTion of malware VAriants","authors":"L. Nataraj, S. Karthikeyan, B. S. Manjunath","doi":"10.1145/2756601.2756616","DOIUrl":"https://doi.org/10.1145/2756601.2756616","url":null,"abstract":"There is an alarming increase in the amount of malware that is generated today. However, several studies have shown that most of these new malware are just variants of existing ones. Fast detection of these variants plays an effective role in thwarting new attacks. In this paper, we propose a novel approach to detect malware variants using a sparse representation framework. Exploiting the fact that most malware variants have small differences in their structure, we model a new/unknown malware sample as a sparse linear combination of other malware in the training set. The class with the least residual error is assigned to the unknown malware. Experiments on two standard malware datasets, Malheur dataset and Malimg dataset, show that our method outperforms current state of the art approaches and achieves a classification accuracy of 98.55% and 92.83% respectively. Further, by using a confidence measure to reject outliers, we obtain 100% accuracy on both datasets, at the expense of throwing away a small percentage of outliers. Finally, we evaluate our technique on two large scale malware datasets: Offensive Computing dataset (2,124 classes, 42,480 malware) and Anubis dataset (209 classes, 36,784 samples). On both datasets our method obtained an average classification accuracy of 77%, thus making it applicable to real world malware classification.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131373257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Sensor Pattern Noise for Source Camera Identification: An Empirical Evaluation","authors":"Bei-Bei Liu, Xingjie Wei, Jeff Yan","doi":"10.1145/2756601.2756614","DOIUrl":"https://doi.org/10.1145/2756601.2756614","url":null,"abstract":"The sensor pattern noise (SPN) based source camera identification technique has been well established. The common practice is to subtract a denoised image from the original one to get an estimate of the SPN. Various techniques to improve SPN's reliability have previously been proposed. Identifying the most effective technique is important, for both researchers and forensic investigators in law enforcement agencies. Unfortunately, the results from previous studies have proven to be irreproducible and incomparable dash there is no consensus on which technique works the best. Here, we extensively evaluate various ways of enhancing the SPN by using the public Dresden database. We identify which enhancing methods are more effective and offer some insights into the behavior of SPN. For example, we find that the most effective enhancing methods share a common strategy of spectrum flattening. We also show that methods that only aim at reducing the contamination from image content do not lead to satisfying results, since the non-unique artifacts (NUA) among different cameras are the major troublemaker to the identification performance. While there is a trend of employing sophisticate methods to predict the impact of image content, our results suggest that more effort should be invested to tame the NUAs.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"91 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117296356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ForeMan, a Versatile and Extensible Database System for Digitized Forensics Based on Benchmarking Properties","authors":"Christian Arndt, Stefan Kiltz, J. Dittmann, R. Fischer","doi":"10.1145/2756601.2756615","DOIUrl":"https://doi.org/10.1145/2756601.2756615","url":null,"abstract":"To benefit from new opportunities offered by the digitalization of forensic disciplines, the challenges especially w.r.t. comprehensibility and searchability have to be met. Important tools in this forensic process are databases containing digitized representations of physical crime scene traces. We present ForeMan, an extensible database system for digitized forensics handling separate databases and enabling intra and inter trace type searches. It now contains 762 fiber data sets and 27 fingerprint data sets (anonymized time series). Requirements of the digitized forensic process model are mapped to design aspects and conceptually modeled around benchmarking properties. A fiber categorization scheme is used to structure fiber data according to forensic use case identification. Our research extends the benchmarking properties by fiber fold shape derived from the application field of fibers (part of micro traces) and sequence number derived from the application field of time series analysis for fingerprint aging research. We identify matching data subsets from both digitized trace types and introduce the terms of entity-centered and spatial-centered information. We show how combining two types of digitized crime scene traces (fiber and fingerprint data) can give new insights for research and casework and discuss requirements for other trace types such as firearm and toolmarks.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129321231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiHB: Lost in HTTP Behaviors - A Behavior-Based Covert Channel in HTTP","authors":"Yao Shen, Liusheng Huang, Fei Wang, Xiaorong Lu, Wei Yang, Lu Li","doi":"10.1145/2756601.2756605","DOIUrl":"https://doi.org/10.1145/2756601.2756605","url":null,"abstract":"The application-layer covert channels have been extensively studied in recent years. Information-hiding in ubiquitous application packets can significantly improve the capacity of covert channels. However, the undetectability is still a knotty problem, because the existing covert channels are all frustrated by proper detection schemes. In this paper, we propose LiHB, a behavior-based covert channel in HTTP. When a client is browsing a website and downloading webpage objects, we can reveal some fluctuation behaviors that the distribution relationship between the ports opening and HTTP requests are flexible. Based on combinatorial nature of distributing N HTTP requests over M HTTP flows, such fluctuation can be exploited by LiHB channel to encode covert messages, which can obtain high stealthiness. Besides, LiHB achieves a considerable and controllable capacity by setting the number of webpage objects and HTTP flows. Compared with existing techniques, LiHB is the first covert channel implemented based on the unsuspicious behavior of browsers, the most important application-layer software. Because most HTTP proxies are using NAPT techniques, LiHB can also operate well even when a proxy is equipped, which poses a serious threat to individual privacy. Experimental results show that LiHB covert channel achieves a good capacity, reliability and high undetectability.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127062739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touch-based Static Authentication Using a Virtual Grid","authors":"W. Bond, A. AhmedAwadE.","doi":"10.1145/2756601.2756602","DOIUrl":"https://doi.org/10.1145/2756601.2756602","url":null,"abstract":"Keystroke dynamics is a subfield of computer security in which the cadence of the typist's keystrokes are used to determine authenticity. The static variety of keystroke dynamics uses typing patterns observed during the typing of a password or passphrase. This paper presents a technique for static authentication on mobile tablet devices using neural networks for analysis of keystroke metrics. Metrics used in the analysis of typing are monographs, digraphs, and trigraphs. Monographs as we define them consist of the time between the press and release of a single key, coupled with the discretized x-y location of the keystroke on the tablet. A digraph is the duration between the presses of two consecutively pressed keys, and a trigraph is the duration between the press of a key and the press of a key two keys later. Our technique combines the analysis of monographs, digraphs, and trigraphs to produce a confidence measure. Our best equal error rate for distinguishing users from impostors is 9.3% for text typing, and 9.0% for a custom experiment setup that is discussed in detail in the paper.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127752344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-to-Display Encryption: A Pixel-Domain Encryption with Security Benefit","authors":"S. Burg, Dustin Peterson, O. Bringmann","doi":"10.1145/2756601.2756613","DOIUrl":"https://doi.org/10.1145/2756601.2756613","url":null,"abstract":"Providing secure access to confidential information is extremely difficult, notably when regarding weak endpoints and users. With the increasing number of corporate espionage cases and data leaks, a usable approach enhancing the security of data on endpoints is needed. In this paper we present our implementation for providing a new level of security for confidential documents that are viewed on a display. We call this End-to-Display Encryption (E2DE). E2DE encrypts images in the pixel-domain before transmitting them to the user. These images can then be displayed by arbitrary image viewers and are sent to the display. On the way to the display, the data stream is analyzed and the encrypted pixels are decrypted depending on a private key stored on a chip card inserted in the receiver, creating a viewable representation of the confidential data on the display, without decrypting the information on the computer itself. We implemented a prototype on a Digilent Atlys FPGA Board supporting resolutions up to Full HD.","PeriodicalId":153680,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131077978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}