Title: Using Smart Cards for Tamper-Proof Timestamps on Untrusted Clients
Authors: Guenther Starnberger, Lorenz Froihofer, K. M. Göschka
Published in: 2010 International Conference on Availability, Reliability and Security (ARES 2010), March 25, 2010
DOI: https://doi.org/10.1109/ARES.2010.78
Abstract: Online auctions of governmental bonds and CO2 certificates are challenged by high availability requirements in the face of high peak loads around the auction deadline. Traditionally, these requirements are addressed by cluster solutions. However, with strong requirements regarding hardware ownership and only a few auctions per owner per year, hardware clusters are a rather ineffective solution. Consequently, we contribute a solution that alleviates the dependability problems by shifting them into the security domain: the key idea is to provide a secure timestamp service that allows users to place bids locally until the deadline, independent of server availability. This mitigates peak loads and network or server outages, as the transfer of bids to the server can be delayed until after a performance peak or the repair of a failed component. In particular, we contribute a secure time synchronization and timestamping protocol tailored to online auctions, in which secure timestamps are applied on smart cards locally connected to the bidder's computer. Moreover, our timestamping protocol is robust against man-in-the-middle delay attacks. Finally, we demonstrate the feasibility of our approach with a .NET smart card implementation and conclude with a discussion of current smart card limitations.
{"title":"Unified Public Key Infrastructure Supporting Both Certificate-Based and ID-Based Cryptography","authors":"Byoungcheon Lee","doi":"10.1109/ARES.2010.49","DOIUrl":"https://doi.org/10.1109/ARES.2010.49","url":null,"abstract":"Certificate-based cryptography and ID-based cryptography have been designed under different theoretical backgrounds and they have their own advantages and drawbacks, but there have been few works which try to provide them together in an efficient way. Chen et al. [4] considered a hybrid scheme of public key infrastructure (PKI) and ID-based encryption (IBE), and also discussed various trust relationship among multiple authorities, but they have not discussed more in-depth implementation issues of the hybrid scheme. In ID-based cryptography issuing private keys to users in escrow-free way had been an important issue. Lee et al. [12], [13] proposed a unique private key issuing protocol in the single authority multiple-observer (SAMO) model which can reduce the user authentication load a lot, but these schemes are subject to several attacks due to the lack of verifiable authentication of protocol messages [11].In this paper we show that these two problems can be solved by combining certificate-based and ID-based cryptography. In the proposed scheme certificate is issued to user for user-chosen public key and ID-based private key is issued to user through a private key issuing protocol. In the private key issuing protocol user is authenticated using the certificate and protocol messages are blinded using the certified public key of the user, thus the private key issuing protocol becomes private and also verifiable,which solves the authentication problem of [13].We further present the concept of unified public key infrastructure (UPKI) in which both certificate-based and ID-based cryptosystems are provided to users in a single framework. We also show that if interactions between end users are mainly executed using ID-based cryptography, then end users don’t need to manage other end users’ certificates, which is a great efficiency gain than traditional PKI.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122402817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-component View of Digital Forensics","authors":"CP Grobler, CP Louwrens, SH von Solms","doi":"10.1109/ARES.2010.61","DOIUrl":"https://doi.org/10.1109/ARES.2010.61","url":null,"abstract":"We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"409 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116693969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: External Insider Threat: A Real Security Challenge in Enterprise Value Webs
Authors: V. N. Franqueira, A. V. Cleeff, P. V. Eck, R. Wieringa
Published in: 2010 International Conference on Availability, Reliability and Security (ARES 2010), March 25, 2010
DOI: https://doi.org/10.1109/ARES.2010.40
Abstract: Increasingly, organizations collaborate with other organizations in value webs with various arrangements, such as outsourcing, partnering, joint ventures, or subcontracting. As the Jericho Forum (an industry consortium of the Open Group) observed, in all these forms of collaboration the boundaries between organizations become permeable and, as a consequence, insiders and outsiders can no longer be neatly separated using the notion of a perimeter. Such organizational arrangements have security implications because individuals from the value web are neither outsiders nor completely insiders. To address this phenomenon, this paper proposes a third set of individuals, called external insiders. External insiders add challenges to the already known insider threat problem because, unlike outsiders, external insiders have granted access and are trusted; and, unlike traditional insiders, external insiders are not subject to as many internal controls enforced by the organization for which they are external insiders. In fact, external insiders are part of two or more organizational control structures, and business-to-business contracts are often insufficiently detailed to establish security requirements at the level of granularity needed to counter the threat they pose.
{"title":"An Adaptive Redundancy Oriented Method to Tolerate Soft Errors in SRAM-Based FPGAs Using Unused Resources","authors":"Somayeh Bahramnejad, H. Zarandi","doi":"10.1109/ARES.2010.60","DOIUrl":"https://doi.org/10.1109/ARES.2010.60","url":null,"abstract":"In this paper, we present an adaptive SEU-tolerance method based on redundancy for implementing circuits in SRAM-based FPGAs to tolerate soft error effects. This method uses unused resources for partial redundancy based on a property of nets called System Failure Rate (SFR). SFR of a given net is the probability of system failure when the net is faulty. The redundancy is performed based on available resources, adaptively, so that system failure rate of circuit implemented in SRAM-based FPGAs decreases. We have investigated the effect of partial redundancy on several MCNC benchmarks. The results show that if maximum tolerable overall overhead is 20%, SFR increases up to 13%.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115006465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security and Usability: Analysis and Evaluation","authors":"Ronald Kainda, I. Flechais, A. W. Roscoe","doi":"10.1109/ARES.2010.77","DOIUrl":"https://doi.org/10.1109/ARES.2010.77","url":null,"abstract":"The differences between the fields of Human-Computer Interaction and Security (HCISec) and Human-Computer Interaction (HCI) have not been investigated very closely. Many HCI methods and procedures have been adopted by HCISec researchers, however the extent to which these apply to the field of HCISec is arguable given the fine balance between improving the ease of use of a secure system and potentially weakening its security. That is to say that the techniques prevalent in HCI are aimed at improving users' effectiveness, efficiency or satisfaction, but they do not take into account the potential threats and vulnerabilities that they can introduce. To address this problem, we propose a security and usability threat model detailing the different factors that are pertinent to the security and usability of secure systems, together with a process for assessing these.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129856757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Function Oriented Methodology to Validate and Verify Forensic Copy Function of Digital Forensic Tools","authors":"Yinghua Guo, J. Slay","doi":"10.1109/ARES.2010.16","DOIUrl":"https://doi.org/10.1109/ARES.2010.16","url":null,"abstract":"The growth in the computer forensic field has created a demand for new software (or increased functionality to existing software) and a means to verify that this software is truly forensic i.e. capable of meeting the requirements of the trier of fact. In this work, we present a function oriented testing framework for validation and verification of computer forensic tools. This framework consists of three parts: function mapping, requirements specification and reference set development.Through function mapping, we give a scientific and systemical description of the fundamentals of computer forensic practice, i.e. what functions are needed in the computer forensic investigation process. We focus this paper on the forensic copy function. We specify the requirements and develop and a corresponding reference set to test any tools that possess the forensic copy function.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129872018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Misuse Cases with Attack Trees and Security Activity Models","authors":"Inger Anne Tøndel, Jostein Jensen, Lillian Røstad","doi":"10.1109/ARES.2010.101","DOIUrl":"https://doi.org/10.1109/ARES.2010.101","url":null,"abstract":"Misuse cases and attack trees have been suggested for security requirements elicitation and threat modeling in software projects. Their use is believed to increase security awareness throughout the software development life cycle. Experiments have identified strengths and weaknesses of both model types. In this paper we present how misuse cases and attack trees can be linked to get a high-level view of the threats towards a system through misuse case diagrams and a more detailed view on each threat through attack trees. Further, we introduce links to security activity descriptions in the form of UML activity graphs. These can be used to describe mitigating security activities for each identified threat. The linking of different models makes most sense when security modeling is supported by tools, and we present the concept of a security repository that is being built to store models and relations such as those presented in this paper.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125923805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving the Transitive Access Problem for the Services Oriented Architecture","authors":"A. Karp, Jun Li","doi":"10.1109/ARES.2010.34","DOIUrl":"https://doi.org/10.1109/ARES.2010.34","url":null,"abstract":"A key goal of the Services Oriented Architecture is the composition of independently written and managed services. However, managing access to these services has proven to be a problem. A particularly difficult case involves a service that invokes another service to satisfy an initial request. In a number of cases, implementations are able to achieve either the desired functionality or the required security, but not both at the same time. We say that this service composition suffers from the transitive access problem. We show that the problem arises from a poor choice of access control mechanism, one that uses subject authentication to make access decisions, and that the problem does not occur if we use delegatable authorizations.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124980478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-Driven Application-Level Encryption for the Privacy of E-health Data","authors":"Yun Ding, K. Klein","doi":"10.1109/ARES.2010.91","DOIUrl":"https://doi.org/10.1109/ARES.2010.91","url":null,"abstract":"We propose a novel model-driven application-level encryption solution to protect the privacy and confidentiality of health data in response to the growing public concern about the privacy of health data. Domain experts specify sensitive data which are to be protected by encryption in the application’s domain model. Security experts specify the cryptographic parameters used for the encryption in a security configuration. Both specifications are highly flexible to support different granularities of data to be encrypted and appropriate security levels. Based on the domain model, our code generator for Model-Driven Software Development generates code and configuration artifacts to control the encryption and decryption logic in the application and perform database schema modifications. Our encryption infrastructure outside the database (hence, application-level encryption) utilizes the security configuration to perform encryption and decryption.The generator relieves application developers from a significant amount of migration work required by application-level encryption. Hence, our approach combines the flexibility, security and independence from database vendors of application-level encryption and the transparency of database-level encryption. Our model-driven application-level encryption has been integrated into our eHealth Framework, a comprehensive platform for the development of electronic health care solutions. Our approach can be applied to other domains as well.","PeriodicalId":360339,"journal":{"name":"2010 International Conference on Availability, Reliability and Security","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126571011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}