{"title":"A development of multi-SSO authentication and RBAC model in the distributed systems","authors":"S. Fugkeaw, P. Manpanpanich, S. Juntapremjitt","doi":"10.1109/ICDIM.2007.4444239","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444239","url":null,"abstract":"This paper proposes the design and development of SSO two factor authentication and RBAC authorization in the multiple applications and multi-domain environment. The authentication and authorization are based on the X.509 public key certificate and privilege management infrastructure (PMI). In our model, the security assertion markup language (SAML) is adopted to support the exchange of authentication and authorization information. SAML enables the single sign-on (SSO) authentication in the federation environment to be more manageable and scalable. This is required for the distributed computing systems where the strong authentication and dynamic authorization are needed. Finally, we presented our ongoing implementation status and demonstrated that our proposed model serves as another practical solution in implementing the dynamic RBAC policy management in the multiple SSO and PKI domains.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"2011 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114470039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consideration of experimental evaluation about encrypted replica update process","authors":"Kazuki Takayama, D. Kobayashi, H. Yokota","doi":"10.1109/ICDIM.2007.4444280","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444280","url":null,"abstract":"The secure storage systems adopting the encrypt-on-disk scheme, in which files are stored in cipher for efficient data transmission, need to re-encrypt files with new cryptographic keys when a revocation occurs. There are two re-encryption methods, namely active revocation in which the re-encryption is immediately performed and lazy revocation in which the re-encryption is delayed until the file is updated. There is the trade-off between performance and security because active revocation has the expense of immediate re-encryption, while lazy revocation is vulnerable during its re-encryption delay. We consider the environment in which re-encrypted file is pre-computed by using backup data in a parallel storage system effective for this issue. However, the performance of update is decreased on account of the difference of keys used in primary and backup. In this paper, we evaluate a method in which the differential data re-encrypted for backup are not written to the file but be kept on the memory in different key environment, and evaluate the different key environment in parallel storage by experiment.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114679750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long term digital preservation - An end user’s perspective","authors":"Julie Doyle, H. Viktor, E. Paquet","doi":"10.1109/ICDIM.2007.4444215","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444215","url":null,"abstract":"The issue of long term preservation of digital data has, in recent years, become a critical issue with many diverse groups and organisations recognising the need to preserve digital documents before they fall victim to digital obsolescence. However, there is a distinct lack of discussion regarding the needs of future end users of such preserved documents. This is surprising, given that it is the end user who ultimately determines what information should be archived and in what way. Furthermore, it is essential that a preserved document retains its authenticity and usability through time and is easily interpreted by future end users. To this extent, we present in this paper details of our case study: an emulation framework to preserve 2D and 3D anthropometric data. We describe the testing of this emulation environment, by anthropometric experts, against the original environment and compare the two based on a given set of usability and authenticity criteria.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127554261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context-aware security service in RFID/USN environments using MAUT and extended GRBAC","authors":"Kiyeal Lee, Seokhwan Yang, Sungik Jun, Mokdong Chung","doi":"10.1109/ICDIM.2007.4444240","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444240","url":null,"abstract":"This paper proposes a context-aware security service providing multiple authentications and authorization from a Security Level which is decided dynamically in a context-aware environment. It helps developers build secure services efficiently. A security service in a dynamic environment uses Multi-Attribute Utility Theory and extended Generalized Role-Based Access Control. The system uses attribute values in GRBAC to calculate the Security Level, and extend the GRBAC. We expect this model to be widely used in providing flexible security services in a heterogeneous network.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128070106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhotoMot, collaborative image management. Interacting with use traces","authors":"Elöd Egyed-Zsigmond, Zoltán Iszlai, Sonia Lajmi","doi":"10.1109/ICDIM.2007.4444208","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444208","url":null,"abstract":"PhotoMot is an online collaborative image management system which traces the actions of users and provides them assistance based on the capitalized experience according to the case based reasoning paradigm. How to imply the user in the assistance a system provides? PhotoMot shows the trace of user actions being built and lets users interact on the manner they are followed. This way they can help the system provide pertinent assistance. Manual image annotation and keyword based image search is a difficult and hardly formalizable problem. In this paper we present a model and a system which addresses this issue. The system provides basic online image gallery features: uploading single or zipped images, describing images with keywords, managing galleries ... In addition the system helps image annotation tracing the actions of the user in order to propose additional keywords and images as well in the description as in the search phase. This help is provided by the capitalization and the reuse of users' experience based on the case based reasoning paradigm. 
After a short introduction we describe our use trace model, the assistance strategies and present the developed prototype.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124932828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardening digital signatures against untrusted signature software","authors":"F. Buccafurri, G. Lax","doi":"10.1109/ICDIM.2007.4444217","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444217","url":null,"abstract":"Digital signature is nowadays a consolidated machinery allowing the management of electronic documents with full legal power. In this scenario, digital signature represents thus the key issue on every process of document de- materialization toward which both private and public organizations, as well as simple citizens, are moving quickly. Unfortunately, digital signature suffers from a severe vulnerability, directly deriving from the potential untrustworthy of the platform where the signature generation process runs. Indeed, the usage of secure smart cards does not eliminate the necessity of interfacing them with the PC. allowing the attacker to poison the PC itself to obtain signed documents with no intention from the subscriber. The problem is inherently unsolvable, provided that the current signature mechanism, as well as its legal value, are maintained. In this paper we give a solution with nice backward compatibility properties, working as a full solution in a restricted (but probable) set of untrustworthy cases, and mitigating the problem in the more general case.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124489375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality of IT support for corporate environmental management: A paradigmatic framework","authors":"S. Corea, M. Levy","doi":"10.1109/ICDIM.2007.4444260","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444260","url":null,"abstract":"Organisations face increasing pressures to deploy appropriate environmental sustainability practices and reduce their ecological impacts. The use of information technologies (IT) offer much potential for supporting such practices, through data processing and reporting mechanisms, as well as being a platform for business redesign. This theoretical paper argues that in order to evaluate the quality of IT support in this area, critical consideration should be given to the 'environmental paradigm' that is serving as the basis of evaluation i.e. the underlying assumptions/model of what constitutes environmental sustainability in the modern enterprise. Drawing from a synthesis of relevant literature, this paper presents a framework that distinguishes three environmental paradigms, and the salient criteria that they suggest as the basis for assessing the quality of IT support in this arena. The criteria seen to be important are: eco-efficiency, eco-efficacy and eco-effectiveness.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130502291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The SEWASIE MAS for semantic search","authors":"D. Beneventano, S. Bergamaschi, F. Guerra, M. Vincini","doi":"10.1109/ICDIM.2007.4444321","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444321","url":null,"abstract":"The capillary diffusion of the Internet has made available access to an overwhelming amount of data, allowing users having benefit of vast information. However, information is not really directly available: internet data are heterogeneous and spread over different places, with several duplications, and inconsistencies. The integration of such heterogeneous inconsistent data, with data reconciliation and data fusion techniques, may therefore represent a key activity enabling a more organized and semantically meaningful access to data sources. Some issues are to be solved concerning in particular the discovery and the explicit specification of the relationships between abstract data concepts and the need for data reliability in dynamic, constantly changing network. Ontologies provide a key mechanism for solving these challenges, but the web’s dynamic nature leaves open the question of how to manage them. Many solutions based on ontology creation by a mediator system have been proposed: a unified virtual view (the ontology) of the underlying data sources is obtained giving to the users a transparent access to the integrated data sources [1, 2, 3]. The centralized architecture of a mediator system presents several limitations, emphasized in the hidden web [4]: firstly, web data sources hold information according to their particular view of the matter, i.e. each of them uses a specific ontology to represent its data. Also, data sources are usually isolated, i.e. they do not share any topological information concerning the content or structure of other sources. Our proposal is to develop a network of ontology-based mediator systems, where mediators are not isolated from each other and include tools for sharing and mapping their ontologies. 
In this paper, we describe the use of a multi-agent architecture to achieve and manage the mediators network. The functional architecture is composed of single peers (implemented as","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133323899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A three-layered XML-based context model","authors":"S. Mostéfaoui","doi":"10.1109/ICDIM.2007.4444302","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444302","url":null,"abstract":"Context modelling is a critical step in the design of context-aware applications. Indeed, a proper modelling approach can provide mechanisms enabling accurate and high-quality contextual information in pervasive environments. In this paper, we present our generic three-layered data model for context. The conceptual and physical layers built on top of a XML schema layer constitute the backbone of our approach.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"12 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132375842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient support management tool for distributed data mining environments","authors":"Nhien-An Le-Khac, Lamine M. Aouad, Mohand Tahar Kechadi","doi":"10.1109/ICDIM.2007.4444235","DOIUrl":"https://doi.org/10.1109/ICDIM.2007.4444235","url":null,"abstract":"Today, a deluge of data is collected from different fields. These massive amounts of data which are often geographically distributed and owned by different organisations are being mined. As consequence, a large mount of knowledge is being produced. This causes the problem of efficient knowledge management in distributed data mining (DDM). The main aim of DDM is to exploit fully the benefit of distributed data analysis while minimising the communication overhead. Existing DDM techniques perform partial analysis of local data at individual sites and then generate global models by aggregating the local results. These two steps are not independent since naive approaches to local analysis may produce incorrect and ambiguous global data models. To overcome this problem, we present a tool called \"knowledge map \" to easily and efficiently represent knowledge built from mining process in a large scale distributed platform such as Grid. This will also facilitate the integration/coordination of local mining processes and existing knowledge to increase the accuracy of the final models. 
This approach is being tested on very large datasets.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131824370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}