{"title":"A Classification of Object-Relational Impedance Mismatch","authors":"Christopher Ireland, D. Bowers, M. Newton, K. Waugh","doi":"10.1109/DBKDA.2009.11","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.11","url":null,"abstract":"Object and relational technologies are grounded in different paradigms. Each technology mandates that those who use it take a particular view of a universe of discourse. Incompatibilities between these views manifest as problems of an object-relational impedance mismatch. In this paper we propose a conceptual framework for the problem space of object-relational impedance mismatch and consequently distinguish four kinds of impedance mismatch. We show that each kind of impedance mismatch needs to be addressed using a different object-relational mapping strategy. Our framework provides a mechanism to explore issues of fidelity, integrity and completeness in the design and implementation of existing and new strategy choices. Our framework will be of benefit to standards bodies, tool vendors, designers and programmers as it will provide them with new insights into how to address problems of an object-relational impedance mismatch both at the most appropriate levels of abstraction and in the most appropriate way.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121107076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Input Buffers for Streaming XSLT Processing","authors":"J. Dvořáková, F. Zavoral","doi":"10.1109/DBKDA.2009.25","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.25","url":null,"abstract":"We present a buffering streaming engine for processing top-down XSLT transformations. It consists of an analyzer and a transformer.The analyzer examines given top-down XSLT and XSD, and generates fragments which identify parts of XSD need to be buffered when XSLT is applied. The fragments are passed to the transformer which processes XSLT on an input XML document conforming to XSD. It uses auxiliary memory buffers to store temporary data and buffering is controlled according to the fragments. We describe implementation of the engine within the Xord framework and provide evaluation tests which show that the new engine is much more memory-efficient comparing to the common XSLT processors.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116594091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Range-Sum Queries along Dimensional Hierarchies in Data Cubes","authors":"T. Lauer, Dominic Mai, P. Hagedorn","doi":"10.1109/DBKDA.2009.18","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.18","url":null,"abstract":"Fast response to users’ query and update requests continues to be one of the key requirements for OLAP systems. We outline the generalization of a space-efficient data structure, which makes it particularly suited for cubes with hierarchically structured dimensions. For a large class of range-sum queries – roll-up and drill-down along dimension hierarchies – the structure requires only a constant number of cell accesses per query on average, while offering an expected poly-logarithmic update performance.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130386617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Connecting Database Applications to Ontologies","authors":"Chimène Fankam, Stéphane Jean, G. Pierra, Ladjel Bellatreche, Y. A. Ameur","doi":"10.1109/DBKDA.2009.22","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.22","url":null,"abstract":"Most database applications are designed according the ANSI/SPARC architecture. When it is used a large amount of semantics of data may be lost during the transformation from the conceptual model to a logical model. As a consequence exchanging/integrating various databases or generating user interfaces for data access become difficult. Ontologies seem an interesting solution to solve these problems, since they allow making explicit the semantics of data. In this paper, we propose an ontology-based approach for designing database applications, and then, for representing explicitly the semantics of data within the database. It consists in extending the ANSI/SPARC architecture with the ontological level. Note that this extension may also be added to existing applications designed according to the ANSI/SPARC architecture, since it preserves an upward compatibility.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130969892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MyMIDP: An JDBC Driver for Accessing MySQL from Mobile Devices","authors":"Hagen Höpfner, Jörg Schad, S. Wendland, Essam Mansour","doi":"10.1109/DBKDA.2009.33","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.33","url":null,"abstract":"Cell phones are no longer merely used to make phone calls or to send short or multimedia messages. They more and more become information systems clients. Recent developments in the areas of mobile computing, wireless networks and information systems provide access to data at almost every place and anytime by using this kind of lightweight mobile device. But even though mobile clients support the Java Mobile Edition or the .NET Micro Framework, most information systems for mobile clients require a middle-ware that handles data communication. Java’s JDBC provides a standard way to access databases in Java,but this interface is missing in Java ME. In this paper we present our implementation of an MIDP-based Java ME driver for My SQL similar to JDBC that allows direct communication of MIDP applications to My SQL servers without a middleware.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132138831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying Map-Reduce Paradigm for Parallel Closed Cube Computation","authors":"Kuznecov Sergey, K. Yury","doi":"10.1109/DBKDA.2009.32","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.32","url":null,"abstract":"After many years of studies, efficient data cube computation remains an open field of research due to ever-growing amounts of data. One of the most efficient algorithms (quotient cubes) is based on the notion of cube cells closure, condensing groups of cells into equivalence classes, which allows to losslessly decrease amount of data to be stored. Recently developed parallel computation framework Map-Reduce lead to a new wave of interest to large-scale algorithms for data analysis (and to so called cloud-computing paradigm). This paper is devoted to applying such approaches to data and computation intensive task of OLAP-cube computation. We show that there are two scales of Map-Reduce applicability (for local multicore or multiprocessor server and multi-server clusters), present cube construction and query processing algorithms used at the both levels. Experimental results demonstrate that algorithms are scalable.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"85 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128840558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Adaptive Synchronization Policy for Harvesting OAI-PMH Repositories","authors":"N. Adly","doi":"10.1109/DBKDA.2009.9","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.9","url":null,"abstract":"Metadata harvesting requires timely propagation of up-to-date information from thousands of Repositories over a wide area network. It is desirable to keep the data as fresh as possible while observing the overhead on the Harvester. An important dimension to be considered is that Repositories vary widely in their update patterns; they may experience different update rates at different times or unexpected changes to update patterns. In this paper, we define data Freshness metrics and propose an adaptive algorithm for the synchronization of the Harvester with the Repositories. The algorithm is based on meeting a desired level of Freshness while incurring the minimum overhead on the Harvester. We present a comparison between different policies for the synchronization within the framework devised. It is shown that the proposed policy outperform the other policies, especially for heterogeneous update patterns.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128318581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Log Based Replication Model for Optimizing Heterogeneous DBMS Interaction","authors":"Ali Farahmand Nejad, S. Kharazmi, Shahab Bayati, S. Golmohammadi, H. Abolhassani","doi":"10.1109/DBKDA.2009.24","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.24","url":null,"abstract":"The growth of database application usage requires Database Management Systems (DBMS) that are accessible, reliable, and dependable. One approach to handle these requirements is replication mechanism.Replication mechanism can be divided into various categories. Some related works consider two categories for replication mechanisms: heterogeneous and homogenous however majority of them classify them in three groups: physical, trigger-based and log- based schema. Log-based replication mechanisms are the most widely used category among DBMS vendors.Adapting such approach for heterogeneous systems is a complex task, because of lack of log understanding in the other end. Semantic technologies provide a suitable framework to address heterogeneity problems in large scale and dynamic resources. In this paper we introduce a new approach to tackle replication problem in a heterogeneous environment by utilizing ontologies.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"376 1-6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131990984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IRCDB: A Database of Inter-residues Contacts in Protein Chains","authors":"Peng Chen, Chunmei Liu, L. Burge, Mahmood Mohammad, W. Southerland, C.S. Gloster, Bing Wang","doi":"10.1109/DBKDA.2009.27","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.27","url":null,"abstract":"In protein structure prediction, identifying the inter-residue contacts is a very important task to understand the mechanism of complicated protein folding and therefore to predict three-dimensional structures of proteins. So far, many methods were developed to predict inter-residue contacts. However, no special database consisting of detailed inter-residue contacts for each PDB protein chain has been built. Our database of inter-residue contacts in protein chains consists of protein chains extracted from PDB database. For each protein chain, we analyzed its inter-residue contacts, classified it into one class, and obtained several groups of inter-residues contacts.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116859733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Source-Aware Repairs for Inconsistent Databases","authors":"N. Viswanath, Rajshekhar Sunderraman","doi":"10.1109/DBKDA.2009.15","DOIUrl":"https://doi.org/10.1109/DBKDA.2009.15","url":null,"abstract":"The problem of extracting consistent query answers from an inconsistent database has been mainly approached from two directions: “repairing” the database or rewriting queries so that only consistent answers are returned. Logic programming with explicit negation has been widely used in order to specify repairs such that each answer set of the repair program corresponds to a repair. In this paper, we explore the problem of obtaining so called “preferred repairs” from a database that is both inconsistent and incomplete based on preferences for the source from which the information is obtained. We show how a preferred repair might be specified using logic programs when source information is available.","PeriodicalId":231150,"journal":{"name":"2009 First International Confernce on Advances in Databases, Knowledge, and Data Applications","volume":"424 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115643833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}