{"title":"A New Algorithm for the Containment Problem of Conjunctive Queries with Safe Negation","authors":"V. Felea","doi":"10.1109/DBKDA.2010.42","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.42","url":null,"abstract":"Many queries about real databases have a particular form, e.g., the negated part consists of one single literal, or they contain just a single binary relation. For such a particular class of queries, it is useful to construct algorithms for the containment problem that are better than those for the whole class of queries. The paper addresses the problem of query containment for conjunctive queries with the safe negation property. A new algorithm to test the containment of two queries is given. Several aspects of the time complexity of the proposed algorithm are specified. From this point of view, the new algorithm proves to be better than the previous ones for some classes of queries.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132431994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Queries Help to Validate Database Design?","authors":"C. Kop","doi":"10.1109/DBKDA.2010.24","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.24","url":null,"abstract":"The design of a conceptual database schema is a critical task. The more methods a conceptual database designer has in order to communicate with the end user, the better it is for the quality of the conceptual schema. This paper focuses on the question: Can queries be used for checking missing concepts in a conceptual database schema? The usefulness of queries for schema checking will be presented in this paper.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130009387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Archiving with Athamas: A Framework for Optimized Handling of Domain Knowledge","authors":"Eric R. Schendel, A. Mahdy","doi":"10.1109/DBKDA.2010.30","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.30","url":null,"abstract":"Continuous changes in the requirements of a multi-tiered project can significantly deteriorate the maintenance process. This is mainly due to incompatibility between the application design and newly introduced requirements. This paper presents a potential application framework, Athamas, which provides a scalable way to agilely adapt to changing data requirements. A currently functional initiative of Athamas is to allow domain knowledge generation from application components without users or business processes dictating data handling and integration requirements. This development allows the framework to be used 1) today, as a use-case alternative to relational databases for archiving domain knowledge into storage containers, and 2) in the future, for optimally extracting the knowledge from the storage containers. Athamas-based applications are evaluated against applications using MySQL's MyISAM and ARCHIVE database storage engines for data archival purposes. Athamas with a zlib compression layer significantly reduces storage utilization, to 3% of MyISAM and 24% of ARCHIVE, along with publishing-time improvements by factors of 4.9 and 2.6, respectively. Using Athamas with bzip2 compression, storage usage drops to 8% of ARCHIVE usage, but at the cost of up to 4.5 times slower performance. The paper also discusses future enhancements to Athamas allowing users to efficiently query domain knowledge.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131424774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interoperable and Easy-to-Use Web Services for the Bioinformatics Community - A Case Study","authors":"M. Åsberg, L. Strömbäck","doi":"10.1109/DBKDA.2010.22","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.22","url":null,"abstract":"In the field of bioinformatics, there exists a large number of web service providers and many competing standards regarding how data should be represented and interfaced. However, these web services are often hard to use for a non-expert programmer and it can be especially hard to see how different services can be used together to create scientific workflows. In this paper we have performed a literature study to identify problems involved in developing interoperable web services for the bioinformatics community and steps taken by other projects to, at least in part, address them. We have also conducted a case study by developing our own bioinformatic web service to further investigate these problems.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122942054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Categorization of News Articles: A Model Based on Discriminative Term Extraction Method","authors":"Abhishek Sanwaliya, K. Shanker, S. Misra","doi":"10.1109/DBKDA.2010.18","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.18","url":null,"abstract":"Categorization techniques make a major contribution to building automated systems capable of fulfilling the needs of decision-making tasks for better organization and management of resources. The objective of this research is to assess the relative performance of some well-known classification methods. Among the proposed approaches, our discriminative term extraction (DTE) based combined naïve Bayes and K-NN (NB-KNN) approach has the advantage of a short learning time, due to its computational efficiency, together with comparatively high accuracy. We designed a DTE-based NB-KNN model for multi-class, single-label text categorization. Our experiments suggest that data characteristics have considerable impact on the performance of classification methods. The results obtained on the Reuters-21578 corpus show that NB-KNN consistently outperforms the single naïve Bayes and K-NN classifiers on precision, recall, and F1 scores. The results of the study suggest designing a classification system in which several classification methods are combined to increase the reliability, consistency, and accuracy of the categorization.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132211490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Linked Open Data as a Web-Scale Database","authors":"M. Hausenblas, Marcel Karnstedt","doi":"10.1109/DBKDA.2010.23","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.23","url":null,"abstract":"While Linked Open Data (LOD) has gained much attention in recent years, an understanding of the requirements and challenges concerning its usage from a database perspective is still lacking. We argue that such a perspective is crucial for increasing the acceptance of LOD. In this paper, we compare the characteristics and constraints of relational databases with those of LOD, trying to understand the latter as a Web-scale database. We propose LOD-specific requirements beyond the established database rules and highlight research challenges, aiming to combine future efforts of the database research community and the Linked Data research community in this area.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127465108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Different Solutions for Solving the Optimization Problem of Large Join Queries","authors":"D. Petković","doi":"10.1109/DBKDA.2010.1","DOIUrl":"https://doi.org/10.1109/DBKDA.2010.1","url":null,"abstract":"The article explores the optimization of queries using genetic algorithms and compares it with the conventional query optimization component. Genetic algorithms (GAs), a data mining technique, have been shown to be promising for ordering the join operations in large join queries. In practice, a genetic algorithm has been implemented in the PostgreSQL database system. Using this implementation, we compare the conventional component for exhaustive search with the corresponding module based on a genetic algorithm. Our results show that the use of a genetic algorithm is a viable solution for the optimization of large join queries, i.e., that such a module outperforms the conventional query optimization component for queries with more than 12 join operations.","PeriodicalId":273177,"journal":{"name":"2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128484697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}