{"title":"A database approach for modeling and querying video data","authors":"C. Decleir, Mohand-Said Hacid, J. Kouloumdjian","doi":"10.1109/ICDE.1999.754892","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754892","url":null,"abstract":"Indexing video data is essential for providing content based access. We consider how database technology can offer an integrated framework for modeling and querying video data. We develop a data model and a rule-based query language for video content based indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information (i.e., symbolic descriptions) of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: the entities (i.e., objects) of interest in the domain of a video sequence; video frames which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. The language has a clear declarative and operational semantics.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126672146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Index merging","authors":"S. Chaudhuri, Vivek R. Narasayya","doi":"10.1109/ICDE.1999.754945","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754945","url":null,"abstract":"Indexes play a vital role in decision support systems by reducing the cost of answering complex queries. A popular methodology for choosing indexes that is adopted by database administrators as well as by automatic tools is: (a) consider poorly performing queries in the workload; (b) for each query, propose a set of candidate indexes that potentially benefits the query; and (c) choose a subset from the candidate indexes in (b). Unfortunately, such a strategy can result in significant storage and index maintenance costs. In this paper, we present a novel technique, called index merging, to address the above shortcoming. Index merging can take an existing set of indexes (perhaps optimized for individual queries in the workload) and produce a new set of indexes with significantly lower storage and maintenance overheads, while retaining almost all the querying benefits of the initial set of indexes. We present an efficient algorithm for index merging and demonstrate significant savings in index storage and maintenance through experiments on Microsoft SQL Server 7.0.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132817561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple index structures for efficient retrieval of 2D objects","authors":"C. Shahabi, Maytham Safar, Hezhi Ai","doi":"10.1109/ICDE.1999.754939","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754939","url":null,"abstract":"Many applications require the storage and management of large databases of 2D objects. One of the important functionalities required by all of these applications is the capability to find objects in a database that match a given object. We concentrate on whole matching queries, in which a query object is compared with a set of objects to find the ones that are either exactly identical or similar to the query object. There are two obstacles for efficient execution of whole-match queries. First, the general problem of comparing two 2D objects under rotation, scaling and translation invariance is known to be computationally expensive. Second, the size of the databases are growing, and hence a query should be answered without accessing all the objects in the database. To address both obstacles, we identify a set of six features that could be extracted from the objects' minimum bounding circle (MBC). These are: the radius of the MBC, the coordinates of the center of MBC, the set of touch-points on the MBC, the touch-points angle sequence, the vertex angle sequence and the start-point of the angle sequence. The features are unique per object and can be utilized for both efficiently indexing the objects and expediting the comparison between two objects. We focus on three variations of match queries: an exact shape match, an exact match with rotation, scaling or translation, and similarity shape retrieval.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"393 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117091632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast approximate query answering using precomputed statistics","authors":"V. Poosala, Venkatesh Ganti","doi":"10.1109/ICDE.1999.754932","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754932","url":null,"abstract":"Summary form only given. The last few years have witnessed a significant increase in the use of databases for complex data analysis (OLAP) applications. These applications often require very quick responses from the DBMS. However, they also involve complex queries on large volumes of data. Despite significant improvement in database support for OLAP over the last few years, most DBMSs still fall short of providing quick enough responses. We present a novel solution to this problem: we use small amounts of precomputed summary statistics of the data to answer the queries quickly, albeit approximately. Our hypothesis is that many OLAP applications can tolerate approximations in query results in return for huge response time reductions. The work is part of our efforts to build an efficient data analysis system called AQUA. We describe some of the technical problems addressed in this effort.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129190836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel algorithms for computing temporal aggregates","authors":"Jose Alvin G. Gendrano, Bruce C. Huang, Jim M. Rodrigue, Bongki Moon, R. Snodgrass","doi":"10.1109/ICDE.1999.754958","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754958","url":null,"abstract":"The ability to model the temporal dimension is essential to many applications. Furthermore, the rate of increase in database size and response time requirements has out-paced advancements in processor and mass storage technology, leading to the need for parallel temporal database management systems. In this paper, we introduce a variety of parallel temporal aggregation algorithms for a shared-nothing architecture based on the sequential \"aggregation tree algorithm\". Via an empirical study, we found that the number of processing nodes, the partitioning of the data, the placement of results and the degree of data reduction effected by the aggregation impacted on the performance of the algorithms. For distributed results placement, we discovered that time-division merging was the obvious choice. For centralized results and high data reduction, pairwise merging was preferred, regardless of the number of processing nodes, but for low data reduction, it only performed well up to 32 nodes. This led us to a centralized variant of time-division merging which was best for larger configurations having low data reduction.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130043621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data warehouses evolution: trade-offs between quality and cost of query rewritings","authors":"Amy J. Lee, A. Koeller, A. Nica, Elke A. Rundensteiner","doi":"10.1109/ICDE.1999.754935","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754935","url":null,"abstract":"Query rewriting with relaxed semantics has been proposed as a means of retaining the validity of a data warehouse (i.e., materialized queries) in a changing environment. Attributes in the query interface can be classified as essential or dispensable (if it cannot be retained) according to the query definer's preferences. Similarly, preferences for query extent can be specified, for example, to indicate whether a subset of the original result is acceptable or not. The paper discusses the trade-off between quality and cost of query rewriting.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134450430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating light-weight workflow management systems within existing business environments","authors":"Peter Muth, Jeanine Weißenfels, M. Gillmann, G. Weikum","doi":"10.1109/ICDE.1999.754944","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754944","url":null,"abstract":"Workflow management systems (WfMSs) support the efficient, largely automated execution of business processes. However, using a WfMS typically requires implementing the application's control flow exclusively by the WfMS. This approach is powerful if the control flow is specified and implemented from scratch, but it has severe drawbacks if a WfMS is to be integrated within environments with existing solutions for implementing control flow. Usually, the existing solutions are too complex to be substituted by the WfMS all at once. Hence, the WfMS must support an incremental integration, i.e. the reuse of existing implementations of control flow as well as their incremental substitution. Extending the WfMS's functionality according to future application needs, e.g. by worklist and history management, must also be possible. In particular, at the beginning of an incremental integration process, only a limited amount of a WfMS's functionality is actually exploited by the workflow application. Later on, as the integration proceeds, more advanced requirements arise and demand the customization of the WfMS to the evolving application needs. In this paper, we present the architecture and implementation of a light-weight WfMS, coined Mentor-lite, which aims to overcome the above-mentioned shortcomings of conventional WfMSs. Mentor-lite supports an easy integration of workflow functionality into an existing environment, and can be tailored to specific workflow application needs.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"08 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130933097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiversion reconciliation for mobile databases","authors":"S. Phatak, B. R. Badrinath","doi":"10.1109/ICDE.1999.754974","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754974","url":null,"abstract":"As mobile computing devices become more and more popular mobile databases have started gaining popularity. An important feature of these database systems is their ability to allow optimistic replication of data by permitting disconnected mobile devices to perform local updates on replicated data. The fundamental problem in this approach is the reconciliation problem, i.e. the problem of serializing potentially conflicting updates performed by local transactions on disconnected clients on all copies of the database. We introduce a new algorithm that combines multiversion concurrency control schemes on a server with reconciliation of updates from disconnected clients. The scheme generalizes to multiversion systems, the single version optimistic method of reconciliation, in which client transactions are allowed to commit on the server iff data items in their read sets are not updated on the server after replication.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133663154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STING+: an approach to active spatial data mining","authors":"Wei Wang, Jiong Yang, R. Muntz","doi":"10.1109/ICDE.1999.754914","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754914","url":null,"abstract":"Spatial data mining presents new challenges due to the large size of spatial data, the complexity of spatial data types, and the special nature of spatial access methods. Most research in this area has focused on efficient query processing of static data. This paper introduces an active spatial data mining approach which extends the current spatial data mining algorithms to efficiently support user-defined triggers on dynamically evolving spatial data. To exploit the locality of the effect of an update and the nature of spatial data, we employ a hierarchical structure with associated statistical information at the various levels of the hierarchy and decompose the user-defined trigger into a set of sub-triggers associated with cells in the hierarchy. Updates are suspended in the hierarchy until their cumulative effect might cause the trigger to fire. It is shown that this approach achieves three orders of magnitude improvement over the naive approach that re-evaluates the condition over the database for each update, while both approaches produce the same result without any delay. Moreover this scheme can support incremental query processing as well.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132748491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complements for data warehouses","authors":"D. Laurent, Jens Lechtenbörger, N. Spyratos, G. Vossen","doi":"10.1109/ICDE.1999.754965","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754965","url":null,"abstract":"Views over databases have recently regained attention in the context of data warehouses, which are seen as materialized views. In this setting, efficient view maintenance is an important issue, for which the notion of self-maintainability has been identified as desirable. We extend self-maintainability to (query and update) independence, and we establish an intuitively appealing connection between warehouse independence and view complements. Moreover, we study minimal complements and show how to compute them in the presence of key constraints and inclusion dependencies in the underlying databases. Taking advantage of these complements, an algorithm is outlined for the specification of independent warehouses.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127823297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}