{"title":"Declarative and procedural object-oriented views","authors":"R. Busse, Péter Fankhauser","doi":"10.1109/ICDE.1999.754940","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754940","url":null,"abstract":"One major approach to realise database integration is to adapt and merge the database schemas by defining views. When integrating object-oriented databases, the views need to adequately support object identity and methods. View objects need to be identified on the basis of the objects they have been derived from. Methods must be callable from the query processor without impeding query optimisation. Our view system for ODMG-93 supports both declarative and procedural integration of object-oriented databases. It provides flexible integration semantics without sacrificing the optimisation potential.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123626103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time data access control on B-tree index structures","authors":"Tei-Wei Kuo, Chih-Hung Wei, K. Lam","doi":"10.1109/ICDE.1999.754962","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754962","url":null,"abstract":"The paper proposes methodologies to control the access of B-tree-indexed data in a batch and real time fashion. Algorithms are proposed to insert, query, delete, and rebalance B-tree-indexed data based on non real time algorithms (P.M. Kerttu et al., 1996) and the idea of priority inheritance (L. Sha et al., 1990). We propose methodologies to reduce the number of disk I/Os to improve the system performance without introducing more priority inversion. The performance of our methodologies was evaluated by a series of experiments, for which we have some encouraging results.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130333886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An agent-based approach to extending the native active capability of relational database systems","authors":"Lijuan Li, Sharma Chakravarthy","doi":"10.1109/ICDE.1999.754954","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754954","url":null,"abstract":"Event-condition-action (or ECA) rules are used to capture active capability. While a number of research prototypes of active database systems have been built, ECA rule capability in relational DBMSs is still very limited. We address the problem of turning a traditional database management system into a full-fledged active database system without changing the underlying system. The advantages of this approach are: transparency; ability to and active capability without changing the client programs; retain relational DBMS's underlying functionality; and persistence of ECA rules using the native database functionality. We describe how complete active database semantics can be supported on an existing SQL server (Sybase, in our case) by adding a mediator, termed ECA Agent, between the SQL server and the clients. ECA rules are fully supported through the ECA Agent without changing applications or the SQL server. Composite events are detected in the ECA Agent and actions are invoked in the SQL server. Events are persisted in the native database system. ECA Agent is designed to connect to SQL server by using Sybase connectivity products. The architecture, design, and implementation details are presented.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116541541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the access time performance of serpentine tape drives","authors":"O. Sandstå, Roger Midtstraum","doi":"10.1109/ICDE.1999.754970","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754970","url":null,"abstract":"The paper presents a general model for estimating access times of serpentine tape drives. The model is used to schedule I/O requests in order to minimize the total access time. We propose a new scheduling algorithm, Multi-Pass Scan Star (MPScan*), which makes good utilization of the streaming capability of the tape drive and avoids the pitfalls of naive multi-pass scan algorithms and greedy algorithms like Shortest Locate Time First. The performance of several scheduling algorithms have been simulated for problem sizes up to 2048 concurrent I/O requests. For scheduling of two to 1000 I/O requests, MPScan* gives equal or better results than any other algorithm, and provides up to 85 percent reduction of the total access time. All results have been validated by extensive experiments on Tandberg MLRI and Quantum DLT2000 drives.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133604686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mining optimized support rules for numeric attributes","authors":"R. Rastogi, Kyuseok Shim","doi":"10.1109/ICDE.1999.754926","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754926","url":null,"abstract":"Generalizes the optimized support association rule problem by permitting rules to contain disjunctions over uninstantiated numeric attributes. For rules containing a single numeric attribute, we present a dynamic programming algorithm for computing optimized association rules. Furthermore, we propose a bucketing technique for reducing the input size, and a divide-and-conquer strategy that improves the performance significantly without sacrificing optimality. Our experimental results for a single numeric attribute indicate that our bucketing and divide-and-conquer enhancements are very effective in reducing the execution times and memory requirements of our dynamic programming algorithm. Furthermore, they show that our algorithms scale up almost linearly with the attribute's domain size as well as with the number of disjunctions.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128520783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using codewords to protect database data from a class of software errors","authors":"P. Bohannon, R. Rastogi, S. Seshadri, A. Silberschatz, S. Sudarshan","doi":"10.1109/ICDE.1999.754943","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754943","url":null,"abstract":"Increasingly, for extensibility and performance, special-purpose application code is being integrated with database system code. Such application code has direct access to database system buffers and, as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect data from corruption required system calls, and their performance depended on the details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance and impact on concurrency. These techniques are implemented in the Dali/spl acute/ main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write, and then gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption, and may also prove useful when resolving problems caused by incorrect data entry and other logical errors.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133576303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing semistructured data mediators with document type definitions","authors":"Y. Papakonstantinou, P. Velikhov","doi":"10.1109/ICDE.1999.754916","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754916","url":null,"abstract":"Mediation is an important application of XML. The MIX mediator uses Document Type Definitions (DTDs) to assist the user in query formulation and query processors in running queries more efficiently. We provide an algorithm for inferring the view DTD from the view definition and the source DTDs. We develop a metric of the quality of the inference algorithm's view DTD by formalizing the notions of soundness and tightness. Intuitively, tightness is similar to precision, i.e., it deteriorates when \"many\" objects described by the view DTD can never appear as content of the view. In addition we show that DTDs have some inherent deficiencies that prevent the development of tight DTDs. We propose \"DTDs with specialization\" as a way to resolve this problem.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125853735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maintaining data cubes under dimension updates","authors":"Carlos A. Hurtado, A. Mendelzon, A. Vaisman","doi":"10.1109/ICDE.1999.754950","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754950","url":null,"abstract":"OLAP systems support data analysis through a multidimensional data model, according to which data facts are viewed as points in a space of application-related \"dimensions\", organized into levels which conform to a hierarchy. The usual assumption is that the data points reflect the dynamic aspect of the data warehouse, while dimensions are relatively static. However, in practice, dimension updates are often necessary to adapt the multidimensional database to changing requirements. Structural updates can also take place, like addition of categories or modification of the hierarchical structure. When these updates are performed, the materialized aggregate views that are typically stored in OLAP systems must be efficiently maintained. These updates are poorly supported (or not supported at all) in current commercial systems, and have received little attention in the research literature. We present a formal model of dimension updates in a multidimensional model, a collection of primitive operators to perform them, and a study of the effect of these updates on a class of materialized views, giving an algorithm to efficiently maintain them.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121853307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using XML in relational database applications","authors":"S. Malaika","doi":"10.1109/ICDE.1999.754920","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754920","url":null,"abstract":"The author reviews relational database XML features and describes their use in database applications. Aspects that will be considered include the creation, validation, transformation, storage and retrieval of XML documents, the inclusion of existing and new relational data in XML documents, and the impact of XML Links.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130076817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The hybrid tree: an index structure for high dimensional feature spaces","authors":"K. Chakrabarti, S. Mehrotra","doi":"10.1109/ICDE.1999.754960","DOIUrl":"https://doi.org/10.1109/ICDE.1999.754960","url":null,"abstract":"Feature-based similarity searching is emerging as an important search paradigm in database systems. The technique used is to map the data items as points into a high-dimensional feature space which is indexed using a multidimensional data structure. Similarity searching then corresponds to a range search over the data structure. Although several data structures have been proposed for feature indexing, none of them is known to scale beyond 10-15 dimensional spaces. This paper introduces the hybrid tree-a multidimensional data structure for indexing high-dimensional feature spaces. Unlike other multidimensional data structures, the hybrid tree cannot be classified as either a pure data partitioning (DP) index structure (such as the R-tree, SS-tree or SR-tree) or a pure space partitioning (SP) one (such as the KDB-tree or hB-tree); rather it combines the positive aspects of the two types of index structures into a single data structure to achieve a search performance which is more scalable to high dimensionalities than either of the above techniques. Furthermore, unlike many data structures (e.g. distance-based index structures like the SS-tree and SR-tree), the hybrid tree can support queries based on arbitrary distance functions. Our experiments on \"real\" high-dimensional large-size feature databases demonstrate that the hybrid tree scales well to high dimensionality and large database sizes. It significantly outperforms both purely DP-based and SP-based index mechanisms as well as linear scans at all dimensionalities for large-sized databases.","PeriodicalId":236128,"journal":{"name":"Proceedings 15th International Conference on Data Engineering (Cat. No.99CB36337)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129174873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}