{"title":"QBISM: extending a DBMS to support 3D medical images","authors":"M. Arya, William F. Cody, Christos Faloutsos, Joel E. Richardson, Arthur Toya","doi":"10.1109/ICDE.1994.283046","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283046","url":null,"abstract":"Describes the design and implementation of QBlSM (Query By Interactive, Spatial Multimedia), a prototype for querying and visualizing 3D spatial data. The first application is in an area in medical research, in particular, Functional Brain Mapping. The system is built on top of the Starburst DBMS extended to handle spatial data types, specifically, scalar fields and arbitrary regions of space within such fields. The authors list the requirements of the application, discuss the logical and physical database design issues, and present timing results from their prototype. They observed that the DBMS' early spatial filtering results in significant performance savings because the system response time is dominated by the amount of data retrieved, transmitted, and rendered.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115490663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approximate analysis of real-time database systems","authors":"J. Haritsa","doi":"10.1109/ICDE.1994.283008","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283008","url":null,"abstract":"During the past few years, several studies have been made on the performance of real-time database systems with respect to the number of transactions that miss their deadlines. These studies have used either simulation models or database testbeds as their performance evaluation tools. We present a preliminary analytical performance study of real-time transaction processing. Using a series of approximations, we derive simple closed-form solutions to reduced real-time database models. Although quantitatively approximate, the solutions accurately capture system sensitivity to workload parameters and indicate conditions under which performance bounds are achieved.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124910524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting partial data accesses to replicated data","authors":"P. Triantafillou, Feng Xiao","doi":"10.1109/ICDE.1994.283006","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283006","url":null,"abstract":"Partial data access operations occur frequently in distributed systems. This paper presents new approaches for efficiently supporting partial data access operations to replicated data. We propose the replica modularization (RM) technique which suggests partitioning replicas into modules, which now become the minimum unit of data access. RM is shown to increase the availability of both partial read and write operations and improves performance by reducing access delays and the size of data transfers occurring during operation execution on replicated data. In addition, we develop a new module-based protocol (MB) in which different replication protocols are used to access different sets of replicas, with each replica storing different modules. The instance of MB we discuss here is a hybrid of the ROWA (Read One Write All) protocol and the MQ (Majority Quorum) protocol. MB allows a trade-off between storage costs and availability. We show that MB can achieve almost as high availability as the MQ protocol, but with considerably smaller storage costs.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116458878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Query optimization strategies for browsing sessions","authors":"M. Kersten, M.F.N. deBoer","doi":"10.1109/ICDE.1994.283072","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283072","url":null,"abstract":"This paper describes techniques and experimental results to obtain response time improvement for a browsing session, i.e. a sequence of interrelated queries to locate a subset of interest. The optimization technique exploits symbolic analysis of the query interdependencies and retention of (partial) query answers. A prototype browsing session optimizer (BSO) has been constructed that runs as a front-end to the Ingres relational system. Based on the experiments reported, we propose to extend (existing) DBMSs with a mechanism to keep and reuse small answers by default. Such investments quickly pay off in sessions with interrelated queries.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122428484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge-based handling of design expertise","authors":"P. Morizet-Mahoudeaux, Einoshin Suzuki, S. Ohsuga","doi":"10.1109/ICDE.1994.283053","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283053","url":null,"abstract":"Research issues in the domain of AI for design can be organized in three categories: decision making, representation and knowledge handling. In the area of knowledge handling, this paper addresses issues concerning the management of design experience to guide a priori the generation of candidate solutions. The approach is based on keeping the trace of a previous design experience as a hierarchical knowledge base. A level in the hierarchy can be viewed as a level of granularity of the description of the design process. A general framework for defining a partial order function between the granularity levels in the knowledge bases of design expertise is proposed. It is then possible to compute the sets of the elements belonging to smaller granularity levels, which are linked to any component of the hierarchy. Thus, it makes it possible to compute the level in the hierarchy that can be reused without modification for the design of a new product. Computation of the appropriate level is mainly based on matching the data corresponding to the new requirements with these sets. The approach has been tested by using a multiple expert systems structure based on using interactively two systems, an expert system development tool for design, KAUS, and an expert system development tool for diagnosing engineering processes, SUPER. The intrinsic properties of SUPER have also been used for improving the design procedure when qualitative and quantitative knowledge is involved.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121897231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing and synthesizing integrity checking methods for deductive databases","authors":"M. Celma, Carlos García, L. Mota-Herranz, H. Decker","doi":"10.1109/ICDE.1994.283033","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283033","url":null,"abstract":"We compare and synthesize different methods for integrity checking in deductive databases. First, we state simplified integrity checking for deductive databases independently of the particular strategy used by different methods found in the literature. In accordance with this statement, we classify integrity checking methods into two main groups: methods with a generation phase without fact access and methods with a generation phase with fact access. Then, we propose an implementation scheme (a metaprogram) where the differences and similarities among the methods can be pointed out. In this common implementation framework, we compare the methods; this comparison is based on the number of facts accessed by each of them during integrity checking. Finally and from the analysis of the results, we define a convergence method which synthesizes some different features from several methods.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129481425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of reorganization overhead in log-structured file systems","authors":"J. T. Robinson, P. Franaszek","doi":"10.1109/ICDE.1994.283000","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283000","url":null,"abstract":"In a log-structured file system (LFS), in general each block written to disk causes another disk block to become invalid data, resulting in one block of free space. Over time free disk space becomes highly fragmented, and a high level of dynamic reorganization may be required to coalesce free blocks into physically contiguous areas that subsequently can be used for logs. By consuming available disk bandwidth, this reorganization can degrade system performance. In a segmented disk LFS organization, the copy-and-compact reorganization method reads entire segments and then writes back all valid blocks. Other methods, suggested by earlier work on reduction of storage fragmentation for non-LFS disks, may access far fewer blocks (at the cost of increased CPU time). An analytic model is used to evaluate the effects on available disk bandwidth of dynamic reorganization, as a function of the read/write ratio, storage utilization, and degree of data movement required by dynamic reorganization for steady-state operation. It is shown that decreasing reorganization overhead can have dramatic effects on available disk bandwidth.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124551736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantics-based multilevel transaction management in federated systems","authors":"A. Deacon, H. Schek, G. Weikum","doi":"10.1109/ICDE.1994.283066","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283066","url":null,"abstract":"A federated database management system (FDBMS) is a special type of distributed database system that enables existing local databases, in a heterogeneous environment, to maintain a high degree of autonomy. One of the key problems in this setting is the coexistence of local transactions and global transactions, where the latter access and manipulate data of multiple local databases. In modeling FDBMS transaction executions the authors propose a more realistic model than the traditional read/write model; in their model a local database exports high-level operations which are the only operations distributed global transactions can execute to access data in the shared local databases. Such restrictions are not unusual in practice as, for example, no airline or bank would ever permit foreign users to execute ad hoc queries against their databases for fear of compromising autonomy. The proposed architecture can be elegantly modeled using the multilevel nested transaction model for which a sound theoretical foundation exists to prove concurrent executions correct. A multilevel scheduler that is able to exploit the semantics of exported operations can significantly increase concurrency by ignoring pseudo conflicts. A practical scheduling mechanism for FDBMSs is described that offers the potential for greater performance and more flexibility than previous approaches based on the read/write model.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129837678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active databases for active repositories","authors":"H. Jasper","doi":"10.1109/ICDE.1994.283054","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283054","url":null,"abstract":"The various activities necessary for constructing a software product are described by software process models. Many of the actions mentioned there are supported by tools that use a repository in order to create, manipulate, generate, etc. the deliverables. The process is tailored for each project to necessary work and planned with respect to existing resources. This results in a schedule for each project that is manually compared with ongoing work. We introduce the idea of active repositories that partially automate scheduling and controlling of the activities described within a process model. The notion of active repositories is based on active database technology that allows for detecting events and triggering the corresponding actions. Events are state changes in the repository or raised by external components, e.g. a clock or CASE tool. Actions manipulate the repository, trigger CASE tools, signal external systems or notify the user.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114766626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting high-bandwidth navigation in object-bases","authors":"V. Vasudevan","doi":"10.1109/ICDE.1994.283044","DOIUrl":"https://doi.org/10.1109/ICDE.1994.283044","url":null,"abstract":"Magritte is an attempt to construct a high-bandwidth front-end to an object-base containing meta-data about SCAD designs. SCAD is a small part of a family of visualization applications where the end-user concurrently manipulates large collections of active data. Such end-user interfaces require a different paradigm of interaction than the object-at-a-time interfaces of current databases. Proposals here can be divided into mechanisms for scene creation and those for scene integration. The former allow a user to create a single scene with ease. The latter help in desktop management by allowing scenes to be combined and correlated. The implementation experience points out a number of shortcomings in current database offerings that need to be solved so as to ease the design of high-bandwidth front-ends.<<ETX>>","PeriodicalId":142465,"journal":{"name":"Proceedings of 1994 IEEE 10th International Conference on Data Engineering","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125725293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}