{"title":"A visual debugger for pure Prolog","authors":"Dan E. Tamir, Ravi Ananthakrishnan, Abraham Kandel","doi":"10.1016/1069-0115(94)00048-7","DOIUrl":"10.1016/1069-0115(94)00048-7","url":null,"abstract":"<div><p>This work involves the design and coding of an interpreter for pure Prolog and building a visual debugger for it. Most of the available Prolog interpreters contain some tracing facilities. They do not incorporate, however, a comprehensive visual debugger. The interpreter performs the operations of parsing, unification, resolution, and search in a state-space representation of the Prolog program. The visual debugger incorporates the graphical visualization and the manipulation of the SLD resolution tree. The user visualizes the execution of a pure Prolog program and interacts with the program inside a windowing environment. The program execution may be viewed without interruption or the execution can be stopped at any moment in time. At this point the “snapshot” can be scrutinized with the help of break-points and data displays. This software aims itself to those who wish to observe the actual process of predicate unification, substitution, resolution and goal matching in a Prolog program and to visually interact with the interpreter using a highly friendly and pleasing user interface. An advanced feature, referred to as debugging on the tree, provides the user with the ability to insert break-points directly on the SLD tree, to choose the path of execution, and change the search mode. The resultant search algorithm can be a mixture of depth-first and breadth-first search, avoiding infinite search paths.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"3 2","pages":"Pages 127-147"},"PeriodicalIF":0.0,"publicationDate":"1995-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(94)00048-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86630752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Highly available and reliable communication protocols for heterogeneous systems","authors":"Omran Bukhres, Eva Kühn","doi":"10.1016/1069-0115(95)82002-7","DOIUrl":"https://doi.org/10.1016/1069-0115(95)82002-7","url":null,"abstract":"<div><p>Existing and legacy software systems are the product of lengthy and individual development histories. Interoperability among such systems offers the support of global applications on these systems. However, interoperability among these heterogeneous systems is hampered by the absence of a reliable communication environment that supports the development of global applications.</p><p>In this paper, we show how a generic communication framework can serve as a testbed for the specification, verification, and execution of distributed communication protocols. The development of distributed, global concurrency protocols is much simpler than using traditional tools, like RPC (remote procedure call), because our framework provides a high-level communication mechanism that frees the protocol designer from thinking in a message-based style. We present several protocols that are consistent with realistic assumptions about local database systems, and proofs of their correctness and consistency preservation. We also show that the execution of these protocols is fault-tolerant. The distribution of systems can be chosen according to application requirements, without adaptation of protocols. Fault tolerance can be fine-tuned within the framework itself, so that verified protocols do not need modifications in this case either.</p><p>Besides protocols for traditional transaction processing, we present communication protocols for advanced transaction models that relax one or more of the ACID properties of transactions. These advanced transaction models enable local autonomy and thus are much better suited for heterogeneous environments.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"3 1","pages":"Pages 1-40"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(95)82002-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92062959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fuzzy expert system approach for real-time monitoring of endemic diseases","authors":"Apurba Banerjee, A. Majumder, A. Basu","doi":"10.1016/0020-0255(94)00009-Z","DOIUrl":"https://doi.org/10.1016/0020-0255(94)00009-Z","url":null,"abstract":"","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"34 1","pages":"41-53"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88051161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fuzzy expert system approach for real-time monitoring of endemic diseases","authors":"Apurba Banerjee, Arun Kumar Majumder, Anupam Basu","doi":"10.1016/1069-0115(95)82003-5","DOIUrl":"https://doi.org/10.1016/1069-0115(95)82003-5","url":null,"abstract":"<div><p>In this paper, an architecture of LEPDIAG—a knowledge based system for on-line diagnosis and for monitoring prognosis of leprosy—is presented. The important features of LEPDIAG that have been detailed are a multiple expert environment, a homeostatic expert containing the model of immune reaction, a performance evaluator that can compare the observed signs and symptoms with those predicted by homeostatic expert, and a prognostic expert which optimizes the management schedule for the patients. The entire system is built around a fuzzy expert system building tool FRUIT to deal with the imprecise knowledge.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"3 1","pages":"Pages 41-53"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(95)82003-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92121534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rate distortion theory and the volatility of asset prices","authors":"W. D. O'Neill","doi":"10.1016/0020-0255(94)00040-I","DOIUrl":"https://doi.org/10.1016/0020-0255(94)00040-I","url":null,"abstract":"","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"33 1","pages":"55-74"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76227038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Highly available and reliable communication protocols for heterogeneous systems","authors":"O. Bukhres, E. Kühn","doi":"10.1016/0020-0255(94)00005-V","DOIUrl":"https://doi.org/10.1016/0020-0255(94)00005-V","url":null,"abstract":"","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"5 1","pages":"1-40"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78746973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rate distortion theory and the volatility of asset prices","authors":"William D. O'Neill","doi":"10.1016/1069-0115(95)82004-3","DOIUrl":"https://doi.org/10.1016/1069-0115(95)82004-3","url":null,"abstract":"<div><p>A market with many traders clears at a single price. The intuition that such a market can be modeled as a multiple access channel with many source inputs and a single output is confirmed and supported by single stock trading data. If all stocks in an exchange with a price index are so traded, then the Capital Asset Pricing Model (CAPM) of the index should find the index a prediction of future returns to holding a portfolio of the indexed stocks. This inference is shown to be true by applying Shannon's rate distortion theorem to the CAPM, and thereby ending the decade-old controversy that the CAPM cannot explain stock and bond price volatility. Data from the NYSE are found to be compatible with the CAPM rate distortion function. In contrast to communication system channels, market channels cannot operate at channel capacity, but rather at a mutual information rate demanded by trader profit taking. This rate compared to the trader information source rate distortion function determines the minimum price distortion that can be expected of markets.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"3 1","pages":"Pages 55-74"},"PeriodicalIF":0.0,"publicationDate":"1995-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(95)82004-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91983747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fuzzy model for unsupervised character classification","authors":"Shy-Shyan Chen, Frank Y. Shih, Peter A. Ng","doi":"10.1016/1069-0115(94)90034-5","DOIUrl":"10.1016/1069-0115(94)90034-5","url":null,"abstract":"<div><p>This paper presents a fuzzy logic approach to efficiently perform unsupervised character classification for improvement in robustness, correctness, and speed of a character recognition system. The characters are first split into seven typographical categories. The classification scheme uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. The fuzzy unsupervised character classification, which is natural in the representation of prototypes for character matching, is developed and a weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used in speeding up the classification process. After classification, the character recognition which is simply applied on a smaller set of the fuzzy prototypes, becomes much easier and less time-consuming.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"2 3","pages":"Pages 143-165"},"PeriodicalIF":0.0,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(94)90034-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86278371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An extended fuzzy linguistic approach to generalize boolean information retrieval","authors":"Donald H. Kraft, Gloria Bordogna, Gabriella Pasi","doi":"10.1016/1069-0115(94)90032-9","DOIUrl":"10.1016/1069-0115(94)90032-9","url":null,"abstract":"<div><p>The generalization of Boolean information retrieval systems is still of interest to scholars. In spite of the fact that commercial systems use Boolean retrieval mechanisms, such systems still have some limitations. One of the main problems is that such systems lack the ability to deal well with imprecision and subjectivity. Previous efforts have led to the introduction of numeric weights to improve both document representations (term weights) and query languages (query weights). However, the use of weights requires a clear knowledge of the semantics of the query in order to translate a fuzzy concept into a precise numeric value. Moreover, it is difficult to model the matching of queries to documents in a way that will preserve the semantics of user queries.</p><p>A linguistic extension has been generated, starting from an existing Boolean weighted retrieval model and formalized within fuzzy set theory, in which numeric query weights are replaced by linguistic descriptors that specify the degree of importance of the terms.</p><p>In the past, query weights were seen as measures of the importance of a specific term in representing the query or as a threshold to aid in matching a specific document to the query. The linguistic extension was originally modeled to view the query weights as a description of the ideal document, so that deviations would be rejected whether a given document had term weights that were too high or too low. This paper looks at an extension to the linguistic model that is not symmetric in that documents with a term weight below the query weight are treated differently than documents with a term weight above the query weight.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"2 3","pages":"Pages 119-134"},"PeriodicalIF":0.0,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(94)90032-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89449595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An optimization of the fuzzy control algorithm","authors":"Eric Bolinger","doi":"10.1016/1069-0115(94)90033-7","DOIUrl":"10.1016/1069-0115(94)90033-7","url":null,"abstract":"<div><p>Much effort has been put into the optimization of fuzzy controllers. This is usually done by implementing the fuzzy inference engine in hardware. This paper presents an optimized fuzzy controller using a software-only approach. Increased speed is achieved by placing constraints on certain parts of the design. The effect of these constraints are discussed. The result is analogous to the fast Fourier transform (FFT). Although the FFT is a restricted form of the general transform, it is optimized for speed over generality. The same relationship holds true for the optimized fuzzy controller. The purpose of this work is to optimize the fuzzy control algorithm to achieve hardware-like performance in software. This can be done by keeping only those features that are necessary and sufficient to achieve the same results. Everything else can be eliminated.</p></div>","PeriodicalId":100668,"journal":{"name":"Information Sciences - Applications","volume":"2 3","pages":"Pages 135-142"},"PeriodicalIF":0.0,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/1069-0115(94)90033-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87174016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}