{"title":"Worst-case groundness analysis using positive Boolean functions","authors":"Michael Codish","doi":"10.1016/S0743-1066(99)00014-X","DOIUrl":"10.1016/S0743-1066(99)00014-X","url":null,"abstract":"<div><p>This note illustrates a theoretical worst-case scenario for groundness analyses obtained through abstract interpretation over the abstract domain of positive Boolean functions. A sequence of programs is given for which any <em>Pos</em>-based abstract interpretation for groundness analysis follows an exponential chain. Another sequence of programs is given for which a state-of-the-art implementation based on ROBDDs gives a result of exponential size in only three iterations. The moral of the story is that a serious <em>Pos</em> analyser must incorporate some form of widening to protect itself from the inherent complexity of the underlying domain.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"41 1","pages":"Pages 125-128"},"PeriodicalIF":0.0,"publicationDate":"1999-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00014-X","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116789066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalization by absorption of definite clauses","authors":"Kerry Taylor","doi":"10.1016/S0743-1066(99)00016-3","DOIUrl":"10.1016/S0743-1066(99)00016-3","url":null,"abstract":"<div><p>Absorption is one of the so-called <em>inverse resolution</em> operators of Inductive Logic Programming. The paper studies the properties of absorption that make it suitable for incremental generalization of definite clauses using background knowledge represented by a definite program. The soundness and completeness of the operator are established according to Buntine's model of generalization called generalized subsumption. The completeness argument proceeds by viewing absorption as the inversion of SLD-resolution. In addition, some simplifying techniques are introduced for reducing the non-determinism inherent in usual presentations of absorption. The effect of these simplifications on completeness is discussed.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 127-157"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00016-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131548128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Induction of logic programs by example-guided unfolding","authors":"Henrik Boström, Peter Idestam-Almquist","doi":"10.1016/S0743-1066(99)00017-5","DOIUrl":"10.1016/S0743-1066(99)00017-5","url":null,"abstract":"<div><p>Resolution has been used as a specialisation operator in several approaches to top-down induction of logic programs. This operator allows the overly general hypothesis to be used as a declarative bias that restricts not only what predicate symbols can be used in produced hypotheses, but also how the predicates can be invoked. The two main strategies for top-down induction of logic programs, Covering and Divide-and-Conquer, are formalised using resolution as a specialisation operator, resulting in two strategies for performing example-guided unfolding. These strategies are compared both theoretically and experimentally. It is shown that the computational cost grows quadratically in the size of the example set for Covering, while it grows linearly for Divide-and-Conquer. This is also demonstrated by experiments, in which the amount of work performed by Covering is up to 30 times the amount of work performed by Divide-and-Conquer. The theoretical analysis shows that the hypothesis space is larger for Covering, and thus more compact hypotheses may be found by this technique than by Divide-and-Conquer. However, it is shown that for each non-recursive hypothesis that can be produced by Covering, there is an equivalent hypothesis (w.r.t. the background predicates) that can be produced by Divide-and-Conquer. A major draw-back of Divide-and-Conquer, in contrast to Covering, is that it is not applicable to learning recursive definitions.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 159-183"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00017-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122626458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing general chain programs","authors":"Anke D. Rieger","doi":"10.1016/S0743-1066(99)00020-5","DOIUrl":"10.1016/S0743-1066(99)00020-5","url":null,"abstract":"<div><p>The goal of knowledge compilation is to transform programs in order to speed up their evaluation. In Inductive Logic Programming, two major approaches to speed-up learning exist: Approaches that intertwine the learning and the optimization process and approaches that separate these two processes. We follow the latter approach and present a new equivalence-preserving transformation method for programs with ordered clauses. It eliminates redundancies that make forward inference procedures slow. We introduce general chain rules, a specific class of ordered clauses, whose syntactical features are exploited in a new forward inference method. The comparison of the time needed by this method to evaluate the transformed program with the time needed by a standard forward inference procedure for the original program confirms the expected speed-up.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 251-271"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00020-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130092136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The complexity of revising logic programs","authors":"Russell Greiner","doi":"10.1016/S0743-1066(99)00021-7","DOIUrl":"10.1016/S0743-1066(99)00021-7","url":null,"abstract":"<div><p>A rule-based program will return a set of answers to each query. An <em>impure</em> program, which includes the Prolog <span>cut</span> “!” and “<span>not(</span>·<span>)</span>” operators, can return different answers if its rules are re-ordered. There are also many reasoning systems that return only the <em>first</em> answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule-base whose <em>expected accuracy</em>, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training “labeled queries” (each a query paired with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this distributional information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are seeking only a “perfect” theory, and the rule base is propositional. We also prove that this task is not even approximable: Unless P=NP, no polynomial time algorithm can produce an ordering of an <em>n</em>-rule theory whose accuracy is within <em>n</em><sup><em>γ</em></sup> of optimal, for some <em>γ</em>>0. We next prove similar hardness and non-approximatability, results for the related tasks of determining, in these impure contexts, (1) the optimal <em>ordering of the antecedents</em>; (2) the optimal set of <em>new rules to add</em> and (3) the optimal set of <em>existing rules to delete</em>.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 273-298"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00021-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134028314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numerical reasoning with an ILP system capable of lazy evaluation and customised search","authors":"Ashwin Srinivasan , Rui Camacho","doi":"10.1016/S0743-1066(99)00018-7","DOIUrl":"10.1016/S0743-1066(99)00018-7","url":null,"abstract":"<div><p>Using problem-specific background knowledge, computer programs developed within the framework of Inductive Logic Programming (ILP) have been used to construct restricted first-order logic solutions to scientific problems. However, their approach to the analysis of data with substantial numerical content has been largely limited to constructing clauses that: (a) provide qualitative descriptions (“high”, “low” etc.) of the values of response variables; and (b) contain simple inequalities restricting the ranges of predictor variables. This has precluded the application of such techniques to scientific and engineering problems requiring a more sophisticated approach. A number of specialised methods have been suggested to remedy this. In contrast, we have chosen to take advantage of the fact that the existing theoretical framework for ILP places very few restrictions of the nature of the background knowledge. We describe two issues of implementation that make it possible to use background predicates that implement well-established statistical and numerical analysis procedures. Any improvements in analytical sophistication that result are evaluated empirically using artificial and real-life data. Experiments utilising artificial data are concerned with extracting constraints for response variables in the text-book problem of balancing a pole on a cart. They illustrate the use of clausal definitions of arithmetic and trigonometric functions, inequalities, multiple linear regression, and numerical derivatives. A non-trivial problem concerning the prediction of mutagenic activity of nitroaromatic molecules is also examined. In this case, expert chemists have been unable to devise a model for explaining the data. The result demonstrates the combined use by an ILP program of logical and numerical capabilities to achieve an analysis that includes linear modelling, clustering and classification. In all experiments, the predictions obtained compare favourably against benchmarks set by more traditional methods of quantitative methods, namely, regression and neural-network.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 185-213"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00018-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132208900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study of relevance for learning in deductive databases","authors":"Nada Lavrač , Dragan Gamberger , Viktor Jovanoski","doi":"10.1016/S0743-1066(99)00019-9","DOIUrl":"10.1016/S0743-1066(99)00019-9","url":null,"abstract":"<div><p>This paper is a study of the problem of relevance in inductive concept learning. It gives definitions of irrelevant literals and irrelevant examples and presents efficient algorithms that enable their elimination. The proposed approach is directly applicable in propositional learning and in relation learning tasks that can be solved using a LINUS transformation approach. A simple inductive logic programming (ILP) problem is used to illustrate the approach to irrelevant literal and example elimination. Results of utility studies show the usefulness of literal reduction applied in LINUS and in the search of refinement graphs.</p></div>","PeriodicalId":101236,"journal":{"name":"The Journal of Logic Programming","volume":"40 2","pages":"Pages 215-249"},"PeriodicalIF":0.0,"publicationDate":"1999-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0743-1066(99)00019-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131455544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}