Athanor: Local search over abstract constraint specifications
Saad Attieh, Nguyen Dang, Christopher Jefferson, Ian Miguel, Peter Nightingale
Artificial Intelligence, vol. 340, Article 104277 (27 December 2024). DOI: 10.1016/j.artint.2024.104277

Abstract: Local search is a common method for solving combinatorial optimisation problems. We focus on general-purpose local search solvers that accept as input a constraint model: a declarative description of a problem consisting of a set of decision variables under a set of constraints. Existing approaches typically take as input models written in solver-independent constraint modelling languages such as MiniZinc. The Athanor solver described herein differs in that it begins from a specification of the problem in the abstract constraint specification language Essence, which allows problems to be described without commitment to low-level modelling decisions through its support for a rich set of abstract types. The advantage of proceeding from Essence is that the structure apparent in a concise, abstract specification of a problem can be exploited to generate high-quality neighbourhoods automatically, avoiding the difficult task of identifying that structure in an equivalent constraint model. Based on the twin benefits of neighbourhoods derived from high-level types and the scalability gained by searching directly over those types, our empirical results demonstrate strong performance in practice relative to existing solution methods.
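The neighbourhood-from-types idea can be made concrete with a toy sketch: the type of a set-valued decision variable alone suggests add, remove, and swap moves, and a hill climber can search directly over set values. This is a hypothetical Python illustration of the principle, not Athanor's actual move generators or the Essence language; the objective and helper names are invented.

```python
import random

def set_moves(current, universe):
    """Neighbourhoods read off from the *type* of the variable: a set-typed
    variable suggests add, remove, and swap moves, with no need to mine the
    constraints for structure."""
    outside = universe - current
    for x in outside:
        yield current | {x}                      # add an element
    for x in current:
        yield current - {x}                      # remove an element
    for x in current:
        for y in outside:
            yield (current - {x}) | {y}          # swap one in, one out

def hill_climb(objective, universe, start, steps=100, seed=0):
    """Greedy local search over set values using the type-derived moves."""
    rng = random.Random(seed)
    current = set(start)
    for _ in range(steps):
        neighbours = list(set_moves(current, universe))
        rng.shuffle(neighbours)                  # break ties arbitrarily
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            break                                # local optimum reached
        current = best
    return current

# Toy objective: a 3-element subset of {1..7} summing to exactly 10.
objective = lambda s: -abs(sum(s) - 10) - 5 * abs(len(s) - 3)
result = hill_climb(objective, set(range(1, 8)), {1})
```

The point of the sketch is that the move set falls out of the type `set of int`; a flattened constraint model over Booleans would have to rediscover this structure.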
A simple proof-theoretic characterization of stable models: Reduction to difference logic and experiments
Martin Gebser, Enrico Giunchiglia, Marco Maratea, Marco Mochi
Artificial Intelligence, vol. 340, Article 104276 (24 December 2024). DOI: 10.1016/j.artint.2024.104276

Abstract: Stable models of logic programs have been studied and characterized in relation to other formalisms by many researchers. As argued in previous papers, such characterizations are interesting for diverse reasons, including theoretical investigation and the possibility of leading to new algorithms for computing stable models of logic programs. At the theoretical level, complexity and expressiveness comparisons have brought about fundamental insights. Beyond that, practical implementations of the developed reductions enable the use of existing solvers for other logical formalisms to compute stable models. In this paper, we first provide a simple characterization of stable models that can be viewed as a proof-theoretic counterpart of the standard model-theoretic definition. We further show how it can be naturally encoded in difference logic; compared to the existing reductions to classical logics, this encoding does not require Boolean variables. We then implement our translation as a Satisfiability Modulo Theories (SMT) formula. Finally, we compare our approach, employing the SMT solver yices, to the translation-based ASP solver lp2diff and to clingo on domains from the "Basic Decision" track of the 2017 Answer Set Programming competition. The results show that our approach is competitive with, and often better than, lp2diff, and that it can also be faster than clingo on non-tight domains.
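The standard model-theoretic definition that the paper's characterization mirrors can be stated in a few lines: a set of atoms M is a stable model iff M is the least model of the Gelfond-Lifschitz reduct of the ground program with respect to M. Below is a minimal Python sketch of that check; it illustrates only the definition, not the paper's difference-logic encoding or how yices, lp2diff, or clingo actually operate.

```python
def reduct(program, model):
    """Gelfond-Lifschitz reduct: drop every rule whose negative body
    intersects the candidate model, then keep only the positive bodies."""
    return [(head, pos) for head, pos, neg in program if not (neg & model)]

def least_model(positive_program):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in positive_program:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """A candidate is a stable model iff it is the least model of its reduct."""
    return least_model(reduct(program, candidate)) == candidate

# Rules written as (head, positive_body, negative_body):
#   p :- not q.    q :- not p.    (two stable models: {p} and {q})
program = [("p", set(), {"q"}), ("q", set(), {"p"})]
```

Running `is_stable` on this program accepts exactly the sets {p} and {q}, matching the textbook semantics the proof-theoretic characterization is compared against.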
Defying catastrophic forgetting via influence function
Rui Gao, Weiwei Liu
Artificial Intelligence, vol. 339, Article 104261 (27 November 2024). DOI: 10.1016/j.artint.2024.104261

Abstract: Deep-learning models need to continually accumulate knowledge from tasks, given that the number of tasks is increasing rapidly as the digital world evolves. However, standard deep-learning models are prone to forgetting previously acquired skills when learning new ones. Fortunately, this catastrophic forgetting problem can be addressed by means of continual learning. One popular approach in this vein is the regularization-based method, which penalizes changes to parameters according to their importance. However, a formal definition of parameter importance and a theoretical analysis of regularization-based methods remain under-explored. In this paper, we first rigorously define parameter importance via the influence function, then unify the seminal methods (i.e., EWC, SI and MAS) into one framework. Two key theoretical results are presented in this work, and extensive experiments are conducted on standard benchmarks, which verify the superior performance of our proposed method.
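The regularization-based methods being unified share one computational shape: when training on a new task, add a quadratic penalty that anchors each parameter to its value after the previous task, weighted by a per-parameter importance estimate (the Fisher information in EWC; the paper's contribution is deriving importance from influence functions instead). A minimal sketch of that penalty follows; the function name and flat-list parameter representation are illustrative, not from the paper.

```python
def ewc_penalty(params, old_params, importance, lam=1.0):
    """Quadratic continual-learning penalty
        (lam / 2) * sum_i F_i * (p_i - p0_i)^2,
    where p0 are the parameters frozen after the previous task and F the
    per-parameter importance weights (Fisher information in EWC; SI and
    MAS differ only in how F is estimated)."""
    return 0.5 * lam * sum(
        f * (p - p0) ** 2
        for f, p, p0 in zip(importance, params, old_params)
    )

# Moving an "important" parameter is penalized; an unimportant one is free.
penalty = ewc_penalty(params=[1.0, 5.0], old_params=[0.0, 2.0],
                      importance=[4.0, 0.0], lam=2.0)
```

In training, this term is simply added to the new task's loss, so gradient descent trades off new-task fit against drift on parameters the importance estimate marks as critical.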
Integrating symbolic reasoning into neural generative models for design generation
Maxwell J. Jacobson, Yexiang Xue
Artificial Intelligence, vol. 339, Article 104257 (19 November 2024). DOI: 10.1016/j.artint.2024.104257

Abstract: Design generation requires tight integration of neural and symbolic reasoning, as good design must meet explicit user needs and honor implicit rules for aesthetics, utility, and convenience. Current automated design tools driven by neural networks produce appealing designs but cannot satisfy user specifications and utility requirements; symbolic reasoning tools, such as constraint programming, cannot perceive low-level visual information in images or capture subtle aspects such as aesthetics. We introduce the Spatial Reasoning Integrated Generator (SPRING) for design generation. SPRING embeds a neural-and-symbolic integrated spatial reasoning module inside the deep generative network. The spatial reasoning module samples the set of locations of objects to be generated from a backtrack-free distribution. This distribution modifies the implicit preference distribution, which is learned by a recurrent neural network to capture utility and aesthetics. Sampling from the backtrack-free distribution is accomplished by a symbolic reasoning approach, SampleSearch, which zeros out the probability of sampling spatial locations that violate explicit user specifications. Embedding symbolic reasoning into neural generation guarantees that the output of SPRING satisfies user requirements. Furthermore, SPRING offers interpretability, allowing users to visualize and diagnose the generation process through the bounding boxes. SPRING is also adept at managing novel user specifications not encountered during training, thanks to its proficiency in zero-shot constraint transfer. Quantitative evaluations and a human study reveal that SPRING outperforms baseline generative models, excelling in delivering high design quality and better meeting user specifications.
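The core mechanism of zeroing out constraint-violating locations before sampling can be sketched in a few lines. This toy masks a learned preference distribution over grid cells and renormalizes over what remains; SPRING's actual SampleSearch operates over a much richer constraint representation, so the names and data layout here are purely illustrative.

```python
import random

def sample_location(preference, forbidden, rng):
    """Sample a location from a learned preference distribution after
    zeroing out the probability of every cell that violates an explicit
    user constraint, then renormalizing over the remaining mass."""
    weights = {loc: (0.0 if loc in forbidden else w)
               for loc, w in preference.items()}
    total = sum(weights.values())
    if total == 0.0:
        raise ValueError("user constraints rule out every location")
    r = rng.random() * total
    acc = 0.0
    for loc, w in weights.items():
        if w == 0.0:
            continue                  # masked: can never be sampled
        acc += w
        if r <= acc:
            return loc
    return loc                        # guard against float round-off

# A toy preference over three grid cells; the user forbids (0, 0).
preference = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.2}
rng = random.Random(0)
samples = [sample_location(preference, {(0, 0)}, rng) for _ in range(100)]
```

Because violating locations carry exactly zero mass, satisfaction of the explicit constraints is guaranteed by construction rather than merely encouraged, which is the property the abstract highlights.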
Lifted action models learning from partial traces
Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso Emilio Gerevini, Paolo Traverso
Artificial Intelligence, vol. 339, Article 104256 (15 November 2024). DOI: 10.1016/j.artint.2024.104256

Abstract: Applying symbolic planning requires a specification of a symbolic action model, which is usually written manually by a domain expert. Such an encoding may, however, be faulty due to human error or lack of domain knowledge. Learning the symbolic action model automatically has therefore been widely adopted as an alternative to manual specification. In this paper, we focus on the problem of learning action models offline, from an input set of partially observable plan traces. In particular, we propose an approach to: (i) augment the observability of a given plan trace by applying predefined logical rules; (ii) learn the preconditions and effects of each action in a plan trace from partial observations before and after the action's execution. We formally prove that our approach learns action models with fundamental theoretical properties not provided by other methods. We experimentally show that our approach outperforms a state-of-the-art method on a large set of existing benchmark domains. Furthermore, we compare the effectiveness of the learned action models for solving planning problems and show that the action models learned by our approach are much more effective than those learned by a state-of-the-art method.
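Step (ii) has a classical propositional core that is easy to state: with fully observed before/after states, an action's preconditions can be approximated by intersecting the states in which it was applied, and its add/delete effects read off from the state differences. The sketch below shows only that core, with hypothetical predicate names; the paper's method learns lifted (parameterized) models and copes with partial observability, which this toy deliberately does not.

```python
def learn_action_model(observations):
    """Learn a propositional STRIPS-style model of one action from fully
    observed (state_before, state_after) pairs: preconditions are the
    intersection of the before-states; add/delete effects come from the
    state differences."""
    preconditions = None
    add_effects, delete_effects = set(), set()
    for before, after in observations:
        preconditions = (set(before) if preconditions is None
                         else preconditions & before)
        add_effects |= after - before      # became true
        delete_effects |= before - after   # became false
    return preconditions, add_effects, delete_effects

# Two observed executions of a hypothetical "pick-up" action.
observations = [
    ({"at_a", "handempty"}, {"at_a", "holding"}),
    ({"at_b", "handempty"}, {"at_b", "holding"}),
]
pre, add, delete = learn_action_model(observations)
```

With more observations the intersection shrinks toward the true preconditions; handling facts that were never observed at all is exactly where the paper's rule-based observability augmentation comes in.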
Human-AI coevolution
Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, Alessandro Vespignani
Artificial Intelligence, vol. 339, Article 104244 (13 November 2024). DOI: 10.1016/j.artint.2024.104244

Abstract: Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in the artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users' choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often "unintended" systemic outcomes. This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science, focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies, highlighting shortcomings and potential ways of capturing feedback-loop mechanisms; (ii) propose a reflection at the intersection of complexity science, AI and society; (iii) provide real-world examples of different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political.
Generative models for grid-based and image-based pathfinding
Daniil Kirilenko, Anton Andreychuk, Aleksandr I. Panov, Konstantin Yakovlev
Artificial Intelligence, vol. 338, Article 104238 (8 November 2024). DOI: 10.1016/j.artint.2024.104238

Abstract: Pathfinding is a challenging problem which generally asks to find a sequence of valid moves for an agent provided with a representation of the environment, i.e. a map, in which it operates. In this work, we consider pathfinding on binary grids and on image representations of digital elevation models. In the former case the transition costs are known, while in the latter they are not. A widespread method for the first problem is to utilize a search algorithm that systematically explores the search space to obtain a solution. Ideally, the search should be complemented with an informative heuristic that focuses it on the goal and prunes unpromising regions of the search space, thus decreasing the number of search iterations. Unfortunately, common heuristic functions for grid-based pathfinding, such as Manhattan distance or Chebyshev distance, do not take obstacles into account and perform inefficiently in obstacle-rich environments. As for pathfinding with image inputs, heuristic search cannot be applied straightforwardly, as the transition costs, i.e. the costs of moving from one pixel to another, are not known. To tackle both challenges, we suggest utilizing modern deep neural networks to infer instance-dependent heuristic functions in a pre-processing step, and to use them afterwards for pathfinding with standard heuristic search algorithms. The principal heuristic function we suggest learning is the path probability, which indicates how likely a grid cell (pixel) is to lie on the shortest path (for binary grids with known transition costs, we also suggest another variant of the heuristic function that can speed up the search). Learning is performed in a supervised fashion (we have also explored end-to-end learning that includes a planner in the learning pipeline). At test time, the path probability is used as the secondary heuristic for Focal Search, a heuristic search algorithm that provides theoretical guarantees on the cost bound of the resultant solution. Empirically, we show that the suggested approach significantly outperforms state-of-the-art competitors in a variety of tasks (including out-of-distribution instances).
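To make the role of the secondary heuristic concrete, here is a compact Focal Search over a 4-connected binary grid: open nodes whose f-value is within a factor w of the best open f form the FOCAL list, and a secondary criterion (a stand-in here for the learned path probability) decides which of them to expand. This is a simplified sketch under those assumptions, not the authors' implementation; the `secondary` callable and grid encoding are invented for illustration.

```python
import heapq

def focal_search(grid, start, goal, secondary, w=1.5):
    """Bounded-suboptimal Focal Search on a 4-connected grid given as a
    dict mapping (row, col) -> 0 (free) or 1 (blocked). Among open nodes
    with f <= w * f_min, expands the one `secondary` prefers (lower is
    better); the returned cost is at most w times optimal."""
    def h(cell):  # Manhattan distance: admissible and consistent here
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    g = {start: 0}
    open_list = [(h(start), start)]          # heap of (f, cell), f = g + h
    closed = set()
    while open_list:
        f_min = open_list[0][0]
        focal = [entry for entry in open_list if entry[0] <= w * f_min]
        f, cell = min(focal, key=lambda entry: secondary(entry[1]))
        open_list.remove((f, cell))
        if cell == goal:
            return g[cell]
        closed.add(cell)
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if grid.get(nb) == 0 and g[cell] + 1 < g.get(nb, float("inf")):
                g[nb] = g[cell] + 1
                open_list.append((g[nb] + h(nb), nb))
        # drop stale entries for closed cells and restore the heap order
        open_list = [e for e in open_list if e[1] not in closed]
        heapq.heapify(open_list)
    return None  # no path exists

grid = {(r, c): 0 for r in range(3) for c in range(3)}
grid[(1, 1)] = 1  # one obstacle in the centre
cost = focal_search(grid, (0, 0), (2, 2), secondary=lambda cell: 0)
```

The w-bound on FOCAL is what yields the cost guarantee the abstract mentions: an inadmissible learned heuristic only reorders expansions inside FOCAL, so it can speed the search up without breaking the bound.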
Separate but equal: Equality in belief propagation for single-cycle graphs
Erel Cohen, Ben Rachmut, Omer Lev, Roie Zivan
Artificial Intelligence, vol. 338, Article 104243 (8 November 2024). DOI: 10.1016/j.artint.2024.104243

Abstract: Belief propagation is a widely used, incomplete optimization algorithm whose main theoretical properties hold only under the assumption that beliefs are not equal. Nevertheless, there is substantial evidence to suggest that equality between beliefs does occur. A published method to overcome belief equality, based on the use of unary function-nodes, is commonly assumed to resolve the problem.

In this study, we focus on min-sum, the version of belief propagation used to solve constraint optimization problems. We prove that for the case of a single-cycle graph, belief equality can only be avoided when the algorithm converges to the optimal solution. Under any other circumstances, the unary function method will not prevent equality, indicating that some of the existing results presented in the literature are in need of reassessment. We differentiate between belief equality, which refers to equal beliefs in a single message, and assignment equality, which prevents a coherent assignment of values to the variables, and we provide conditions for both.
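For readers outside the DCOP community, the min-sum message update at the heart of this analysis looks as follows: the message a variable sends a neighbour scores each of the neighbour's values by the best achievable binary constraint cost plus the sender's other incoming messages, usually normalized so the minimum entry is zero. A toy sketch of one such update, with an invented cost table; the paper's unary function-node construction and cycle analysis are not reproduced here.

```python
def min_sum_message(cost, incoming, domain):
    """One min-sum message from variable s to neighbour t: score each value
    of t by the minimum over s's values of the binary cost plus the
    messages s received from its *other* neighbours (pre-summed in
    `incoming`), then normalize so the smallest entry is 0, a common way
    to keep values bounded on cyclic graphs."""
    message = {
        t_val: min(cost[s_val][t_val] + incoming.get(s_val, 0.0)
                   for s_val in domain)
        for t_val in domain
    }
    shift = min(message.values())
    return {val: m - shift for val, m in message.items()}

# A soft "not-equal" cost between two binary variables.
cost = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}}
message = min_sum_message(cost, incoming={0: 0.5, 1: 0.0}, domain=[0, 1])
```

Belief equality in the paper's sense occurs when the entries of such a message (or of the resulting beliefs) coincide, leaving the receiving variable with no basis to prefer one value over the other.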
Online learning in sequential Bayesian persuasion: Handling unknown priors
Martino Bernasconi, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti, Francesco Trovò
Artificial Intelligence, vol. 338, Article 104245 (6 November 2024). DOI: 10.1016/j.artint.2024.104245

Abstract: We study a repeated information design problem faced by an informed sender who tries to influence the behavior of a self-interested receiver through the provision of payoff-relevant information. We consider settings where the receiver repeatedly faces a sequential decision making (SDM) problem. At each round, the sender observes the realizations of random events in the SDM problem, which are only partially observable by the receiver. This begets the challenge of how to incrementally disclose such information to the receiver to persuade them to follow (desirable) action recommendations. We study the case in which the sender does not know the probabilities of the random events and thus has to gradually learn them while persuading the receiver. We start by providing a non-trivial polytopal approximation of the set of the sender's persuasive information-revelation structures; this is crucial for designing efficient learning algorithms. Next, we prove a negative result that also applies to the non-sequential case: no learning algorithm can be persuasive with high probability. We therefore relax the persuasiveness requirement, studying algorithms that guarantee that the receiver's regret in following recommendations grows sub-linearly. In the full-feedback setting, where the sender observes the realizations of all possible random events, we provide an algorithm with Õ(√T) regret for both the sender and the receiver. In the bandit-feedback setting, where the sender only observes the realizations of the random events actually occurring in the SDM problem, we design an algorithm that, given an α ∈ [1/2, 1] as input, guarantees Õ(T^α) and Õ(T^max{α, 1−α/2}) regret for the sender and the receiver, respectively. This result is complemented by a lower bound showing that this regret trade-off is tight for α ∈ [1/2, 2/3].
Open-world continual learning: Unifying novelty detection and continual learning
Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
Artificial Intelligence, vol. 338, Article 104237 (31 October 2024). DOI: 10.1016/j.artint.2024.104237

Abstract: As AI agents are increasingly used in the real open world with unknowns or novelties, they need the ability to (1) recognize objects they have learned before and detect items they have never seen or learned, and (2) learn the new items incrementally to become more and more knowledgeable and powerful. (1) is called novelty detection or out-of-distribution (OOD) detection and (2) is called class incremental learning (CIL), a setting of continual learning (CL). In existing research, OOD detection and CIL are regarded as two completely different problems. This paper first provides a theoretical proof that good OOD detection for each task within the set of learned tasks (called closed-world OOD detection) is necessary for successful CIL. We show this by decomposing CIL into two sub-problems, within-task prediction (WP) and task-id prediction (TP), and proving that TP is correlated with closed-world OOD detection. The key theoretical result is that, regardless of whether WP and OOD detection (or TP) are defined explicitly or implicitly by a CIL algorithm, good WP and good closed-world OOD detection are necessary and sufficient conditions for good CIL, which unifies novelty or OOD detection and continual learning (CIL in particular). We call this traditional CIL closed-world CIL, as it does not detect future OOD data in the open world. The paper then proves that the theory can be generalized or extended to open-world CIL, the proposed open-world continual learning, which can perform CIL in the open world and detect future or open-world OOD data. Based on the theoretical results, new CIL methods are also designed, which outperform strong baselines in CIL accuracy and in continual OOD detection by a large margin.
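The WP/TP decomposition has a simple probabilistic reading: the probability a CIL system assigns to a class is the within-task prediction for that class multiplied by the task-id prediction for the task that owns it. A toy numeric sketch of that product, with hypothetical task and class names, showing why good WP and good TP together yield good CIL:

```python
def cil_probabilities(wp, tp):
    """Compose class probabilities from the two sub-problems: for a class
    c owned by task k, P(c | x) = P(c | x, task k) * P(task k | x), i.e.
    within-task prediction times task-id prediction."""
    return {
        cls: wp[task][cls] * tp[task]
        for task in wp
        for cls in wp[task]
    }

# Hypothetical two-task example: each task owns two classes.
wp = {"t1": {"cat": 0.9, "dog": 0.1}, "t2": {"car": 0.3, "bus": 0.7}}
tp = {"t1": 0.8, "t2": 0.2}
probs = cil_probabilities(wp, tp)
```

In this reading, TP for a task amounts to deciding whether the input is in-distribution for that task's data, which is how the paper connects task-id prediction to closed-world OOD detection.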