Lifted action models learning from partial traces

Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso Emilio Gerevini, Paolo Traverso

Artificial Intelligence, Volume 339, Article 104256 (published 2024-11-15)
DOI: 10.1016/j.artint.2024.104256
URL: https://www.sciencedirect.com/science/article/pii/S0004370224001929

Abstract:
Applying symbolic planning requires the specification of a symbolic action model, which is usually written by hand by a domain expert. Such an encoding may be faulty, due either to human error or to a lack of domain knowledge. Learning the symbolic action model automatically has therefore been widely adopted as an alternative to manual specification. In this paper, we focus on the problem of learning action models offline, from an input set of partially observable plan traces. In particular, we propose an approach that: (i) augments the observability of a given plan trace by applying predefined logical rules; (ii) learns the preconditions and effects of each action in a plan trace from the partial observations before and after the action's execution. We formally prove that our approach learns action models with fundamental theoretical properties not provided by other methods. We experimentally show that our approach outperforms a state-of-the-art method on a large set of existing benchmark domains. Furthermore, we compare the effectiveness of the learned action models for solving planning problems, and show that the action models learned by our approach are considerably more effective than those learned by a state-of-the-art method.
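The two steps sketched in the abstract can be illustrated with a deliberately simplified, grounded toy version. Note the caveats: the paper's actual method is lifted (it learns parameterized schemas) and comes with formal guarantees, whereas the sketch below works on grounded facts only; the fact names, the tuple encoding of logical rules, and the helper functions are all hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch, NOT the paper's algorithm: (i) augment partial
# observations with predefined logical rules, then (ii) induce grounded
# preconditions and effects from the observed pre/post states of each action.

def augment(obs, rules):
    """Step (i): propagate rules to a fixpoint over a partial observation.
    obs maps fact -> bool; facts absent from the dict are unobserved.
    Each rule (cond_fact, cond_val, concl_fact, concl_val) means:
    if cond_fact is observed with value cond_val, infer concl_fact = concl_val."""
    obs = dict(obs)
    changed = True
    while changed:
        changed = False
        for cf, cv, kf, kv in rules:
            if obs.get(cf) == cv and kf not in obs:
                obs[kf] = kv
                changed = True
    return obs

def learn_action_models(traces, rules):
    """Step (ii): traces is a list of (obs_before, action, obs_after).
    Candidate preconditions are the facts observed true before every
    execution; add/delete effects are facts whose observed value flips."""
    models = {}
    for obs_before, action, obs_after in traces:
        before = augment(obs_before, rules)
        after = augment(obs_after, rules)
        m = models.setdefault(action, {"pre": None, "add": set(), "del": set()})
        true_before = {f for f, v in before.items() if v}
        m["pre"] = true_before if m["pre"] is None else m["pre"] & true_before
        for f, v in after.items():
            if v and before.get(f) is False:
                m["add"].add(f)       # became true -> add effect
            elif not v and before.get(f) is True:
                m["del"].add(f)       # became false -> delete effect
    return models

# Toy domain: mutual-exclusion rules let one observed location imply the other.
rules = [("at_a", True, "at_b", False), ("at_b", True, "at_a", False)]
traces = [({"at_a": True}, "move_a_b", {"at_b": True})]
models = learn_action_models(traces, rules)
# With rule augmentation, the single partial trace yields
# pre = {at_a}, add = {at_b}, del = {at_a} for move_a_b.
```

The rules do the heavy lifting here: without them, neither the add effect `at_b` nor the delete effect `at_a` would be derivable from the single partially observed trace, since the flipped value on the other side of the transition is never directly observed.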
Journal introduction:
The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.