{"title":"非优先信念修正的语义方法","authors":"Elise Perrotin, F. R. Velázquez-Quesada","doi":"10.1093/jigpal/jzz045","DOIUrl":null,"url":null,"abstract":"\n Belief revision is concerned with belief change fired by incoming information. Despite the variety of frameworks representing it, most revision policies share one crucial feature: incoming information outweighs current information and hence, in case of conflict, incoming information will prevail. However, if one is interested in representing the way actual humans revise their beliefs, one might not always want for the agent to blindly believe everything they are told. This manuscript presents a semantic approach to non-prioritized belief revision. It uses plausibility models for depicting an agent’s beliefs, and model operations for displaying the way beliefs change. The first proposal, semantically-based screened revision, compares the current model with the one the revision would yield, accepting or rejecting the incoming information depending on whether the ‘differences’ between these models go beyond a given threshold. The second proposal, semantically-based gradual revision, turns the binary decision of acceptance or rejection into a more general setting in which a revision always occurs, with the threshold used rather to choose ‘the right revision’ for the given input and model.","PeriodicalId":304915,"journal":{"name":"Log. J. IGPL","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Semantic Approach to Non-prioritized Belief Revision\",\"authors\":\"Elise Perrotin, F. R. Velázquez-Quesada\",\"doi\":\"10.1093/jigpal/jzz045\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Belief revision is concerned with belief change fired by incoming information. Despite the variety of frameworks representing it, most revision policies share one crucial feature: incoming information outweighs current information and hence, in case of conflict, incoming information will prevail. However, if one is interested in representing the way actual humans revise their beliefs, one might not always want for the agent to blindly believe everything they are told. This manuscript presents a semantic approach to non-prioritized belief revision. It uses plausibility models for depicting an agent’s beliefs, and model operations for displaying the way beliefs change. The first proposal, semantically-based screened revision, compares the current model with the one the revision would yield, accepting or rejecting the incoming information depending on whether the ‘differences’ between these models go beyond a given threshold. The second proposal, semantically-based gradual revision, turns the binary decision of acceptance or rejection into a more general setting in which a revision always occurs, with the threshold used rather to choose ‘the right revision’ for the given input and model.\",\"PeriodicalId\":304915,\"journal\":{\"name\":\"Log. J. IGPL\",\"volume\":\"84 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Log. J. 
IGPL\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/jigpal/jzz045\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Log. J. IGPL","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jigpal/jzz045","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Semantic Approach to Non-prioritized Belief Revision
Belief revision is concerned with belief change triggered by incoming information. Despite the variety of frameworks for representing it, most revision policies share one crucial feature: incoming information outweighs current information, and hence, in case of conflict, incoming information prevails. However, if one is interested in representing the way actual humans revise their beliefs, one might not always want the agent to blindly believe everything it is told. This manuscript presents a semantic approach to non-prioritized belief revision. It uses plausibility models to depict an agent's beliefs, and model operations to capture the way those beliefs change. The first proposal, semantically-based screened revision, compares the current model with the one the revision would yield, accepting or rejecting the incoming information depending on whether the 'differences' between these models go beyond a given threshold. The second proposal, semantically-based gradual revision, turns the binary decision of acceptance or rejection into a more general setting in which a revision always occurs, with the threshold instead used to choose 'the right revision' for the given input and model.
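To make the two proposals concrete, here is a minimal sketch under stated assumptions: finite plausibility models are encoded as dictionaries from worlds to ranks (lower rank = more plausible), the underlying revision operations are the standard lexicographic and conservative upgrades used as stand-ins, and the 'difference' between models is measured by counting world pairs whose strict plausibility order flips. These encodings, the flip-count measure, and all function names are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of screened and gradual revision over finite plausibility models.
# Models: dict world -> rank, lower rank = more plausible (assumption).
from itertools import combinations


def lexicographic_revision(model, phi_worlds):
    """Radical revision: every phi-world becomes strictly more plausible
    than every non-phi-world, preserving the order inside each group."""
    revised, rank = {}, 0
    for group in (phi_worlds & model.keys(), model.keys() - phi_worlds):
        last = None
        for w in sorted(group, key=model.get):
            if last is not None and model[w] != model[last]:
                rank += 1
            revised[w] = rank
            last = w
        if group:
            rank += 1
    return revised


def conservative_revision(model, phi_worlds):
    """Mild revision: only the most plausible phi-worlds move to the top;
    the relative order of all remaining worlds is kept."""
    best = min(model[w] for w in phi_worlds)
    top = {w for w in phi_worlds if model[w] == best}
    revised = {w: 0 for w in top}
    rank, last = 1, None
    for w in sorted((w for w in model if w not in top), key=model.get):
        if last is not None and model[w] != model[last]:
            rank += 1
        revised[w] = rank
        last = w
    return revised


def order_difference(m1, m2):
    """Count unordered world pairs whose strict plausibility order flips
    (an assumed 'difference' measure, not the paper's definition)."""
    def cmp(m, w, v):
        return (m[w] < m[v]) - (m[w] > m[v])
    return sum(cmp(m1, w, v) != cmp(m2, w, v) for w, v in combinations(m1, 2))


def screened_revision(model, phi_worlds, threshold):
    """Accept the incoming information only if revising would not change
    the model beyond the threshold; otherwise leave the model untouched."""
    candidate = lexicographic_revision(model, phi_worlds)
    return candidate if order_difference(model, candidate) <= threshold else model


def gradual_revision(model, phi_worlds, threshold, operations):
    """Try candidate operations from strongest to mildest and return the
    first whose change stays within the threshold; some revision always
    happens, falling back here to the mildest candidate."""
    for op in operations:
        candidate = op(model, phi_worlds)
        if order_difference(model, candidate) <= threshold:
            return candidate
    return candidate


# Three worlds; w1 is currently the most plausible.
model = {"w1": 0, "w2": 1, "w3": 2}

# Screened revision: accepting {w3} reverses two world pairs.
print(screened_revision(model, {"w3"}, threshold=2))  # accepted
print(screened_revision(model, {"w3"}, threshold=1))  # rejected: unchanged

# Gradual revision: the radical operation exceeds the threshold,
# so the milder one is chosen instead.
ops = [lexicographic_revision, conservative_revision]
print(gradual_revision(model, {"w2", "w3"}, threshold=1, operations=ops))
```

Note how the same threshold plays two different roles in this sketch: in screened revision it acts as a gate deciding whether any change happens at all, while in gradual revision it acts as a selector picking which of several revisions to perform, mirroring the contrast the abstract draws between the two proposals.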