Argument evaluation in multi-agent justification logics

Log. J. IGPL · Pub Date: 2019-12-23 · DOI: 10.1093/jigpal/jzz046
A. Burrieza, Antonio Yuste-Ginel
{"title":"多智能体论证逻辑中的论证评估","authors":"A. Burrieza, Antonio Yuste-Ginel","doi":"10.1093/jigpal/jzz046","DOIUrl":null,"url":null,"abstract":"\n Argument evaluation , one of the central problems in argumentation theory, consists in studying what makes an argument a good one. This paper proposes a formal approach to argument evaluation from the perspective of justification logic. We adopt a multi-agent setting, accepting the intuitive idea that arguments are always evaluated by someone. Two general restrictions are imposed on our analysis: non-deductive arguments are left out and the goal of argument evaluation is fixed: supporting a given proposition. Methodologically, our approach uses several existing tools borrowed from justification logic, awareness logic, doxastic logic and logics for belief dependence. We start by introducing a basic logic for argument evaluation, where a list of argumentative and doxastic notions can be expressed. Later on, we discuss how to capture the mentioned form of argument evaluation by defining a preference operator in the object language. The intuitive picture behind this definition is that, when assessing a couple of arguments, the agent puts them to a test consisting of several criteria (filters). As a result of this process, a preference relation among the evaluated arguments is established by the agent. After showing that this operator suffers a special form of logical omniscience, called preferential omniscience, we discuss how to define an explicit version of it, more suitable to deal with non-ideal agents. The present work exploits the formal notion of awareness in order to model several informal phenomena: awareness of sentences, availability of arguments and communication between agents and external sources (advisers). We discuss several extensions of the basic logic and offer completeness and decidability results for all of them.","PeriodicalId":304915,"journal":{"name":"Log. J. IGPL","volume":"54 14","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Argument evaluation in multi-agent justification logics\",\"authors\":\"A. Burrieza, Antonio Yuste-Ginel\",\"doi\":\"10.1093/jigpal/jzz046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Argument evaluation , one of the central problems in argumentation theory, consists in studying what makes an argument a good one. This paper proposes a formal approach to argument evaluation from the perspective of justification logic. We adopt a multi-agent setting, accepting the intuitive idea that arguments are always evaluated by someone. Two general restrictions are imposed on our analysis: non-deductive arguments are left out and the goal of argument evaluation is fixed: supporting a given proposition. Methodologically, our approach uses several existing tools borrowed from justification logic, awareness logic, doxastic logic and logics for belief dependence. We start by introducing a basic logic for argument evaluation, where a list of argumentative and doxastic notions can be expressed. Later on, we discuss how to capture the mentioned form of argument evaluation by defining a preference operator in the object language. The intuitive picture behind this definition is that, when assessing a couple of arguments, the agent puts them to a test consisting of several criteria (filters). 
As a result of this process, a preference relation among the evaluated arguments is established by the agent. After showing that this operator suffers a special form of logical omniscience, called preferential omniscience, we discuss how to define an explicit version of it, more suitable to deal with non-ideal agents. The present work exploits the formal notion of awareness in order to model several informal phenomena: awareness of sentences, availability of arguments and communication between agents and external sources (advisers). We discuss several extensions of the basic logic and offer completeness and decidability results for all of them.\",\"PeriodicalId\":304915,\"journal\":{\"name\":\"Log. J. IGPL\",\"volume\":\"54 14\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Log. J. IGPL\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/jigpal/jzz046\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Log. J. IGPL","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jigpal/jzz046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Argument evaluation, one of the central problems in argumentation theory, consists in studying what makes an argument a good one. This paper proposes a formal approach to argument evaluation from the perspective of justification logic. We adopt a multi-agent setting, accepting the intuitive idea that arguments are always evaluated by someone. Two general restrictions are imposed on our analysis: non-deductive arguments are left out and the goal of argument evaluation is fixed: supporting a given proposition. Methodologically, our approach uses several existing tools borrowed from justification logic, awareness logic, doxastic logic and logics for belief dependence. We start by introducing a basic logic for argument evaluation, where a list of argumentative and doxastic notions can be expressed. Later on, we discuss how to capture the mentioned form of argument evaluation by defining a preference operator in the object language. The intuitive picture behind this definition is that, when assessing a couple of arguments, the agent puts them to a test consisting of several criteria (filters). As a result of this process, a preference relation among the evaluated arguments is established by the agent. After showing that this operator suffers a special form of logical omniscience, called preferential omniscience, we discuss how to define an explicit version of it, more suitable to deal with non-ideal agents. The present work exploits the formal notion of awareness in order to model several informal phenomena: awareness of sentences, availability of arguments and communication between agents and external sources (advisers). We discuss several extensions of the basic logic and offer completeness and decidability results for all of them.
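For readers unfamiliar with justification logic, the following is standard background rather than notation taken from this paper: justification logics replace the implicit modality of modal logic with explicit terms, writing t : φ for "t is a justification for φ"; under an argumentative reading such as the one adopted here, terms can be thought of as arguments put forward in support of a proposition. The display below sketches two classical principles of Artemov-style justification logic; the paper's own multi-agent language and axioms may differ.

```latex
% Standard justification-logic principles (background only; the paper's
% multi-agent system may adopt different or additional axioms).
% Reading: t : \varphi means "term (argument) t justifies \varphi".
\[
  \underbrace{s : (\varphi \to \psi) \;\to\; \bigl(t : \varphi \to (s \cdot t) : \psi\bigr)}_{\text{application}}
  \qquad
  \underbrace{s : \varphi \;\to\; (s + t) : \varphi}_{\text{sum (monotonicity)}}
\]
```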
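To make the intuitive picture from the abstract concrete, here is a small, purely illustrative Python sketch of filter-based argument comparison: an agent runs two arguments for the same proposition through a list of criteria ("filters") and prefers the one that passes more of them. This is not the paper's formal definition, which is given as a preference operator in a modal object language; all names here (Argument, Agent, premises_believed, fully_aware, prefers) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only; the paper defines a preference operator in the
# object language of a justification logic, not as executable code.
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Argument:
    premises: List[str]      # sentences the argument relies on
    conclusion: str          # the proposition the argument is meant to support


@dataclass
class Agent:
    beliefs: Set[str]        # sentences the agent believes
    awareness: Set[str]      # sentences the agent is aware of

    def score(self, arg: Argument,
              criteria: List[Callable[["Agent", Argument], bool]]) -> int:
        """Count how many evaluation criteria (filters) the argument passes."""
        return sum(1 for c in criteria if c(self, arg))


# Two example filters: the agent believes every premise, and the agent is
# aware of every sentence occurring in the argument.
def premises_believed(agent: Agent, arg: Argument) -> bool:
    return all(p in agent.beliefs for p in arg.premises)


def fully_aware(agent: Agent, arg: Argument) -> bool:
    return all(s in agent.awareness for s in arg.premises + [arg.conclusion])


def prefers(agent: Agent, a: Argument, b: Argument,
            criteria: List[Callable[["Agent", Argument], bool]]) -> bool:
    """The agent strictly prefers a over b as support for their shared
    conclusion when a passes more filters than b."""
    return (a.conclusion == b.conclusion
            and agent.score(a, criteria) > agent.score(b, criteria))


if __name__ == "__main__":
    agent = Agent(beliefs={"p", "p -> q"}, awareness={"p", "p -> q", "q"})
    a = Argument(premises=["p", "p -> q"], conclusion="q")
    b = Argument(premises=["r", "r -> q"], conclusion="q")
    print(prefers(agent, a, b, [premises_believed, fully_aware]))  # True
```

In the paper itself the comparison is internalized in the object language via a preference operator, and the explicit variant mentioned in the abstract is introduced precisely so that non-ideal agents are not forced into preferential omniscience.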