Artificial Intelligence: Latest Publications

Abstract argumentation frameworks with strong and weak constraints
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-20 DOI: 10.1016/j.artint.2024.104205
Gianvincenzo Alfano, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Irina Trubitsyna

Dealing with controversial information is an important issue in several application contexts. Formal argumentation enables reasoning on arguments for and against a claim in order to decide on an outcome. Dung's abstract Argumentation Framework (AF) has emerged as a central formalism in argument-based reasoning; key aspects of its success and popularity are its simplicity and expressiveness. Integrity constraints help express domain knowledge in a compact and natural way, keeping the modeling task easy even for problems that would otherwise be hard to encode within an AF. In this paper, we first explore two intuitive semantics, based on Kleene and Łukasiewicz logics respectively, for AFs augmented with (strong) constraints; the resulting argumentation framework is called Constrained AF (CAF). Then, we propose a new argumentation framework, called Weak constrained AF (WAF), that enhances CAF with weak constraints. Intuitively, these constraints can be used to find "optimal" solutions to problems defined through CAF. We provide a detailed complexity analysis of CAF and WAF, showing that strong constraints do not increase the expressive power of AF in most cases, while weak constraints systematically increase the expressive power of CAF (and AF) under several well-known argumentation semantics.

Citations: 0
Bisimulation between base argumentation and premise-conclusion argumentation
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-20 DOI: 10.1016/j.artint.2024.104203
Jinsheng Chen, Beishui Liao, Leendert van der Torre

The structured argumentation system that represents arguments by premise-conclusion pairs is called premise-conclusion argumentation (PA), and the one that represents arguments by their premises is called base argumentation (BA). To assess whether BA and PA have the same ability in argument evaluation under extensional semantics, this paper defines the notion of extensional equivalence between BA and PA. It also defines the notion of bisimulation between BA and PA and shows that bisimulation implies extensional equivalence. To illustrate how base argumentation, bisimulation, and extensional equivalence can contribute to the study of PA, we prove some new results about PA by investigating the extensional properties of a base argumentation framework and exporting them to two premise-conclusion argumentation frameworks via bisimulation and extensional equivalence. We show that there are essentially three kinds of extensions in these frameworks and that the extensions in the two premise-conclusion argumentation frameworks are identical.

Citations: 0
On generalized notions of consistency and reinstatement and their preservation in formal argumentation
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-18 DOI: 10.1016/j.artint.2024.104202
Pietro Baroni, Federico Cerutti, Massimiliano Giacomin

We present a conceptualization providing an original domain-independent perspective on two crucial properties in reasoning: consistency and reinstatement. They emerge as a pair of dual characteristics, representing complementary requirements on the outcomes of reasoning processes. Central to our formalization are two underlying parametric relations: incompatibility and reinstatement violation. Different instances of these relations give rise to a spectrum of consistency and reinstatement scenarios. As a demonstration of the versatility and expressive power of our approach, we provide a characterization of various abstract argumentation semantics, expressed as combinations of distinct consistency and reinstatement constraints. Moreover, we investigate the preservation of these essential properties across different reasoning stages. Specifically, we delve into scenarios where a labelling is derived from other labellings through a synthesis function, using the synthesis of argument justification as an illustrative instance. We achieve a general characterization of consistency-preserving synthesis functions, while we unveil an impossibility result concerning reinstatement preservation, leading us to explore an alternative notion that ensures feasibility. Our exploration reveals a weakness in the traditional definition of argument justification, for which we propose a refined version overcoming this limitation.

Citations: 0
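The consistency and reinstatement constraints above generalize familiar conditions on argumentation semantics. As a hedged illustration of the underlying objects (these are the standard Dung-style definitions of conflict-freeness and admissibility, not the paper's generalized parametric relations), one can check them directly on a small framework:

```python
# Minimal Dung-style argumentation framework: a set of arguments plus an
# attack relation, given as a set of (attacker, target) pairs.
# Illustrative sketch only; names and structure are our own, not the paper's.

def conflict_free(S, attacks):
    """S is conflict-free if no argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    """S defends a if every attacker of a is attacked by some member of S."""
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((s, x) in attacks for s in S) for x in attackers)

def admissible(S, attacks):
    """Admissible = conflict-free and defends each of its members."""
    return conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S)

# Example: a attacks b, b attacks c. {a, c} is admissible; {c} alone is not,
# because c's attacker b is never counter-attacked.
attacks = {("a", "b"), ("b", "c")}
print(admissible({"a", "c"}, attacks))  # True
print(admissible({"c"}, attacks))       # False
```

Conflict-freeness plays the role of a consistency requirement and defense underlies reinstatement, which is what the paper's incompatibility and reinstatement-violation relations abstract over.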
Addressing maximization bias in reinforcement learning with two-sample testing
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-16 DOI: 10.1016/j.artint.2024.104204
Martin Waltz, Ostap Okhrin

Value-based reinforcement-learning algorithms have shown strong results in games, robotics, and other real-world applications. Overestimation bias is a known threat to these algorithms and can sometimes lead to dramatic performance decreases or even complete algorithmic failure. We frame the bias problem statistically and consider it an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the T-Estimator (TE), based on two-sample testing for the mean, which flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. We also introduce a generalization, termed K-Estimator (KE), that obeys the same bias and variance bounds as the TE and relies on a nearly arbitrary kernel function. We introduce modifications of Q-Learning and the Bootstrapped Deep Q-Network (BDQN) using the TE and the KE, and prove convergence in the tabular setting. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias control and performance potential of the TE and KE.

Citations: 0
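The maximization bias the paper targets is easy to reproduce. The following sketch (our illustration, not the paper's T-Estimator) shows that taking the max of sample means overestimates the MEV even when every action has true mean zero, while a cross-validated "double" estimator, in the spirit of Double Q-learning, removes the upward bias:

```python
import random

# All actions have true expected value 0, so the true MEV is 0.
# Single estimator: max over sample means (biased upward).
# Double estimator: pick the argmax on one half of the data, evaluate it
# on the other half (unbiased selection, at the cost of extra variance).
random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

trials, n_actions, n_samples = 2000, 10, 20
single_total, double_total = 0.0, 0.0
for _ in range(trials):
    data = [[random.gauss(0.0, 1.0) for _ in range(n_samples)]
            for _ in range(n_actions)]
    single_total += max(mean(xs) for xs in data)
    half = n_samples // 2
    best = max(range(n_actions), key=lambda i: mean(data[i][:half]))
    double_total += mean(data[best][half:])

print(f"single-estimator bias: {single_total / trials:+.3f}")  # clearly positive
print(f"double-estimator bias: {double_total / trials:+.3f}")  # near zero
```

The paper's TE/KE estimators sit between these extremes, using the significance level of a two-sample test to tune where on the over/underestimation spectrum the estimate lands.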
Modular control architecture for safe marine navigation: Reinforcement learning with predictive safety filters
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-13 DOI: 10.1016/j.artint.2024.104201
Aksel Vaaler, Svein Jostein Husa, Daniel Menges, Thomas Nakken Larsen, Adil Rasheed

Many autonomous systems are safety-critical, making it essential to have a closed-loop control system that robustly satisfies constraints arising from underlying physical limitations and safety aspects. However, this is often challenging to achieve for real-world systems. For example, autonomous ships at sea have nonlinear and uncertain dynamics and are subject to numerous time-varying environmental disturbances such as waves, currents, and wind. There is increasing interest in using machine-learning-based approaches to adapt these systems to more complex scenarios, but few standard frameworks guarantee the safety and stability of such systems. Recently, predictive safety filters (PSF) have emerged as a promising method to ensure constraint satisfaction in learning-based control, bypassing the need for explicit constraint handling in the learning algorithms themselves. The safety-filter approach leads to a modular separation of the problem, allowing the use of arbitrary control policies in a task-agnostic way. The filter takes in a potentially unsafe control action from the main controller and solves an optimization problem to compute a minimal perturbation of the proposed action that adheres to both physical and safety constraints. In this work, we combine reinforcement learning (RL) with predictive safety filtering in the context of marine navigation and control. The RL agent is trained on path following and safety adherence across a wide range of randomly generated environments, while the predictive safety filter continuously monitors the agent's proposed control actions and modifies them if necessary. The combined PSF/RL scheme is implemented on a simulated model of Cybership II, a miniature replica of a typical supply ship. Safety performance and learning rate are evaluated and compared with those of a standard, non-PSF, RL agent. We demonstrate that the predictive safety filter keeps the vessel safe without impeding the learning rate or performance of the RL agent.

Citations: 0
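The core safety-filter step described above, minimally perturbing a proposed action so that it satisfies the constraints, can be sketched in a drastically simplified form. The example below is our own static illustration, not the paper's predictive, model-based formulation: it handles only box limits plus one linear safety constraint, solved by alternating projections (which here converges to the minimal perturbation); the action names are hypothetical.

```python
# Simplified safety filter: given a proposed action, return a nearby action
# satisfying box limits lo <= a <= hi and a linear safety constraint g.a <= h.
# Our static sketch, not the paper's predictive (receding-horizon) filter.

def project_halfspace(a, g, h):
    """Euclidean projection of point a onto the half-space {x : g.x <= h}."""
    dot = sum(gi * ai for gi, ai in zip(g, a))
    if dot <= h:
        return list(a)
    scale = (dot - h) / sum(gi * gi for gi in g)
    return [ai - scale * gi for ai, gi in zip(a, g)]

def safety_filter(a_proposed, lo, hi, g, h, iters=50):
    """Alternate projections onto the box and the half-space until settled."""
    a = list(a_proposed)
    for _ in range(iters):
        a = [min(max(ai, l), u) for ai, l, u in zip(a, lo, hi)]  # box limits
        a = project_halfspace(a, g, h)                           # safety set
    return a

# Hypothetical (thrust, rudder) command violating the constraint a0 + a1 <= 1:
safe = safety_filter([1.0, 0.8], lo=[0.0, -1.0], hi=[1.0, 1.0],
                     g=[1.0, 1.0], h=1.0)
print(safe)  # approximately [0.6, 0.4]: the closest safe action
```

A real PSF replaces the static constraint set with a prediction of the system's trajectory under its dynamics, which is what lets it certify safety over a horizon rather than instantaneously.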
QCDCL with cube learning or pure literal elimination – What is best?
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-08 DOI: 10.1016/j.artint.2024.104194
Benjamin Böhm, Tomáš Peitl, Olaf Beyersdorff

Quantified conflict-driven clause learning (QCDCL) is one of the main approaches to solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving variants via proof-complexity techniques. Our results show that almost all of the QCDCL variants are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different, orthogonal ways to implement QCDCL in practice.

Citations: 0
Identifying roles of formulas in inconsistency under Priest's minimally inconsistent logic of paradox
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-05 DOI: 10.1016/j.artint.2024.104199
Kedian Mu

It has been increasingly recognized that identifying the roles played by the formulas of a knowledge base in the inconsistency of that base can help us better look inside the inconsistency. However, there are few approaches that identify such roles from the perspective of models in a paraconsistent logic, one of the typical tools used to characterize inconsistency semantically. In this paper, we characterize the role of each formula in the inconsistency arising in a knowledge base from informational as well as causal aspects, in the framework of Priest's minimally inconsistent logic of paradox. First, we identify the causal responsibility of a formula for the inconsistency, based on the counterfactual dependence of the inconsistency on the formula under some contingency in semantics. Then we incorporate the change in semantic information into the framework of causal responsibility to develop the informational responsibility of a formula for the inconsistency, capturing the contribution made by the formula to the inconsistent information. This incorporation makes informational responsibility interpretable from the point of view of causality, and able to capture concisely the role of a formula in inconsistent information. In addition, we propose notions of naive and quasi-naive responsibility as two auxiliaries to describe special relations between inconsistency and formulas in a semantic sense. Some intuitive and interesting properties of the two kinds of responsibility are also discussed.

Citations: 0
Representing states in iterated belief revision
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-05 DOI: 10.1016/j.artint.2024.104200
Paolo Liberatore

Iterated belief revision requires information about the current beliefs. This information is represented by mathematical structures called doxastic states. Most of the literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially. We study this problem for the four most common ways of storing a doxastic state. All four are able to store every doxastic state, but some do so in less space than others. In particular, the explicit representation (an enumeration of the current beliefs) is the most wasteful of space. The level representation (a sequence of propositional formulae) and the natural representation (a history of natural revisions) are more succinct. The lexicographic representation (a history of lexicographic revisions) is more succinct still.

Citations: 0
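The lexicographic representation stores a doxastic state as a history of lexicographic revisions. As a hedged sketch (the operator below is the standard textbook lexicographic revision on a plausibility ordering; the encoding of worlds and states is our own, not the paper's), revising an ordered partition of worlds looks like this:

```python
from itertools import product

# A doxastic state as an ordered partition of worlds, most-plausible level
# first. Lexicographic revision by a formula phi moves every phi-world ahead
# of every non-phi-world, preserving the relative order within each group.
# Illustrative sketch; not code from the paper.

def lexicographic_revise(levels, phi):
    """levels: list of sets of worlds; phi: predicate on worlds."""
    yes = [{w for w in lvl if phi(w)} for lvl in levels]
    no = [{w for w in lvl if not phi(w)} for lvl in levels]
    return [lvl for lvl in yes + no if lvl]  # drop emptied levels

# Worlds are truth assignments to (p, q); initially all equally plausible.
worlds = set(product([False, True], repeat=2))
state = [worlds]
state = lexicographic_revise(state, lambda w: w[0])  # revise by p
state = lexicographic_revise(state, lambda w: w[1])  # then revise by q
beliefs = state[0]  # current beliefs = most plausible worlds
print(beliefs)  # {(True, True)}: the agent now believes both p and q
```

Storing only the history of revision formulas (here, p then q) instead of the resulting partition is exactly what makes the lexicographic representation succinct: the partition can always be replayed from the history.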
On measuring inconsistency in graph databases with regular path constraints
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-02 DOI: 10.1016/j.artint.2024.104197
John Grant, Francesco Parisi

Real-world data are often inconsistent. Although a substantial amount of research has been done on measuring inconsistency, it has concentrated on knowledge bases formalized in propositional logic. Recently, inconsistency measures have been introduced for relational databases. Nowadays, however, real-world information is increasingly represented by graph-based structures, which offer a more intuitive conceptualization than relational ones. In this paper, we explore inconsistency measures for graph databases with regular path constraints, a class of integrity constraints based on a well-known navigational language for graph data. In this context, we define several inconsistency measures dealing with specific elements contributing to inconsistency in graph databases. We also define some rationality postulates that are desirable properties for an inconsistency measure for graph databases. We analyze the compliance of each measure with each postulate and find various degrees of satisfaction; in fact, one of the measures satisfies all the postulates. Finally, we investigate the data and combined complexity of calculating all the measures, as well as the complexity of deciding whether a measure is lower than, equal to, or greater than a given threshold. It turns out that for a majority of the measures these problems are tractable, while the others exhibit different levels of intractability.

Citations: 0
Sample-based bounds for coherent risk measures: Applications to policy synthesis and verification
IF 5.1 | CAS Zone 2 | Computer Science
Artificial Intelligence Pub Date: 2024-08-02 DOI: 10.1016/j.artint.2024.104195
Prithvi Akella, Anushri Dixit, Mohamadreza Ahmadi, Joel W. Burdick, Aaron D. Ames

Autonomous systems are increasingly used in highly variable and uncertain environments, giving rise to the pressing need to consider risk in both the synthesis and verification of policies for these systems. This paper first develops a sample-based method to upper-bound the risk-measure evaluation of a random variable whose distribution is unknown. These bounds permit us to generate high-confidence verification statements for a large class of robotic systems in a sample-efficient manner. Second, we develop a sample-based method to determine solutions to non-convex optimization problems that outperform a large fraction of the decision space of possible solutions. Both sample-based approaches then permit us to rapidly synthesize risk-aware policies that are guaranteed to achieve a minimum level of system performance. To showcase our approach in simulation, we verify a cooperative multi-agent system and develop a risk-aware controller that outperforms the system's baseline controller. Our approach can be extended to account for any g-entropic risk measure.

Citations: 0
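A minimal, hedged illustration of the objects involved (not the paper's bound construction, which adds high-confidence concentration terms): the Conditional Value-at-Risk, a standard coherent risk measure, can be estimated from i.i.d. cost samples as the mean of the worst alpha-fraction of outcomes.

```python
import random

# Plain empirical CVaR estimate from i.i.d. cost samples (higher cost = worse).
# Illustrative only: a verification statement of the kind the paper targets
# would replace this point estimate with a high-confidence upper bound.
random.seed(1)

def empirical_cvar(samples, alpha):
    """Mean of the worst alpha-fraction of cost samples."""
    k = max(1, int(len(samples) * alpha))
    worst = sorted(samples, reverse=True)[:k]
    return sum(worst) / k

costs = [random.gauss(0.0, 1.0) for _ in range(10000)]
print(f"mean cost          : {sum(costs) / len(costs):+.3f}")
print(f"CVaR_0.05 estimate : {empirical_cvar(costs, 0.05):+.3f}")
```

For a standard normal cost the true CVaR at level 0.05 is about 2.06, well above the mean of 0, which is why risk-aware synthesis optimizes the tail rather than the average.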