Latest Articles in Artificial Intelligence

Multi-rank smart reserves: A general framework for selection and matching diversity goals
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-12-16 DOI: 10.1016/j.artint.2024.104274
Haris Aziz, Zhaohong Sun
{"title":"Multi-rank smart reserves: A general framework for selection and matching diversity goals","authors":"Haris Aziz, Zhaohong Sun","doi":"10.1016/j.artint.2024.104274","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104274","url":null,"abstract":"We study a problem where each school has flexible multi-ranked diversity goals, and each student may belong to multiple overlapping types, and consumes only one of the positions reserved for their types. We propose a novel choice function for a school to select students and show that it is the unique rule that satisfies three fundamental properties: maximal diversity, non-wastefulness, and justified envy-freeness. We provide a fast polynomial-time algorithm for our choice function that is based on the Dulmage Mendelsohn Decomposition Theorem as well as new insights into the combinatorial structure of constrained rank maximal matchings. Even for the case of minimum and maximum quotas for types (that capture two ranks), ours is the first known polynomial-time approach to compute an optimally diverse choice outcome. Finally, we prove that the choice function we design for schools, satisfies substitutability and hence can be directly embedded in the generalized deferred acceptance algorithm to achieve strategyproofness and stability. Our algorithms and results have immediate policy implications and directly apply to a variety of scenarios, such as where hiring positions or scarce medical resources need to be allocated while taking into account diversity concerns or ethical principles.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"31 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Out-of-distribution detection by regaining lost clues
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-12-13 DOI: 10.1016/j.artint.2024.104275
Zhilin Zhao, Longbing Cao, Philip S. Yu
{"title":"Out-of-distribution detection by regaining lost clues","authors":"Zhilin Zhao, Longbing Cao, Philip S. Yu","doi":"10.1016/j.artint.2024.104275","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104275","url":null,"abstract":"Out-of-distribution (OOD) detection identifies samples in the test phase that are drawn from distributions distinct from that of training in-distribution (ID) samples for a trained network. According to the information bottleneck, networks that classify tabular data tend to extract labeling information from features with strong associations to ground-truth labels, discarding less relevant labeling cues. This behavior leads to a predicament in which OOD samples with limited labeling information receive high-confidence predictions, rendering the network incapable of distinguishing between ID and OOD samples. Hence, exploring more labeling information from ID samples, which makes it harder for an OOD sample to obtain high-confidence predictions, can address this over-confidence issue on tabular data. Accordingly, we propose a novel transformer chain (TC), which comprises a sequence of dependent transformers that iteratively regain discarded labeling information and integrate all the labeling information to enhance OOD detection. The generalization bound theoretically reveals that TC can balance ID generalization and OOD detection capabilities. Experimental results demonstrate that TC significantly surpasses state-of-the-art methods for OOD detection in tabular data.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"7 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Formal verification and synthesis of mechanisms for social choice
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-12-10 DOI: 10.1016/j.artint.2024.104272
Munyque Mittelmann, Bastien Maubert, Aniello Murano, Laurent Perrussel
{"title":"Formal verification and synthesis of mechanisms for social choice","authors":"Munyque Mittelmann, Bastien Maubert, Aniello Murano, Laurent Perrussel","doi":"10.1016/j.artint.2024.104272","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104272","url":null,"abstract":"Mechanism Design (MD) aims at defining resources allocation protocols that satisfy a predefined set of properties, and Auction Mechanisms are of foremost importance. Core properties of mechanisms, such as strategy-proofness or budget balance, involve: (i) complex strategic concepts such as Nash equilibria, (ii) quantitative aspects such as utilities, and often (iii) imperfect information, with agents' private valuations. We demonstrate that Strategy Logic provides a formal framework fit to model mechanisms and express such properties, and we show that it can be used either to automatically check that a given mechanism satisfies some property (verification), or automatically produce a mechanism that does (synthesis). To do so, we consider a quantitative and variant of Strategy Logic. We first show how to express the implementation of social choice functions. Second, we show how fundamental mechanism properties can be expressed as logical formulas, and thus evaluated by model checking. We then prove that model checking for this particular variant of Strategy Logic can be done in polynomial space. Next, we show how MD can be rephrased as a synthesis problem, where mechanisms are automatically synthesized from a partial or complete logical specification. We solve the automated synthesis of mechanisms in two cases: when the number of actions is bounded, and when agents play in turns. Finally, we provide examples of auction design based for each of these two cases. The benefit of our approach in relation to classical MD is to provide a general framework for addressing a large spectrum of MD problems, which is not tailored to a particular setting or problem.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"20 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EMOA*: A framework for search-based multi-objective path planning
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-12-02 DOI: 10.1016/j.artint.2024.104260
Zhongqiang Ren, Carlos Hernández, Maxim Likhachev, Ariel Felner, Sven Koenig, Oren Salzman, Sivakumar Rathinam, Howie Choset
{"title":"EMOA*: A framework for search-based multi-objective path planning","authors":"Zhongqiang Ren, Carlos Hernández, Maxim Likhachev, Ariel Felner, Sven Koenig, Oren Salzman, Sivakumar Rathinam, Howie Choset","doi":"10.1016/j.artint.2024.104260","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104260","url":null,"abstract":"In the Multi-Objective Shortest Path Problem (MO-SPP), one has to find paths on a graph that simultaneously minimize multiple objectives. It is not guaranteed that there exists a path that minimizes all objectives, and the problem thus aims to find the set of Pareto-optimal paths from the start to the goal vertex. A variety of multi-objective A*-based search approaches have been developed for this purpose. Typically, these approaches maintain a front set at each vertex during the search process to keep track of the Pareto-optimal paths that reach that vertex. Maintaining these front sets becomes burdensome and often slows down the search when there are many Pareto-optimal paths. In this article, we first introduce a framework for MO-SPP with the key procedures related to the front sets abstracted and highlighted, which provides a novel perspective for understanding the existing multi-objective A*-based search algorithms. Within this framework, we develop two different, yet closely related approaches to maintain these front sets efficiently during the search. We show that our approaches can find all cost-unique Pareto-optimal paths, and analyze their runtime complexity. We implement the approaches and compare them against baselines using instances with three, four and five objectives. Our experimental results show that our approaches run up to an order of magnitude faster than the baselines.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"18 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A simple yet effective self-debiasing framework for transformer models
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-12-02 DOI: 10.1016/j.artint.2024.104258
Xiaoyue Wang, Xin Liu, Lijie Wang, Suhang Wu, Jinsong Su, Hua Wu
{"title":"A simple yet effective self-debiasing framework for transformer models","authors":"Xiaoyue Wang, Xin Liu, Lijie Wang, Suhang Wu, Jinsong Su, Hua Wu","doi":"10.1016/j.artint.2024.104258","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104258","url":null,"abstract":"Current Transformer-based natural language understanding (NLU) models heavily rely on dataset biases, while failing to handle real-world out-of-distribution (OOD) instances. Many methods have been proposed to deal with this issue, but they ignore the fact that the features learned in different layers of Transformer-based NLU models are different. In this paper, we first conduct preliminary studies to obtain two conclusions: 1) both low- and high-layer sentence representations encode common biased features during training; 2) the low-layer sentence representations encode fewer unbiased features than the high-layer ones. Based on these conclusions, we propose a simple yet effective self-debiasing framework for Transformer-based NLU models. Concretely, we first stack a classifier on a selected low layer. Then, we introduce a residual connection that feeds the low-layer sentence representation to the top-layer classifier. In this way, the top-layer sentence representation will be trained to ignore the common biased features encoded by the low-layer sentence representation and focus on task-relevant unbiased features. During inference, we remove the residual connection and directly use the top-layer sentence representation to make predictions. Extensive experiments and in-depth analyses on NLU tasks demonstrate the superiority of our framework, achieving a new state-of-the-art (SOTA) on three OOD test sets.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"27 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Kripke-Lewis semantics for belief update and belief revision
IF 14.4, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-11-29 DOI: 10.1016/j.artint.2024.104259
Giacomo Bonanno
{"title":"A Kripke-Lewis semantics for belief update and belief revision","authors":"Giacomo Bonanno","doi":"10.1016/j.artint.2024.104259","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104259","url":null,"abstract":"We provide a new characterization of both belief update and belief revision in terms of a Kripke-Lewis semantics. We consider frames consisting of a set of states, a Kripke belief relation and a Lewis selection function. Adding a valuation to a frame yields a model. Given a model and a state, we identify the initial belief set <ce:italic>K</ce:italic> with the set of formulas that are believed at that state and we identify either the updated belief set <mml:math altimg=\"si1.svg\"><mml:mi>K</mml:mi><mml:mo>⋄</mml:mo><mml:mi>ϕ</mml:mi></mml:math> or the revised belief set <mml:math altimg=\"si2.svg\"><mml:mi>K</mml:mi><mml:mo>⁎</mml:mo><mml:mi>ϕ</mml:mi></mml:math> (prompted by the input represented by formula <ce:italic>ϕ</ce:italic>) as the set of formulas that are the consequent of conditionals that (1) are believed at that state and (2) have <ce:italic>ϕ</ce:italic> as antecedent. We show that this class of models characterizes both the Katsuno-Mendelzon (KM) belief update functions and the Alchourrón, Gärdenfors and Makinson (AGM) belief revision functions, in the following sense: (1) each model gives rise to a partial belief function that can be completed into a full KM/AGM update/revision function, and (2) for every KM/AGM update/revision function there is a model whose associated belief function coincides with it. The difference between update and revision can be reduced to two semantic properties that appear in a stronger form in revision relative to update, thus confirming the finding by Peppas et al. (1996) <ce:cross-ref ref>[30]</ce:cross-ref> that, “for a fixed theory <ce:italic>K</ce:italic>, revising <ce:italic>K</ce:italic> is much the same as updating <ce:italic>K</ce:italic>”. It is argued that the proposed semantic characterization brings into question the common interpretation of belief revision and update as change in beliefs in response to new information.","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"28 1","pages":""},"PeriodicalIF":14.4,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Defying catastrophic forgetting via influence function
IF 5.1, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-11-27 DOI: 10.1016/j.artint.2024.104261
Rui Gao, Weiwei Liu
{"title":"Defying catastrophic forgetting via influence function","authors":"Rui Gao,&nbsp;Weiwei Liu","doi":"10.1016/j.artint.2024.104261","DOIUrl":"10.1016/j.artint.2024.104261","url":null,"abstract":"<div><div>Deep-learning models need to continually accumulate knowledge from tasks, given that the number of tasks are increasing overwhelmingly as the digital world evolves. However, standard deep-learning models are prone to forgetting about previously acquired skills when learning new ones. Fortunately, this catastrophic forgetting problem can be solved by means of continual learning. One popular approach in this vein is regularization-based method which penalizes parameters by giving their importance. However, a formal definition of parameter importance and theoretical analysis of regularization-based methods are elements that remain under-explored. In this paper, we first rigorously define the parameter importance by influence function, then unify the seminal methods (i.e., EWC, SI and MAS) into one whole framework. Two key theoretical results are presented in this work, and extensive experiments are conducted on standard benchmarks, which verify the superior performance of our proposed method.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"339 ","pages":"Article 104261"},"PeriodicalIF":5.1,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142744367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating symbolic reasoning into neural generative models for design generation
IF 5.1, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-11-19 DOI: 10.1016/j.artint.2024.104257
Maxwell J. Jacobson, Yexiang Xue
{"title":"Integrating symbolic reasoning into neural generative models for design generation","authors":"Maxwell J. Jacobson,&nbsp;Yexiang Xue","doi":"10.1016/j.artint.2024.104257","DOIUrl":"10.1016/j.artint.2024.104257","url":null,"abstract":"<div><div>Design generation requires tight integration of neural and symbolic reasoning, as good design must meet explicit user needs and honor implicit rules for aesthetics, utility, and convenience. Current automated design tools driven by neural networks produce appealing designs, but cannot satisfy user specifications and utility requirements. Symbolic reasoning tools, such as constraint programming, cannot perceive low-level visual information in images or capture subtle aspects such as aesthetics. We introduce the Spatial Reasoning Integrated Generator (SPRING) for design generation. SPRING embeds a neural and symbolic integrated spatial reasoning module inside the deep generative network. The spatial reasoning module samples the set of locations of objects to be generated from a backtrack-free distribution. This distribution modifies the implicit preference distribution, which is learned by a recurrent neural network to capture utility and aesthetics. The sampling from the backtrack-free distribution is accomplished by a symbolic reasoning approach, SampleSearch, which zeros out the probability of sampling spatial locations violating explicit user specifications. Embedding symbolic reasoning into neural generation guarantees that the output of SPRING satisfies user requirements. Furthermore, SPRING offers interpretability, allowing users to visualize and diagnose the generation process through the bounding boxes. SPRING is also adept at managing novel user specifications not encountered during its training, thanks to its proficiency in zero-shot constraint transfer. Quantitative evaluations and a human study reveal that SPRING outperforms baseline generative models, excelling in delivering high design quality and better meeting user specifications.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"339 ","pages":"Article 104257"},"PeriodicalIF":5.1,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142744366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lifted action models learning from partial traces
IF 5.1, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-11-15 DOI: 10.1016/j.artint.2024.104256
Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso Emilio Gerevini, Paolo Traverso
{"title":"Lifted action models learning from partial traces","authors":"Leonardo Lamanna ,&nbsp;Luciano Serafini ,&nbsp;Alessandro Saetti ,&nbsp;Alfonso Emilio Gerevini ,&nbsp;Paolo Traverso","doi":"10.1016/j.artint.2024.104256","DOIUrl":"10.1016/j.artint.2024.104256","url":null,"abstract":"<div><div>For applying symbolic planning, there is the necessity of providing the specification of a symbolic action model, which is usually manually specified by a domain expert. However, such an encoding may be faulty due to either human errors or lack of domain knowledge. Therefore, learning the symbolic action model in an automated way has been widely adopted as an alternative to its manual specification. In this paper, we focus on the problem of learning action models offline, from an input set of partially observable plan traces. In particular, we propose an approach to: <em>(i)</em> augment the observability of a given plan trace by applying predefined logical rules; <em>(ii)</em> learn the preconditions and effects of each action in a plan trace from partial observations before and after the action execution. We formally prove that our approach learns action models with fundamental theoretical properties, not provided by other methods. We experimentally show that our approach outperforms a state-of-the-art method on a large set of existing benchmark domains. Furthermore, we compare the effectiveness of the learned action models for solving planning problems and show that the action models learned by our approach are much more effective w.r.t. a state-of-the-art method.<span><span><sup>1</sup></span></span></div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"339 ","pages":"Article 104256"},"PeriodicalIF":5.1,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142643211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human-AI coevolution
IF 5.1, Region 2 (Computer Science)
Artificial Intelligence Pub Date: 2024-11-13 DOI: 10.1016/j.artint.2024.104244
Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, Alessandro Vespignani
{"title":"Human-AI coevolution","authors":"Dino Pedreschi ,&nbsp;Luca Pappalardo ,&nbsp;Emanuele Ferragina ,&nbsp;Ricardo Baeza-Yates ,&nbsp;Albert-László Barabási ,&nbsp;Frank Dignum ,&nbsp;Virginia Dignum ,&nbsp;Tina Eliassi-Rad ,&nbsp;Fosca Giannotti ,&nbsp;János Kertész ,&nbsp;Alistair Knott ,&nbsp;Yannis Ioannidis ,&nbsp;Paul Lukowicz ,&nbsp;Andrea Passarella ,&nbsp;Alex Sandy Pentland ,&nbsp;John Shawe-Taylor ,&nbsp;Alessandro Vespignani","doi":"10.1016/j.artint.2024.104244","DOIUrl":"10.1016/j.artint.2024.104244","url":null,"abstract":"<div><div>Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users' choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: <em>(i)</em> outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; <em>(ii)</em> propose a reflection at the intersection between complexity science, AI and society; <em>(iii)</em> provide real-world examples for different human-AI ecosystems; and <em>(iv)</em> illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"339 ","pages":"Article 104244"},"PeriodicalIF":5.1,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142643212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0