AI Magazine — Latest Articles

What is reproducibility in artificial intelligence and machine learning research?
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-04-18 DOI: 10.1002/aaai.70004
Abhyuday Desai, Mohamed Abdelhamid, Nakul R. Padalkar
Abstract: In the rapidly evolving fields of artificial intelligence (AI) and machine learning (ML), the reproducibility crisis underscores the urgent need for clear validation methodologies to maintain scientific integrity and encourage advancement. The crisis is compounded by the prevalent confusion over validation terminology. In response to this challenge, we introduce a framework that clarifies the roles and definitions of key validation efforts: repeatability, dependent and independent reproducibility, and direct and conceptual replicability. This structured framework aims to provide AI/ML researchers with the necessary clarity on these essential concepts, facilitating the appropriate design, conduct, and interpretation of validation studies. By articulating the nuances and specific roles of each type of validation study, we aim to enhance the reliability and trustworthiness of research findings and support the community's efforts to address reproducibility challenges effectively.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70004
Citations: 0
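The distinctions this framework draws can be made concrete in code. In the sketch below (an invented toy pipeline, not an example from the paper), repeatability means re-running the same code with the same seed yields bit-identical results, while a reproduction attempt that varies a factor (here, only the seed) is expected to agree approximately rather than exactly:

```python
import numpy as np

def train(seed):
    """Stand-in for a full ML pipeline: fit least squares on synthetic data.

    The seed controls every source of randomness (data sampling and noise),
    so a run is fully deterministic given the seed.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.1, size=100)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w_hat

# Repeatability: same code, same environment, same seed -> identical output.
assert np.array_equal(train(42), train(42))

# A reproduction attempt varies something (here, only the seed); results
# should then agree approximately, not bit-for-bit.
assert np.allclose(train(42), train(7), atol=0.1)
```

In practice the varied factor is the team, hardware, data, or implementation rather than a seed; the point is that the expected degree of agreement differs by validation type.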
Open issues in open world learning
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-04-14 DOI: 10.1002/aaai.70001
Steve Cruz, Katarina Doctor, Christopher Funk, Walter Scheirer
Abstract: Meaningful progress has been made in open world learning (OWL), enhancing the ability of agents to detect, characterize, and incrementally learn novelty in dynamic environments. However, novelty remains a persistent challenge for agents relying on state-of-the-art learning algorithms. This article considers the current state of OWL, drawing on insights from a recent DARPA research program on this topic. We identify open issues that impede further advancement, spanning theory, design, and evaluation. In particular, we emphasize the challenges posed by dynamic scenarios, which must be understood to ensure the viability of agents designed for real-world environments. The article provides suggestions for setting a new research agenda that effectively addresses these open issues.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70001
Citations: 0
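One recurring OWL building block is novelty detection: deciding whether an input belongs to any known class before attempting to classify it. The sketch below is purely illustrative (the class names, nearest-centroid rule, and threshold `tau` are all assumptions, not taken from the article); it flags inputs that lie far from every known class:

```python
import numpy as np

rng = np.random.default_rng(1)
# "Known" classes seen during training, summarized here by their centroids.
known = {
    "cat": rng.normal(0.0, 1.0, size=(20, 2)) + np.array([0.0, 0.0]),
    "dog": rng.normal(0.0, 1.0, size=(20, 2)) + np.array([6.0, 0.0]),
}
centroids = {c: pts.mean(axis=0) for c, pts in known.items()}

def classify_or_flag(x, tau=3.0):
    """Nearest-centroid classifier that flags far-away inputs as novel."""
    label, dist = min(
        ((c, np.linalg.norm(x - m)) for c, m in centroids.items()),
        key=lambda t: t[1],
    )
    return label if dist < tau else "novel"

print(classify_or_flag(np.array([0.2, -0.1])))   # close to the "cat" cluster
print(classify_or_flag(np.array([20.0, 20.0])))  # far from every known class
```

An open world agent would go further than this: characterize the novelty and incrementally learn it as a new class rather than merely flagging it.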
Reproducibility in machine-learning-based research: Overview, barriers, and drivers
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-04-14 DOI: 10.1002/aaai.70002
Harald Semmelrock, Tony Ross-Hellauer, Simone Kopeinik, Dieter Theiler, Armin Haberl, Stefan Thalmann, Dominik Kowald
Abstract: Many research fields are currently reckoning with issues of poor levels of reproducibility. Some label it a "crisis," and research employing or building machine learning (ML) models is no exception. Issues including lack of transparency, data, or code, poor adherence to standards, and the sensitivity of ML training conditions mean that many papers are not even reproducible in principle. Where they are, reproducibility experiments have found worryingly low degrees of similarity with the original results. Despite previous appeals from ML researchers on this topic and various initiatives, from conference reproducibility tracks to the ACM's new Emerging Interest Group on Reproducibility and Replicability, we contend that the general community continues to take this issue too lightly. Poor reproducibility threatens trust in and the integrity of research results. Therefore, in this article, we lay out a new perspective on the key barriers and drivers (both procedural and technical) to increased reproducibility at various levels (methods, code, data, and experiments). We then map the drivers to the barriers to give concrete advice on strategies for researchers to mitigate reproducibility issues in their own work, to lay out key areas where further research is needed, and to further ignite discussion on the threat presented by these urgent issues.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70002
Citations: 0
PADTHAI-MM: Principles-based approach for designing trustworthy, human-centered AI using the MAST methodology
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-03-18 DOI: 10.1002/aaai.70000
Myke C. Cohen, Nayoung Kim, Yang Ba, Anna Pan, Shawaiz Bhatti, Pouria Salehi, James Sung, Erik Blasch, Mickey V. Mancenido, Erin K. Chiou
Abstract: Despite an extensive body of literature on trust in technology, designing trustworthy AI systems for high-stakes decision domains remains a significant challenge. Widely used system design guidelines and tools are rarely attuned to domain-specific trustworthiness principles. In this study, we introduce a design framework to address this gap within intelligence analytic tasks, called the Principles-based Approach for Designing Trustworthy, Human-centered AI using the MAST Methodology (PADTHAI-MM). PADTHAI-MM builds on the Multisource AI Scorecard Table (MAST), an AI decision support system evaluation tool designed in accordance with the U.S. Intelligence Community's standards for system trustworthiness. We demonstrate PADTHAI-MM in our development of the Reporting Assistant for Defense and Intelligence Tasks (READIT), a research platform that leverages data visualizations and natural language processing-based text analysis to emulate AI-enabled intelligence reporting aids. To empirically assess the efficacy of PADTHAI-MM, we developed two versions of READIT for comparison: a "High-MAST" version, which incorporates AI contextual information and explanations, and a "Low-MAST" version, designed to be akin to inscrutable "black box" AI systems. Through an iterative design process guided by stakeholder feedback, our multidisciplinary design team developed prototypes that were evaluated by experienced intelligence analysts. The results substantially supported the viability of PADTHAI-MM for designing for system trustworthiness in this task domain. We also explored the relationship between analysts' MAST ratings and three theoretical categories of information known to impact trust: process, purpose, and performance. Overall, our study supports the practical and theoretical viability of PADTHAI-MM as an approach to designing trustable AI systems.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70000
Citations: 0
What AIs are not learning (and why)
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-03-03 DOI: 10.1002/aaai.12213
Mark Stefik
Abstract: Today's robots do not yet learn the general skills necessary to provide home care, to serve as nursing assistants, to interact with people, or to do household chores nearly as well as people do. Addressing the aspirational goal of creating service robots requires improving how they are created. Today's mainstream AIs are not created by agents learning from experiences doing tasks in real-world contexts and interacting with people. Today's robots do not learn by sensing, acting, doing experiments, and collaborating. Future robots will need to learn from such experiences to be ready for robust deployment in human service applications. This paper investigates what aspirational future autonomous human-compatible service robots will need to know. It recommends developing experiential (robotic) foundation models (FMs) for bootstrapping them.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12213
Citations: 0
Fairness amidst non-IID graph data: A literature review
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-01-28 DOI: 10.1002/aaai.12212
Wenbin Zhang, Shuigeng Zhou, Toby Walsh, Jeremy C. Weiss
Abstract: The growing importance of understanding and addressing algorithmic bias in artificial intelligence (AI) has led to a surge in research on AI fairness, which often assumes that the underlying data are independent and identically distributed (IID). However, real-world data frequently exist in non-IID graph structures that capture connections among individual units. To effectively mitigate bias in AI systems, it is essential to bridge the gap between the traditional fairness literature, designed for IID data, and the prevalence of non-IID graph data. This survey reviews recent advancements in fairness amidst non-IID graph data, including the newly introduced fair graph generation and the commonly studied fair graph classification. In addition, available datasets and evaluation metrics for future research are identified, the limitations of existing work are highlighted, and promising future directions are proposed.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12212
Citations: 0
Beyond scaleup: Knowledge-aware parsimony learning from deep networks
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-01-28 DOI: 10.1002/aaai.12211
Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang
Abstract: The brute-force scaleup of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of relying purely on scaleup. This approach allows us to build a framework that uses such knowledge as "building blocks" to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that typically follow the scaling law. We also demonstrate our framework in AI for science, specifically on the problem of drug-drug interaction prediction. We hope our research can foster more diverse technical roadmaps in the era of foundation models.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12211
Citations: 0
The role and significance of state-building as ensuring national security in the context of artificial intelligence development
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-01-10 DOI: 10.1002/aaai.12207
Vitaliy Gumenyuk, Anatolii Nikitin, Oleksandr Bondar, Iaroslav Zhydovtsev, Hanna Yermakova
Abstract: Artificial intelligence (AI) has emerged as a major technology and represents a fundamental, revolutionary innovation of our time with the potential to significantly change the global scenario. As AI develops further, state-building plays a central role in ensuring national security: countries are tasked with developing legal frameworks for the development and application of AI, and governments should commit resources to AI research and development to ensure access to cutting-edge technology. Civil society and public engagement also exert critical influence on shaping AI policy. Civic organizations can help raise public awareness of the risks and opportunities associated with AI, ensure transparency and accountability in government actions, and advocate for the creation of responsible AI policies, while public participation can help governments understand citizens' aspirations regarding AI policy. This study explores the role and importance of state-building in ensuring national security in the context of AI development, and examines how civil society and public participation can effectively shape AI policy. It considers the potential and the threats that AI poses to national security, and the strategies that countries can adopt to ensure security in this area. Based on the research findings, recommendations are made for governments and civil society to improve the effectiveness of public participation in formulating AI policies.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12207
Citations: 0
A survey of security and privacy issues of machine unlearning
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-01-10 DOI: 10.1002/aaai.12209
Aobo Chen, Yangyi Li, Chenxu Zhao, Mengdi Huai
Abstract: Machine unlearning is a cutting-edge technology that embodies the privacy-law principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch, and it has gained significant attention in the field of artificial intelligence in recent years. However, machine unlearning research carries inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of security and privacy issues associated with machine unlearning, offering a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, in which adversaries exploit vulnerabilities in the unlearning process to compromise the security of ML models. We then conduct a thorough examination of the privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for the security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12209
Citations: 0
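The goal described here, removing a record's influence without retraining from scratch, is typically evaluated against the trivial but expensive baseline of exact unlearning: retraining on the remaining data. A minimal sketch with an invented ridge-regression model and record index (not a method from the survey):

```python
import numpy as np

def fit(X, y, lam=1e-3):
    """Ridge-regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([2.0, 0.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

w_full = fit(X, y)

# Exact unlearning baseline: retrain from scratch without record 17.
keep = np.arange(len(y)) != 17
w_unlearned = fit(X[keep], y[keep])

# The deleted record now has provably zero influence on the model; this is
# the gold standard that approximate unlearning methods are compared against.
print(np.abs(w_full - w_unlearned).max())
```

Notably, the difference between the models before and after deletion can itself leak information about the deleted record, which is one class of privacy risk that work in this area examines.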
Geometric Machine Learning
IF 2.5 · Q4, Computer Science
AI Magazine Pub Date: 2025-01-10 DOI: 10.1002/aaai.12210
Melanie Weber
Abstract: A cornerstone of machine learning is the identification and exploitation of structure in high-dimensional data. While classical approaches assume that data lies in a high-dimensional Euclidean space, geometric machine learning methods are designed for non-Euclidean data, including graphs, strings, and matrices, or data characterized by symmetries inherent in the underlying system. In this article, we review geometric approaches for uncovering and leveraging structure in data, and how an understanding of data geometry can lead to the development of more effective machine learning algorithms with provable guarantees.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12210
Citations: 0
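A minimal numeric illustration of why the Euclidean assumption matters (a generic textbook example, not one from the article): for data on a curved manifold, straight-line Euclidean distance systematically understates the intrinsic distance along the manifold.

```python
import numpy as np

# Points on the unit circle: data on a curved 1-D manifold embedded in R^2.
a, b = 0.0, np.pi / 2
p = np.array([np.cos(a), np.sin(a)])
q = np.array([np.cos(b), np.sin(b)])

chord = np.linalg.norm(p - q)  # straight-line (Euclidean) distance in R^2
geodesic = abs(b - a)          # intrinsic arc-length distance on the circle

print(chord, geodesic)  # chord = sqrt(2) ~ 1.414 < geodesic = pi/2 ~ 1.571
```

Geometric ML methods replace Euclidean primitives (distances, means, convolutions) with intrinsic counterparts suited to the data's manifold, graph, or symmetry structure.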