AI Magazine | Pub Date: 2025-03-18 | DOI: 10.1002/aaai.70000
PADTHAI-MM: Principles-based approach for designing trustworthy, human-centered AI using the MAST methodology
Myke C. Cohen, Nayoung Kim, Yang Ba, Anna Pan, Shawaiz Bhatti, Pouria Salehi, James Sung, Erik Blasch, Mickey V. Mancenido, Erin K. Chiou

Despite an extensive body of literature on trust in technology, designing trustworthy AI systems for high-stakes decision domains remains a significant challenge. Widely used system design guidelines and tools are rarely attuned to domain-specific trustworthiness principles. In this study, we introduce a design framework to address this gap within intelligence analytic tasks, called the Principles-based Approach for Designing Trustworthy, Human-centered AI using the MAST Methodology (PADTHAI-MM). PADTHAI-MM builds on the Multisource AI Scorecard Table (MAST), an AI decision support system evaluation tool designed in accordance with the U.S. Intelligence Community's standards for system trustworthiness. We demonstrate PADTHAI-MM in our development of the Reporting Assistant for Defense and Intelligence Tasks (READIT), a research platform that leverages data visualizations and natural language processing-based text analysis to emulate AI-enabled intelligence reporting aids. To empirically assess the efficacy of PADTHAI-MM, we developed two versions of READIT for comparison: a "High-MAST" version, which incorporates AI contextual information and explanations, and a "Low-MAST" version, designed to be akin to inscrutable "black box" AI systems. Through an iterative design process guided by stakeholder feedback, our multidisciplinary design team developed prototypes that were evaluated by experienced intelligence analysts. Results substantially supported the viability of PADTHAI-MM in designing for system trustworthiness in this task domain. We also explored the relationship between analysts' MAST ratings and three theoretical categories of information known to impact trust: process, purpose, and performance. Overall, our study supports the practical and theoretical viability of PADTHAI-MM as an approach to designing trustable AI systems.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70000

AI Magazine | Pub Date: 2025-03-03 | DOI: 10.1002/aaai.12213
What AIs are not learning (and why)
Mark Stefik

Today's robots do not yet learn the general skills that are necessary to provide home care, to be nursing assistants, to interact with people, or to do household chores nearly as well as people do. Addressing the aspirational goal of creating service robots requires improving how they are created. Today's mainstream AIs are not created by agents learning from experiences doing tasks in real-world contexts and interacting with people. Today's robots do not learn by sensing, acting, doing experiments, and collaborating. Future robots will need to learn from such experiences in order to be ready for robust deployment in human service applications. This paper investigates what aspirational future autonomous human-compatible service robots will need to know. It recommends developing experiential (robotic) foundation models (FMs) for bootstrapping them.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12213

AI Magazine | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12212
Fairness amidst non-IID graph data: A literature review
Wenbin Zhang, Shuigeng Zhou, Toby Walsh, Jeremy C. Weiss

The growing importance of understanding and addressing algorithmic bias in artificial intelligence (AI) has led to a surge in research on AI fairness, which often assumes that the underlying data are independent and identically distributed (IID). However, real-world data frequently exist in non-IID graph structures that capture connections among individual units. To effectively mitigate bias in AI systems, it is essential to bridge the gap between traditional fairness literature, designed for IID data, and the prevalence of non-IID graph data. This survey reviews recent advancements in fairness amidst non-IID graph data, including the newly introduced fair graph generation and the commonly studied fair graph classification. In addition, available datasets and evaluation metrics for future research are identified, the limitations of existing work are highlighted, and promising future directions are proposed.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12212

AI Magazine | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12211
Beyond scaleup: Knowledge-aware parsimony learning from deep networks
Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang

The brute-force scaleup of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of relying purely on scaleup. This approach allows us to build a framework that uses such knowledge as "building blocks" to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that typically follow the scaling law. We also demonstrate our framework in AI for science, specifically on the problem of drug-drug interaction prediction. We hope our research can foster more diverse technical roadmaps in the era of foundation models.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12211

AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12207
The role and significance of state-building as ensuring national security in the context of artificial intelligence development
Vitaliy Gumenyuk, Anatolii Nikitin, Oleksandr Bondar, Iaroslav Zhydovtsev, Hanna Yermakova

Artificial intelligence (AI) has emerged as a major technology and represents a fundamental, revolutionary innovation of our time, with the potential to significantly change the global landscape. As AI develops further, state-building plays a central role in ensuring national security: countries must shoulder the responsibility of establishing legal frameworks to oversee the development and application of AI, and governments should commit resources to AI research and development to secure access to cutting-edge technology. Civil society and public engagement exert critical influence on the shaping of AI policy. Civic organizations can raise public awareness of the risks and opportunities of AI, press for transparency and accountability in governmental actions, and advocate for responsible AI policies, while public participation can help governments understand citizens' aspirations regarding AI policy. This study explores the role and importance of state-building in ensuring national security in the context of AI development, and examines how civil society and public participation can effectively shape AI policy. The topic offers diverse research and analytical opportunities that enable a deeper understanding of the interactions and mutual influences between statehood and artificial intelligence in the context of national security. The study examines the opportunities and threats that artificial intelligence poses to national security and considers strategies that countries can adopt to ensure security in this area. Based on the research findings, recommendations and suggestions are made for governments and civil society to improve the effectiveness of public participation in formulating AI policies.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12207

AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12209
A survey of security and privacy issues of machine unlearning
Aobo Chen, Yangyi Li, Chenxu Zhao, Mengdi Huai

Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch, and it has gained significant attention in the field of artificial intelligence in recent years. However, machine unlearning research carries inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of security and privacy issues associated with machine unlearning, offering a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, where adversaries exploit vulnerabilities in the unlearning process to compromise the security of ML models. We then conduct a thorough examination of the privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for the security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12209

AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12210
Geometric Machine Learning
Melanie Weber

A cornerstone of machine learning is the identification and exploitation of structure in high-dimensional data. While classical approaches assume that data lies in a high-dimensional Euclidean space, geometric machine learning methods are designed for non-Euclidean data, including graphs, strings, and matrices, or data characterized by symmetries inherent in the underlying system. In this article, we review geometric approaches for uncovering and leveraging structure in data and how an understanding of data geometry can lead to the development of more effective machine learning algorithms with provable guarantees.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12210

AI Magazine | Pub Date: 2025-01-08 | DOI: 10.1002/aaai.12208
On the reliability of Large Language Models to misinformed and demographically informed prompts
Toluwani Aremu, Oluwakemi Akinwehinmi, Chukwuemeka Nwagu, Syed Ishtiaque Ahmed, Rita Orji, Pedro Arnau Del Amo, Abdulmotaleb El Saddik

We investigate the behavior and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions with demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to answer such closed-ended questions correctly. However, qualitative insights gathered from domain experts show that there are still concerns regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure that they serve as a beneficial augmentation of human expertise rather than an autonomous solution. Dataset and assessment information can be found at https://github.com/tolusophy/Edge-of-Tomorrow.

AI Magazine, Volume 46, Issue 1. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12208
{"title":"Leveraging AI to improve health information access in the World's largest maternal mobile health program","authors":"Shresth Verma, Arshika Lalan, Paula Rodriguez Diaz, Panayiotis Danassis, Amrita Mahale, Kumar Madhu Sudan, Aparna Hegde, Milind Tambe, Aparna Taneja","doi":"10.1002/aaai.12206","DOIUrl":"https://doi.org/10.1002/aaai.12206","url":null,"abstract":"<p>Harnessing the wide-spread availability of cell phones, many nonprofits have launched mobile health (mHealth) programs to deliver information via voice or text to beneficiaries in underserved communities, with maternal and infant health being a key area of such mHealth programs. Unfortunately, dwindling listenership is a major challenge, requiring targeted interventions using limited resources. This paper focuses on Kilkari, the world's largest mHealth program for maternal and child care – with over 3 million active subscribers at a time – launched by India's Ministry of Health and Family Welfare (MoHFW) and run by the non-profit ARMMAN. We present a system called CHAHAK that aims to reduce automated dropouts as well as boost engagement with the program through the strategic allocation of interventions to beneficiaries. Past work in a similar domain has focused on a much smaller scale mHealth program and used markovian restless multiarmed bandits to optimize a single limited intervention resource. However, this paper demonstrates the challenges in adopting a markovian approach in Kilkari; therefore, CHAHAK instead relies on non-markovian time-series restless bandits and optimizes multiple interventions to improve listenership. We use real Kilkari data from the Odisha state in India to show CHAHAK's effectiveness in harnessing multiple interventions to boost listenership, benefiting marginalized communities. When deployed CHAHAK will assist the largest maternal mHealth program to date.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"526-536"},"PeriodicalIF":2.5,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12206","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142860681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

AI Magazine | Pub Date: 2024-11-27 | DOI: 10.1002/aaai.12205
Introduction to the special issue on Innovative Applications of Artificial Intelligence (IAAI 2024)
Alexander Wong, Yuhao Chen, Jan Seyler

This special issue of AI Magazine covers select applications from the Innovative Applications of Artificial Intelligence (IAAI) conference held in 2024 in Vancouver, Canada. The articles address a broad range of very challenging issues and contain great lessons for AI researchers and application developers.

AI Magazine, Volume 45, Issue 4, pages 440-442. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12205