AI Magazine | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12212
Wenbin Zhang, Shuigeng Zhou, Toby Walsh, Jeremy C. Weiss
{"title":"Fairness amidst non-IID graph data: A literature review","authors":"Wenbin Zhang, Shuigeng Zhou, Toby Walsh, Jeremy C. Weiss","doi":"10.1002/aaai.12212","DOIUrl":"https://doi.org/10.1002/aaai.12212","url":null,"abstract":"<p>The growing importance of understanding and addressing algorithmic bias in artificial intelligence (AI) has led to a surge in research on AI fairness, which often assumes that the underlying data are independent and identically distributed (IID). However, real-world data frequently exist in non-IID graph structures that capture connections among individual units. To effectively mitigate bias in AI systems, it is essential to bridge the gap between traditional fairness literature, designed for IID data, and the prevalence of non-IID graph data. This survey reviews recent advancements in fairness amidst non-IID graph data, including the newly introduced fair graph generation and the commonly studied fair graph classification. In addition, available datasets and evaluation metrics for future research are identified, the limitations of existing work are highlighted, and promising future directions are proposed.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12212","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12211
Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang
{"title":"Beyond scaleup: Knowledge-aware parsimony learning from deep networks","authors":"Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang","doi":"10.1002/aaai.12211","DOIUrl":"https://doi.org/10.1002/aaai.12211","url":null,"abstract":"<p>The brute-force scaleup of training datasets, learnable parameters and computation power, has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of purely relying on scaleup. This approach allows us to build a framework that uses this knowledge as “building blocks” to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that typically follow the scaling law. We also demonstrate our framework in AI for science, specifically in the problem of drug-drug interaction prediction. We hope our research can foster more diverse technical roadmaps in the era of foundation models.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12211","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12207
Vitaliy Gumenyuk, Anatolii Nikitin, Oleksandr Bondar, Iaroslav Zhydovtsev, Hanna Yermakova
{"title":"The role and significance of state-building as ensuring national security in the context of artificial intelligence development","authors":"Vitaliy Gumenyuk, Anatolii Nikitin, Oleksandr Bondar, Iaroslav Zhydovtsev, Hanna Yermakova","doi":"10.1002/aaai.12207","DOIUrl":"https://doi.org/10.1002/aaai.12207","url":null,"abstract":"<p>Artificial intelligence (AI) has emerged as a major technology and represents a fundamental and revolutionary innovation of our time that has the potential to significantly change the global scenario. In the context of further development of artificial intelligence, state establishment plays a central role in ensuring national security. Countries are tasked with developing legal frameworks for the development and application of AI. Additionally, governments should commit resources to AI research and development to ensure access to cutting-edge technology. As AI continues to evolve, nation-building remains crucial for the protection of national security. Countries must shoulder the responsibility of establishing legal structures to supervise the progression and implementation of artificial intelligence. Investing in AI research and development is essential to secure access to cutting-edge technology. Gracious society and open engagement apply critical impact on forming AI approaches. Civic organizations can contribute to expanding open mindfulness of the related dangers and openings of AI, guaranteeing straightforwardness and responsibility in legislative activities, and pushing for the creation of capable AI approaches. Open interest can help governments in comprehending the yearnings of citizens with respect to AI approaches. This study explores the role and importance of nation-building in ensuring national security in the context of the development of artificial intelligence. It also examines how civil society and public participation can effectively shape AI policy. The topic offers diverse research and analytical opportunities that enable a deeper understanding of the interactions and mutual influences between statehood and artificial intelligence in the context of ensuring national security. It examines the potential and threats that artificial intelligence poses to national security and considers strategies that countries can adopt to ensure security in this area. Based on the research findings, recommendations and suggestions are made for governments and civil society to improve the effectiveness of public participation in formulating AI policies.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12207","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12209
Aobo Chen, Yangyi Li, Chenxu Zhao, Mengdi Huai
{"title":"A survey of security and privacy issues of machine unlearning","authors":"Aobo Chen, Yangyi Li, Chenxu Zhao, Mengdi Huai","doi":"10.1002/aaai.12209","DOIUrl":"https://doi.org/10.1002/aaai.12209","url":null,"abstract":"<p>Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch and has gained significant attention in the field of artificial intelligence in recent years. However, the development of machine unlearning research is associated with inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of security and privacy issues associated with machine unlearning by providing a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, where adversaries exploit vulnerabilities in the unlearning process to compromise the security of machine learning (ML) models. We then conduct a thorough examination of privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12209","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12210
Melanie Weber
{"title":"Geometric Machine Learning","authors":"Melanie Weber","doi":"10.1002/aaai.12210","DOIUrl":"https://doi.org/10.1002/aaai.12210","url":null,"abstract":"<p>A cornerstone of machine learning is the identification and exploitation of structure in high-dimensional data. While classical approaches assume that data lies in a high-dimensional Euclidean space, <i>geometric machine learning</i> methods are designed for non-Euclidean data, including graphs, strings, and matrices, or data characterized by symmetries inherent in the underlying system. In this article, we review geometric approaches for uncovering and leveraging structure in data and how an understanding of data geometry can lead to the development of more effective machine learning algorithms with provable guarantees.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12210","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2025-01-08 | DOI: 10.1002/aaai.12208
Toluwani Aremu, Oluwakemi Akinwehinmi, Chukwuemeka Nwagu, Syed Ishtiaque Ahmed, Rita Orji, Pedro Arnau Del Amo, Abdulmotaleb El Saddik
{"title":"On the reliability of Large Language Models to misinformed and demographically informed prompts","authors":"Toluwani Aremu, Oluwakemi Akinwehinmi, Chukwuemeka Nwagu, Syed Ishtiaque Ahmed, Rita Orji, Pedro Arnau Del Amo, Abdulmotaleb El Saddik","doi":"10.1002/aaai.12208","DOIUrl":"https://doi.org/10.1002/aaai.12208","url":null,"abstract":"<p>We investigate and observe the behavior and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions with demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to give the right answers to these close-ended questions. However, the qualitative insights, gathered from domain experts, shows that there are still concerns regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution. Dataset and assessment information can be found at https://github.com/tolusophy/Edge-of-Tomorrow.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12208","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging AI to improve health information access in the World's largest maternal mobile health program","authors":"Shresth Verma, Arshika Lalan, Paula Rodriguez Diaz, Panayiotis Danassis, Amrita Mahale, Kumar Madhu Sudan, Aparna Hegde, Milind Tambe, Aparna Taneja","doi":"10.1002/aaai.12206","DOIUrl":"https://doi.org/10.1002/aaai.12206","url":null,"abstract":"<p>Harnessing the wide-spread availability of cell phones, many nonprofits have launched mobile health (mHealth) programs to deliver information via voice or text to beneficiaries in underserved communities, with maternal and infant health being a key area of such mHealth programs. Unfortunately, dwindling listenership is a major challenge, requiring targeted interventions using limited resources. This paper focuses on Kilkari, the world's largest mHealth program for maternal and child care – with over 3 million active subscribers at a time – launched by India's Ministry of Health and Family Welfare (MoHFW) and run by the non-profit ARMMAN. We present a system called CHAHAK that aims to reduce automated dropouts as well as boost engagement with the program through the strategic allocation of interventions to beneficiaries. Past work in a similar domain has focused on a much smaller scale mHealth program and used markovian restless multiarmed bandits to optimize a single limited intervention resource. However, this paper demonstrates the challenges in adopting a markovian approach in Kilkari; therefore, CHAHAK instead relies on non-markovian time-series restless bandits and optimizes multiple interventions to improve listenership. We use real Kilkari data from the Odisha state in India to show CHAHAK's effectiveness in harnessing multiple interventions to boost listenership, benefiting marginalized communities. When deployed CHAHAK will assist the largest maternal mHealth program to date.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"526-536"},"PeriodicalIF":2.5,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12206","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142860681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2024-11-27 | DOI: 10.1002/aaai.12205
Alexander Wong, Yuhao Chen, Jan Seyler
{"title":"Introduction to the special issue on Innovative Applications of Artificial Intelligence (IAAI 2024)","authors":"Alexander Wong, Yuhao Chen, Jan Seyler","doi":"10.1002/aaai.12205","DOIUrl":"https://doi.org/10.1002/aaai.12205","url":null,"abstract":"<p>This special issue of <i>AI Magazine</i> covers select applications from the Innovative Applications of Artificial Intelligence (IAAI) conference held in 2024 in Vancouver, Canada. The articles address a broad range of very challenging issues and contain great lessons for AI researchers and application developers.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"440-442"},"PeriodicalIF":2.5,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12205","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Magazine | Pub Date: 2024-10-21 | DOI: 10.1002/aaai.12204
Ashiqur R. KhudaBukhsh
{"title":"Deceptively simple: An outsider's perspective on natural language processing","authors":"Ashiqur R. KhudaBukhsh","doi":"10.1002/aaai.12204","DOIUrl":"https://doi.org/10.1002/aaai.12204","url":null,"abstract":"<p>This article highlights a collection of ideas with an underlying deceptive simplicity that addresses several practical challenges in computational social science and generative AI safety. These ideas lead to (1) an interpretable and quantifiable framework for political polarization; (2) a language identifier robust to noisy social media text settings; (3) a cross-lingual semantic sampler that harnesses code-switching; and (4) a bias audit framework that uncovers shocking racism, antisemitism, misogyny, and other biases in a wide suite of large language models.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"569-582"},"PeriodicalIF":2.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12204","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-assisted research collaboration with open data for fair and effective response to call for proposals","authors":"Siva Likitha Valluru, Michael Widener, Biplav Srivastava, Sriraam Natarajan, Sugata Gangopadhyay","doi":"10.1002/aaai.12203","DOIUrl":"https://doi.org/10.1002/aaai.12203","url":null,"abstract":"<p>Building teams and promoting collaboration are two very common business activities. An example of these are seen in the <i>TeamingForFunding</i> problem, where research institutions and researchers are interested to identify collaborative opportunities when applying to funding agencies in response to latter's calls for proposals. We describe a novel <i>deployed</i> system to recommend teams using a variety of Artificial Intelligence (AI) methods, such that (1) each team achieves the highest possible skill coverage that is demanded by the opportunity, and (2) the workload of distributing the opportunities is balanced among the candidate members. We address these questions by extracting skills latent in open data of proposal calls (demand) and researcher profiles (supply), normalizing them using taxonomies, and creating efficient algorithms that match demand to supply. We create teams to maximize goodness along a novel metric balancing short- and long-term objectives. We evaluate our system in two diverse settings in US and India of researchers and proposal calls, at two different time instants about 1 year apart (total 4 settings), to establish generality of our approach, and deploy it at a major US university. We validate the effectiveness of our algorithms (1) quantitatively, by evaluating the recommended teams using a goodness score and find that more informed methods lead to recommendations of smaller number of teams and higher goodness, and (2) qualitatively, by conducting a large-scale user study at a college-wide level, and demonstrate that users overall found the tool very useful and relevant.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"457-471"},"PeriodicalIF":2.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12203","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}