{"title":"Multimodal AI teacher: Integrating edge computing and reasoning models for enhanced student error analysis","authors":"Tianlong Xu, Yi-Fan Zhang, Zhendong Chu, Qingsong Wen","doi":"10.1002/aaai.70053","DOIUrl":"https://doi.org/10.1002/aaai.70053","url":null,"abstract":"<p>Students frequently make mistakes while solving mathematical problems, and traditional error correction methods are both time-consuming and labor-intensive. This paper introduces an innovative Virtual AI Teacher system (VATE) designed to autonomously analyze and correct student Errors. Leveraging advanced large language models (LLMs), the system utilizes student drafts as a primary source for error analysis, thereby enhancing the understanding of the student's learning process. It incorporates sophisticated prompt engineering and maintains an error pool to reduce computational overhead. The AI-driven system also features a real-time dialogue component for efficient student interaction. Our approach demonstrates significant advantages over traditional and machine learning-based error correction methods, including reduced educational costs, high scalability, and superior generalizability. The system has been deployed on the Squirrel AI learning platform for elementary mathematics education, where it achieves 78.3 accuracy in error analysis and shows a marked improvement in student learning efficiency. 
Satisfaction surveys indicate a strong positive reception, highlighting the system's potential to transform educational practices.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70053","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147566270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-03-16, DOI: 10.1002/aaai.70057
Jun Wu
{"title":"Distribution shifts in trustworthy machine learning","authors":"Jun Wu","doi":"10.1002/aaai.70057","DOIUrl":"https://doi.org/10.1002/aaai.70057","url":null,"abstract":"<p>This article investigates the impact of distribution shifts in trustworthy machine learning. To this end, we start by summarizing fine-grained types of distribution shifts commonly studied in machine learning communities. To tackle distribution shifts across domains, we present our research across various learning scenarios by enforcing knowledge transferability and trustworthiness. Specifically, we focus on two learning paradigms to improve knowledge transferability: distribution-informed representation learning and distribution-guided information propagation. Besides, we also explore how trustworthiness properties of a learning algorithm are affected by distribution shifts across domains. Finally, we discuss the open questions and future directions for handling distribution shifts in the era of large language models.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-03-07, DOI: 10.1002/aaai.70056
Wesley Brewer, Patrick Widener, Valentine Anantharaj, Feiyi Wang, Tom Beck, Arjun Shankar, Sarp Oral
{"title":"Data readiness pipeline patterns for scientific AI at scale: Insights from climate, fusion, life sciences, and materials","authors":"Wesley Brewer, Patrick Widener, Valentine Anantharaj, Feiyi Wang, Tom Beck, Arjun Shankar, Sarp Oral","doi":"10.1002/aaai.70056","DOIUrl":"https://doi.org/10.1002/aaai.70056","url":null,"abstract":"<p>This article examines how data readiness for AI principles apply to large scientific datasets used to train foundation models. We analyze archetypal workflows across four representative domains—climate, nuclear fusion, life sciences, and materials—to identify common preprocessing patterns and domain-specific constraints. We introduce a two-dimensional readiness model that combines canonical preprocessing patterns with a five-level operational readiness scale, both tailored to high-performance computing (HPC) environments. This construct helps outline key challenges in transforming large-scale scientific data into formats suitable for scalable AI training. Together, these dimensions form a conceptual maturity matrix that characterizes scientific data readiness and guides infrastructure development toward standardized, cross-domain support for scalable and reproducible AI for science. Finally, we evaluate this maturity matrix in the context of case studies including ClimaX (climate), AFLOW (materials), OpenFold (proteomics), and DIII-D fusion disruption-prediction workflows, from which we distill lessons learned and provide recommendations to guide practitioners in developing robust AI-readiness pipelines. 
Finally, we discuss remaining cross-cutting challenges that persist across scientific domains.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70056","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-03-03, DOI: 10.1002/aaai.70055
Jiyoung Kim, Ki Han Kwon
{"title":"Flesh and code: The cinematic lineage of AI replacing humans from Maria to Cassandra","authors":"Jiyoung Kim, Ki Han Kwon","doi":"10.1002/aaai.70055","DOIUrl":"https://doi.org/10.1002/aaai.70055","url":null,"abstract":"<p>This study explores the evolving representation of Artificial Intelligence (AI) characters in media and its intersection with contemporary technological issues, focusing on the paradoxical human desire for emotional and creative replacement. By analyzing films such as the 2023 production M3GAN and the 2025 production Cassandra, the article examines how fictional AI has transitioned to substituted humans that occupy intimate familial and emotional roles. Furthermore, the research investigates AI arts through AI-generated films like 2018 production Zone Out and the recent AI-generated images trend on social media. It argues that while AI offers unprecedented efficiency and aesthetic refinement, it poses significant ethical challenges regarding digital sovereignty, copyright, and the commodification of human identity. Hence, the research emphasizes that the discourse on AI should shift from the technical prowess of mimicry to a normative re-evaluation of human agency, advocating for a creative paradigm that prioritizes the preservation of irreplaceable human values against the tide of unconscious substitution.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-02-21, DOI: 10.1002/aaai.70054
Neil Majithia, Thomas Carey-Wilson, Elena Simperl, Nigel Shadbolt
{"title":"An actionable framework for AI-ready data","authors":"Neil Majithia, Thomas Carey-Wilson, Elena Simperl, Nigel Shadbolt","doi":"10.1002/aaai.70054","DOIUrl":"https://doi.org/10.1002/aaai.70054","url":null,"abstract":"<p>Data is the foundation of AI. Poor-quality data drive up costs and can lead to hidden problems for AI models, especially in complex fields such as healthcare and manufacturing. Meanwhile, biased data negatively affect the performance of AI models, and untested evaluation datasets can result in false positives or overestimates of model accuracy. For data publishers to realize their true potential in supporting the AI ecosystem and its impacts, they should take measures to ensure that their datasets support AI practitioners' needs; in other words, their data should be made AI-ready. In this article, we present a framework for data publishers to follow to make their datasets AI-ready. The framework provides specific, actionable guidance based on previous work and experience at the Open Data Institute and augmented with insights from literature and discussions with a range of experts. We first define AI-ready data before briefly discussing a selection of frameworks in the literature and where they are insufficient. We then provide a visual snapshot of our framework for AI-ready data, and a subsequent in-depth discussion of its criteria. Finally, we demonstrate the usage of our framework with a number of example datasets. 
We conclude by discussing the further steps that should be taken for the entire open data ecosystem to be made AI-ready in order to realize its true potential in supporting an innovative future.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70054","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147320923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-02-08, DOI: 10.1002/aaai.70051
Pyry Pohjalainen, Juho Vepsäläinen
{"title":"Artificial intelligence for web development: Perspectives from the industry","authors":"Pyry Pohjalainen, Juho Vepsäläinen","doi":"10.1002/aaai.70051","DOIUrl":"https://doi.org/10.1002/aaai.70051","url":null,"abstract":"<p>As a field, web development is roughly 30 years old, and during this period, it has been transformed several times already as it has moved from static websites to dynamic web applications. Now, with the introduction of Artificial Intelligence (AI), the field is again at the cusp of a transformation as the latest AI tools might change how to develop for the web yet again. The objective of this study is to look into this phenomenon and understand how AI is changing web development. To achieve this task, we chose to use the sequential qualitative–quantitative design method that combines interviews with a survey to validate and expand our findings from the interviews. We found that AI is used by web developers to increase their development efficiency, as even the current tools are easy to use and access, although they come with several minor downsides, including AI not being able to understand complex logic, the need for validation of AI output, and suggested code that could potentially lead to security issues. 
While there are clear benefits to using AI tools for web development and AI proficiency is a vital skill for web developers, there are still open questions related to the quality of code produced by AI tools.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147268920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-02-08, DOI: 10.1002/aaai.70052
Özkul Haraç, Ayhan Dolunay
{"title":"AI-driven perception management and political soft power: Insights from expert interviews","authors":"Özkul Haraç, Ayhan Dolunay","doi":"10.1002/aaai.70052","DOIUrl":"https://doi.org/10.1002/aaai.70052","url":null,"abstract":"<p>This study explores the role of artificial intelligence (AI) in perception management as an emerging tool of political soft power. Drawing on the theoretical frameworks of social psychology, strategic communication, and political communication, the research investigates how AI-assisted strategies influence public perception, image, and trust in the context of modern statecraft. The study adopts a qualitative design based on semi-structured interviews with 16 experts—eight from psychology and eight from communication fields—selected through snowball sampling. Data were analyzed using qualitative content analysis to identify recurring patterns and thematic structures. The findings reveal four central themes: (1) AI enhances efficiency and precision in perception campaigns, (2) trust and credibility remain critical yet vulnerable dimensions, (3) ethical and governance dilemmas emerge in AI-mediated communication, and (4) human oversight continues to be essential for maintaining legitimacy. The results suggest that while AI strengthens states’ capacity for strategic influence, overreliance without transparency may undermine the very trust it seeks to build. The study contributes to soft power and communication scholarship by providing expert-based evidence on the psychological and strategic mechanisms of AI-driven perception management. 
Policy recommendations are offered to promote transparency, accountability, and ethical oversight in AI-enabled diplomatic practices.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70052","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-01-31, DOI: 10.1002/aaai.70050
Matthew Stewart, Yuke Zhang, Pete Warden, Yasmine Omri, Shvetank Prakash, Jacob Huckelberry, Joao Henrique Santos, Shawn Hymel, Benjamin Yeager Brown, Jim MacArthur, Nat Jeffries, Emanuel Moss, Mona Sloane, Brian Plancher, Vijay Janapa Reddi
{"title":"Datasheets for machine learning sensors","authors":"Matthew Stewart, Yuke Zhang, Pete Warden, Yasmine Omri, Shvetank Prakash, Jacob Huckelberry, Joao Henrique Santos, Shawn Hymel, Benjamin Yeager Brown, Jim MacArthur, Nat Jeffries, Emanuel Moss, Mona Sloane, Brian Plancher, Vijay Janapa Reddi","doi":"10.1002/aaai.70050","DOIUrl":"https://doi.org/10.1002/aaai.70050","url":null,"abstract":"<p>Machine learning (ML) is becoming prevalent in embedded AI sensing systems. These “ML sensors” enable context-sensitive, real-time data collection and decision-making across diverse applications ranging from anomaly detection in industrial settings to wildlife tracking for conservation efforts. As such, there is a need to provide transparency in the operation of such ML-enabled sensing systems through comprehensive documentation. This is needed to enable their reproducibility, to address new compliance and auditing regimes mandated in regulation and industry-specific policy, and to verify and validate the responsible nature of their operation. To address this gap, we introduce the datasheet for ML sensors framework. We provide a comprehensive template, collaboratively developed in academia—industry partnerships, that captures the distinct attributes of ML sensors, including hardware specifications, ML model and dataset characteristics, end-to-end performance metrics, and environmental impacts. Our framework addresses the continuous streaming nature of sensor data, real-time processing requirements, and embeds benchmarking methodologies that reflect real-world deployment conditions, ensuring practical viability. Aligned with the FAIR principles (Findability, Accessibility, Interoperability, and Reusability), our approach enhances the transparency and reusability of ML sensor documentation across academic, industrial, and regulatory domains. 
To show the application of our approach, we present two datasheets: the first for an open-source ML sensor designed in-house and the second for a commercial ML sensor developed by industry collaborators, both performing computer vision-based person detection.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70050","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2026-01-09, DOI: 10.1002/aaai.70037
Erdem Bıyık
{"title":"Training robots with natural and lightweight human feedback","authors":"Erdem Bıyık","doi":"10.1002/aaai.70037","DOIUrl":"https://doi.org/10.1002/aaai.70037","url":null,"abstract":"<p>Generalist robot models promise broad applicability across domains but currently require extensive expert demonstrations for task specialization, which is a costly and impractical barrier for real-world deployment. In this article, which summarizes the author's presentation in the New Faculty Highlights Track of the 39<sup>th</sup> annual AAAI Conference on Artificial Intelligence, we present algorithms that enable non-expert users to adapt and continually improve robot policies through natural and lightweight feedback modalities, such as preference comparisons, rankings, ratings, natural language, and users' own demonstrations, combining them with active learning strategies to maximize data-efficiency. We further introduce methods for leveraging real-time human interventions as rich training signals, modeling both their timing and absence to refine policies continually. Our approaches achieve substantial gains in sample-efficiency, adaptability, and user-friendliness, demonstrated across simulated and real-world robotic tasks. 
By aligning robot learning with how humans naturally teach, we hope to move toward autonomous systems that are more personalized, capable, and deployable in everyday environments.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145964133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ai Magazine, Pub Date: 2025-12-31, DOI: 10.1002/aaai.70048
Mark A. Musen, Martin J. O'Connor, Josef Hardi, Marcos Martínez-Romero
{"title":"Knowledge Engineering for Open Science: Building and Deploying Knowledge Bases for Metadata Standards","authors":"Mark A. Musen, Martin J. O'Connor, Josef Hardi, Marcos Martínez-Romero","doi":"10.1002/aaai.70048","DOIUrl":"https://doi.org/10.1002/aaai.70048","url":null,"abstract":"<p>For more than a decade, scientists have been striving to make their datasets available in open repositories, with the goal that they be findable, accessible, interoperable, and reusable (FAIR). Although it is hard for most investigators to remember all the “guiding principles” associated with FAIR data, there is one overarching requirement: The data need to be annotated with “rich,” discipline-specific, standardized metadata that can enable third parties to understand who performed the experiment, who or what the subjects were, what the experimental conditions were, and what the results appear to show. Most areas of science lack standards for such metadata and, when such standards exist, it can be difficult for investigators or data curators to apply them. The Center for Expanded Data Annotation and Retrieval (CEDAR) builds technology that enables scientists to encode descriptive metadata standards as <i>templates</i> that enumerate the attributes of different kinds of experiments and that link those attributes to ontologies or value sets that may supply controlled values for those attributes. These metadata templates capture the preferences of groups of investigators regarding how their data should be described and what a third party needs to know to make sense of their datasets. CEDAR templates describing community metadata preferences have been used to standardize metadata for a variety of scientific consortia. They have been used as the basis for data-annotation systems that acquire metadata through Web forms or through spreadsheets, and they can help correct metadata to ensure adherence to standards. 
Like the declarative knowledge bases that underpinned intelligent systems decades ago, CEDAR templates capture the knowledge of a community of practice in symbolic form, and they allow that knowledge to be applied in a variety of settings. They provide a mechanism for scientific communities to create shared metadata standards and to encode their preferences for the application of those standards, and for deploying those standards in a range of intelligent systems to promote open science.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"47 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70048","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145887954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}