Ai Magazine: Latest Articles

AI for social science: A sociology PhD candidate's autoethnography on how LLMs are changing research work
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-12-16 DOI: 10.1002/aaai.70046
Shuo Wang
{"title":"AI for social science: A sociology PhD candidate's autoethnography on how LLMs are changing research work","authors":"Shuo Wang","doi":"10.1002/aaai.70046","DOIUrl":"https://doi.org/10.1002/aaai.70046","url":null,"abstract":"<p>Will AI replace social scientists? The real issue concerns reshaping rather than replacement. Confronting the integration of large language models (LLMs) into academic training establishes “prompt engineering” as the core interface for collaboration, defining it as a method to translate sociological thinking into precise instructions. LLMs are becoming essential partners across the research spectrum. They transform qualitative analysis from a solitary craft into a dialogical coding process and assist in theoretical localization and the construction of localized measurement scales. Beyond text analysis, they provide a low-cost virtual testbed for experimental design through “silicon samples” and enable the deduction of complex social interactions via “generative agents.” In the quantitative realm, they act as translators connecting research intentions with statistical code. Ultimately, the core challenge facing researchers is not technical. It lies in proactively cultivating a critical “literacy for human-AI collaboration” to master this paradigm shift.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70046","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145824391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The ETHICAL Protocol for Responsible Use of Generative AI for Research Purposes in Higher Education
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-12-12 DOI: 10.1002/aaai.70047
Ahmed Alduais, Saba Qadhi, Youmen Chaaban, Majeda Khraisheh
{"title":"The ETHICAL Protocol for Responsible Use of Generative AI for Research Purposes in Higher Education","authors":"Ahmed Alduais,&nbsp;Saba Qadhi,&nbsp;Youmen Chaaban,&nbsp;Majeda Khraisheh","doi":"10.1002/aaai.70047","DOIUrl":"https://doi.org/10.1002/aaai.70047","url":null,"abstract":"<p>Generative AI's growing use in higher education research requires strong protocols for responsible use. This need arises from the potential for misuse and the current uncertainty around ethical concerns and intellectual property. The lack of clear rules about openness in AI use, along with the “black box” nature of many AI systems, raises worries about reproducibility and the possibility of biased or fake results. This paper focuses specifically on generative AI tools (e.g., LLMs like ChatGPT, research-specific platforms like Elicit/SciSpace). The paper presents the ETHICAL protocol (i.e., <b>E</b>stablish your purpose, <b>T</b>horoughly explore options, <b>H</b>arness the appropriate tool, <b>I</b>nspect and verify output, <b>C</b>ite and reference accurately, <b>A</b>cknowledge AI usage transparently, and <b>L</b>ook over publisher's guidelines), a detailed guide designed to direct researchers in the ethical and responsible inclusion of generative AI in their work. The protocol was created through a multi-step process, including a scientometric review of current trends, a systematic review of researcher experiences, and a policy analysis of 74 documents from various stakeholders (authorities, universities, publishers, and publication manuals). This analysis shaped the creation of a seven-heading, nine-item checklist covering key aspects of responsible AI use, from setting clear research goals to checking outputs and openly acknowledging AI help. The ETHICAL protocol gives practical examples and detailed explanations for each item, highlighting the importance of AI literacy and careful choice of suitable tools. 
It also stresses the vital need for checking AI-generated content to lessen the risk of errors and made-up information (“hallucinations”). The resulting protocol offers a practical and easy-to-use guide for researchers, encouraging responsible AI practices and upholding academic integrity. The ETHICAL protocol offers a helpful tool for managing the complex area of AI in research, ultimately leading to more open, reliable, and ethically sound scholarly work. Its broad acceptance could greatly improve the responsible use of AI in higher education, building trust and furthering knowledge growth.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70047","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145739767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
How Proportional Representation Can Shape Artificial Intelligence
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-12-03 DOI: 10.1002/aaai.70044
Evi Micha
{"title":"How Proportional Representation Can Shape Artificial Intelligence","authors":"Evi Micha","doi":"10.1002/aaai.70044","DOIUrl":"https://doi.org/10.1002/aaai.70044","url":null,"abstract":"<p>Proportional representation is a foundational principle in social choice theory, ensuring that groups influence collective decisions in proportion to their size. While it has traditionally been studied in the context of political elections, recent work in computational social choice has broadened its scope to a variety of voting frameworks. This article showcases how proportional representation can be formalized and applied beyond these frameworks, spotlighting AI domains where it naturally takes shape. In particular, we focus on two such domains: clustering and AI alignment. In clustering, proportionality ensures that sufficiently large and cohesive groups of data points or agents are adequately represented in the selection of cluster centers or group assignments, to both centroid-based and noncentroid-based paradigms. In AI alignment, particularly in reinforcement learning from human feedback (RLHF), proportionality provides a principled framework for aggregating heterogeneous preferences by designing committees of reward functions that reflect annotators' viewpoints in proportion to their prevalence. 
We also discuss additional promising applications, including client selection in federated learning and forming committees of pre-trained models in meta-learning, and argue that incorporating proportional representation into AI systems provides a mathematically rigorous foundation for aligning algorithmic outcomes with the breadth of human viewpoints.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145686395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
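The proportionality idea in this abstract can be made concrete with the justified representation (JR) test from multiwinner approval voting, one of the standard formalizations in this literature. The approval sets and committee below are illustrative stand-ins (e.g., annotators endorsing candidate reward models), not data or code from the article.

```python
def fails_jr(approvals, committee, n_seats):
    """Return True if some group of >= n/n_seats voters commonly
    approves an unselected candidate while none of its members
    approves anyone on the committee (a JR violation)."""
    threshold = len(approvals) / n_seats
    # voters with no representative on the committee
    unrepresented = [a for a in approvals if not (a & committee)]
    candidates = set().union(*approvals) - committee
    return any(
        sum(1 for a in unrepresented if c in a) >= threshold
        for c in candidates
    )

# four annotators, two seats: selecting only "a" leaves the "b" bloc
# (half the voters, enough to deserve a seat) entirely unserved
approvals = [{"a"}, {"a"}, {"b"}, {"b"}]
print(fails_jr(approvals, {"a"}, 2))       # True: "b" bloc unrepresented
print(fails_jr(approvals, {"a", "b"}, 2))  # False
```

JR is only the weakest guarantee in a hierarchy (EJR, PJR, and proportional clustering analogues strengthen it), but the check above shows the shape of the constraint: group size relative to n/k determines entitlement.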
Human at the Center: A Framework for Human-Driven AI Development
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-12-03 DOI: 10.1002/aaai.70043
Danniell Hu, Diana Acosta Navas, Susanne Gaube, Hussein Mozannar, Matthew E. Taylor, Krishnamurthy Dvijotham, Elizabeth Bondi-Kelly
{"title":"Human at the Center: A Framework for Human-Driven AI Development","authors":"Danniell Hu,&nbsp;Diana Acosta Navas,&nbsp;Susanne Gaube,&nbsp;Hussein Mozannar,&nbsp;Matthew E. Taylor,&nbsp;Krishnamurthy Dvijotham,&nbsp;Elizabeth Bondi-Kelly","doi":"10.1002/aaai.70043","DOIUrl":"https://doi.org/10.1002/aaai.70043","url":null,"abstract":"<p>Artificial Intelligence (AI) systems increasingly shape many aspects of daily life, influencing our jobs, finances, healthcare, and online content. This expansion has led to the rise of human–AI systems, where humans communicate, collaborate, or otherwise interact with AI, such as using AI outputs to make decisions. While these systems have shown potential to enhance human capabilities and improve performance on benchmarks, evidence suggests that they often underperform compared to AI-only or human-only approaches in experiments and real-world applications. Here, we argue that human–AI systems should be developed with a greater emphasis on human-centered factors—such as usability, fairness, trust, and user autonomy—within the algorithmic design and evaluation process. We advocate for integrating human-centered principles into AI development through human-centered algorithmic design and contextual evaluation with real users. Drawing on interdisciplinary research and our tutorial at two major AI conferences, we highlight examples and strategies for AI researchers and practitioners to embed these principles effectively. This work offers a systematic synthesis that integrates technical, practical, and ethical insights into a unified framework. Additionally, we highlight critical ethical considerations, including fairness, labor, privacy, and human agency to ensure that systems meet performance goals while serving broader societal interests. 
Through this work, we aim to inspire the field to embrace a truly human-centered approach to algorithmic design and deployment.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145686394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Foundation Models for Anomaly Detection: Vision and Challenges
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-12-02 DOI: 10.1002/aaai.70045
Jing Ren, Tao Tang, Hong Jia, Ziqi Xu, Haytham Fayek, Xiaodong Li, Suyu Ma, Xiwei Xu, Feng Xia
{"title":"Foundation Models for Anomaly Detection: Vision and Challenges","authors":"Jing Ren,&nbsp;Tao Tang,&nbsp;Hong Jia,&nbsp;Ziqi Xu,&nbsp;Haytham Fayek,&nbsp;Xiaodong Li,&nbsp;Suyu Ma,&nbsp;Xiwei Xu,&nbsp;Feng Xia","doi":"10.1002/aaai.70045","DOIUrl":"https://doi.org/10.1002/aaai.70045","url":null,"abstract":"<p>As data continues to grow in volume and complexity across domains such as finance, manufacturing, and healthcare, effective anomaly detection is essential for identifying irregular patterns that may signal critical issues. Recently, foundation models (FMs) have emerged as a powerful tool for advancing anomaly detection. They have demonstrated unprecedented capabilities in enhancing anomaly identification, generating detailed data descriptions, and providing visual explanations. This survey presents the first comprehensive review of recent advancements in FM-based anomaly detection. We propose a novel taxonomy for classifying FMs in anomaly detection tasks, based on the roles they play within the detection pipeline. Specifically, we categorize FMs as encoders, detectors, or interpreters, reflecting whether they are used for feature extraction, anomaly detection, or result interpretation, respectively. We provide a systematic analysis of state-of-the-art methods and discuss key challenges in leveraging FMs for improved anomaly detection. 
We also outline future research directions in this rapidly evolving field.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145695122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
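The "FM as encoder" role in the survey's taxonomy typically means embedding inputs with a pretrained model and scoring anomalies by distance in the embedding space. The sketch below assumes that pattern with toy 2-D vectors standing in for real foundation-model embeddings; the k-nearest-neighbor scorer is one common choice, not the survey's specific method.

```python
def _dist(u, v):
    # Euclidean distance between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def knn_anomaly_scores(embeddings, k=2):
    """Score each point by its mean distance to its k nearest
    neighbors; larger scores suggest anomalies."""
    scores = []
    for i, e in enumerate(embeddings):
        dists = sorted(_dist(e, o) for j, o in enumerate(embeddings) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# stand-in "embeddings": three points near the origin, one far away
embs = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
scores = knn_anomaly_scores(embs)
print(scores.index(max(scores)))  # 3: the outlier gets the largest score
```

In the survey's terms, only the embedding step would involve the FM; the detector here is a classical distance rule, and an interpreter FM could then be asked to explain why the flagged input differs from its neighbors.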
Imitation learning is probably existentially safe
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-11-21 DOI: 10.1002/aaai.70040
Michael K. Cohen, Marcus Hutter
{"title":"Imitation learning is probably existentially safe","authors":"Michael K. Cohen,&nbsp;Marcus Hutter","doi":"10.1002/aaai.70040","DOIUrl":"https://doi.org/10.1002/aaai.70040","url":null,"abstract":"<p>Concerns about extinction risk from AI vary among experts in the field. However, AI encompasses a very broad category of algorithms. Perhaps some algorithms would pose an extinction risk, and others would not. Such an observation might be of great interest to both regulators and innovators. This paper argues that advanced imitation learners would likely <i>not</i> cause human extinction. We first present a simple argument to that effect, and then we rebut six different arguments that have been made to the contrary. A common theme of most of these arguments is a story for how a subroutine within an advanced imitation learner could hijack the imitation learner's behavior toward its own ends. However, we argue that each argument is flawed and each story implausible.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145581240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What do we want from AI?
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-11-08 DOI: 10.1002/aaai.70042
Robert B. Fisher
{"title":"What do we want from AI?","authors":"Robert B. Fisher","doi":"10.1002/aaai.70042","DOIUrl":"https://doi.org/10.1002/aaai.70042","url":null,"abstract":"<p>Most recent writing about the directions for AI has focused on the potential risks of widespread use of AI and what we DO NOT want from AI. This has led to many, largely ignored, calls for a pause in research and deployment. This essay takes the view that there are too many factors in play to slow the deployment much and for long. Hence, instead, this paper looks at what we DO want from AI (18 principles or goals) and how to get there.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145521580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
RevOrder: A novel equation format for arithmetic operations in language models
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-11-06 DOI: 10.1002/aaai.70038
Si Shen, Peijun Shen, Danhao Zhu
{"title":"RevOrder: A novel equation format for arithmetic operations in language models","authors":"Si Shen,&nbsp;Peijun Shen,&nbsp;Danhao Zhu","doi":"10.1002/aaai.70038","DOIUrl":"https://doi.org/10.1002/aaai.70038","url":null,"abstract":"<p>This paper proposes to understand arithmetic operations in Language Models (LM) by framing them as digit-based reasoning challenges. Our research focuses on arithmetic optimization challenges specific to LLMs, not on solving mathematical word problems. We introduce a metric called the Count of Sequential Intermediate Digits (CSID), which measures the complexity of arithmetic equations by counting the missing steps in digit reasoning. Our empirical findings suggest that increasing the model size does little to improve the handling of equations with high CSID values.</p><p>We propose RevOrder, a method that incorporates techniques such as reversing the output order, step-by-step decomposition, and rollback mechanisms to maintain a low CSID, thereby enhancing the solvability of arithmetic equations in LMs. 
RevOrder also introduces a more compact reasoning process, which reduces the token requirements without affecting the CSID, significantly enhancing token efficiency.</p><p>Comprehensive testing shows that RevOrder achieves perfect accuracy in operations such as addition, subtraction, and multiplication, and substantially improves performance in division tasks, especially with large numbers where traditional models falter.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145469655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
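The reversed-output idea is the part of RevOrder that can be sketched compactly: if the sum is emitted least-significant digit first, each output digit depends only on the current digit pair and a bounded carry, which is what keeps the CSID low. The helper below is an illustrative reconstruction from the abstract alone; the authors' full method (decomposition, rollback, the exact training format) is not reproduced here.

```python
def add_reversed(a: int, b: int) -> str:
    """Add two non-negative ints, emitting digits least-significant
    first -- each digit needs only the current digit pair plus carry."""
    digits, carry = [], 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        digits.append(str(s % 10))
        carry, a, b = s // 10, a // 10, b // 10
    return "".join(digits) or "0"

def to_revorder(a: int, b: int) -> str:
    """Format one training-style equation in (assumed) RevOrder style."""
    return f"{a}+{b}={add_reversed(a, b)}"

print(to_revorder(137, 285))  # 137+285=224  (i.e., 422 with digits reversed)
```

Contrast with normal left-to-right generation: to emit the leading "4" of 422 first, a model must already have resolved every carry to its right, which is exactly the kind of missing intermediate step CSID counts.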
Artificial intelligence and the impact of the EU AI Act in business organizations
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-11-04 DOI: 10.1002/aaai.70039
Marc Selgas Cors, Renata Thiébaut
{"title":"Artificial intelligence and the impact of the EU AI Act in business organizations","authors":"Marc Selgas Cors,&nbsp;Renata Thiébaut","doi":"10.1002/aaai.70039","DOIUrl":"https://doi.org/10.1002/aaai.70039","url":null,"abstract":"<p>Artificial intelligence (AI) is transforming industries worldwide, and the e-commerce sector is at the forefront of leveraging its capabilities to drive innovation and efficiency. The paper explores the integration of artificial intelligence in e-commerce, focusing on the ethical and regulatory implications introduced by the EU AI Act. This legislative framework aims to ensure the responsible deployment of AI by classifying AI systems into risk categories and imposing compliance requirements. It also underscores both the opportunities and challenges that AI presents to businesses, particularly in enhancing consumer experiences through automation and data-driven decision-making processes. The paper provides a comprehensive review of the AI landscape in Europe, analyzing the impact of the EU AI Act, particularly on small and medium-sized enterprises and startups. Through a mixed-methods approach, the study investigates how regulatory compliance may influence business innovation, market competitiveness, and consumer trust. 
The recommendations proposed aim to develop a trustworthy AI ecosystem that could stimulate long-term growth and enhance the global positioning of small European businesses.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145469537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
From rights to runtime: Privacy engineering for agentic AI
IF 3.2, CAS Tier 4, Computer Science
Ai Magazine Pub Date : 2025-10-26 DOI: 10.1002/aaai.70036
Keivan Navaie
{"title":"From rights to runtime: Privacy engineering for agentic AI","authors":"Keivan Navaie","doi":"10.1002/aaai.70036","DOIUrl":"https://doi.org/10.1002/aaai.70036","url":null,"abstract":"<p>Agentic AI shifts stacks from request-response to plan-execute. Systems no longer just answer; they act—planning tasks, calling tools, keeping memory, and changing external state. That shift moves privacy from policy docs into the runtime. This opinion piece argues that we do not need a new privacy theory for agents; we need enforceable, observable controls that render existing rights as product behavior. Anchoring on GDPR—with portable touchpoints to CPRA, LGPD, and PDPA, we propose a developer-first toolkit: optional, bounded, user-visible memory; a purpose-aware egress gate that enforces minimization and transfer rules; proportional safeguards that scale with stakes; and traces that tell a coherent story across components and suppliers. We show how the EU AI Act's risk management, logging, and oversight can scaffold these controls and enable evidence reuse. The result is an agentic runtime that keeps people in control and teams audit-ready by design.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 4","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70036","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145367007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
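The "purpose-aware egress gate" lends itself to a small runtime sketch: before an agent's tool call leaves the system, check its payload against the minimal field set registered for the declared purpose. Everything below (the registry, the field names, raising PermissionError as the block policy) is a hypothetical illustration, not an API from the article.

```python
# hypothetical registry: declared purpose -> minimal allowed field set
ALLOWED_FIELDS = {
    "support_ticket": {"order_id", "issue_text"},
}

def egress_gate(purpose: str, payload: dict) -> dict:
    """Block outbound tool-call payloads that exceed the declared
    purpose (data minimization); the raised error doubles as an
    auditable trace of what was stopped and why."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise PermissionError(f"undeclared purpose: {purpose!r}")
    excess = set(payload) - allowed
    if excess:
        raise PermissionError(f"fields exceed purpose: {sorted(excess)}")
    return payload

egress_gate("support_ticket", {"order_id": 42, "issue_text": "arrived late"})   # passes
# egress_gate("support_ticket", {"order_id": 42, "email": "a@b.example"})       # would raise
```

A production gate would also cover the transfer rules the piece mentions (destination checks, not just field checks), but the deny-by-default shape, unknown purposes fail closed, is the core of the argument.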