Multi-stakeholder perspective on responsible artificial intelligence and acceptability in education

IF 3.0 · Region 1 (Psychology) · Q1 · EDUCATION & EDUCATIONAL RESEARCH
Alexander John Karran, Patrick Charland, Joé Trempe-Martineau, Ana Ortiz de Guinea Lopez de Arana, Anne-Marie Lesage, Sylvain Sénécal, Pierre-Majorique Léger
npj Science of Learning, vol. 10, no. 1, p. 44. Published 2025-07-08 (Journal Article). DOI: 10.1038/s41539-025-00333-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12238224/pdf/
Citations: 0

Abstract



Recognising a need to investigate the concerns and barriers to the acceptance of artificial intelligence (AI) in education, this study explores the acceptability of different AI applications in education from a multi-stakeholder perspective, including students, teachers, and parents. Acknowledging the transformative potential of AI, it addresses concerns related to data privacy, AI agency, transparency, explainability, and ethical deployment of AI. Using a vignette methodology, participants were presented with four scenarios where AI agency, transparency, explainability, and privacy were manipulated. After each scenario, participants completed a survey that captured their perceptions of AI's global utility, individual usefulness, justice, confidence, risk, and intention to use each scenario's AI if it was available. The data collection, comprising a final sample of 1198 participants, focused on individual responses to four AI use cases. A mediation analysis of the data indicated that acceptance and trust in AI vary significantly across stakeholder groups and AI applications.

Source journal: CiteScore 5.40 · Self-citation rate 7.10% · Articles published: 29