Multi-stakeholder perspective on responsible artificial intelligence and acceptability in education.

Alexander John Karran, Patrick Charland, Joé Trempe-Martineau, Ana Ortiz de Guinea Lopez de Arana, Anne-Marie Lesage, Sylvain Sénécal, Pierre-Majorique Léger

npj Science of Learning 10(1), 44 (2025). Published 2025-07-08. DOI: 10.1038/s41539-025-00333-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12238224/pdf/

Abstract
Recognising a need to investigate the concerns and barriers to the acceptance of artificial intelligence (AI) in education, this study explores the acceptability of different AI applications in education from a multi-stakeholder perspective, including students, teachers, and parents. Acknowledging the transformative potential of AI, it addresses concerns related to data privacy, AI agency, transparency, explainability, and ethical deployment of AI. Using a vignette methodology, participants were presented with four scenarios where AI agency, transparency, explainability, and privacy were manipulated. After each scenario, participants completed a survey that captured their perceptions of AI's global utility, individual usefulness, justice, confidence, risk, and intention to use each scenario's AI if it was available. The data collection, comprising a final sample of 1198 participants, focused on individual responses to four AI use cases. A mediation analysis of the data indicated that acceptance and trust in AI vary significantly across stakeholder groups and AI applications.